When I first heard about the Kozinski story (some mature content in the story), it was on NPR’s All Things Considered. The interviewer spoke with the LA Times reporter, who went on about how the judge had “published” offensive material on a “public website.”

I won’t go into detail on the story itself. But I’d urge anyone to take the LA Times article with a grain or two of salt. Evidently, the whole thing got started when someone with an ax to grind against the judge sent links and info to the media, which then went on to make it all look as horrible as possible. And the more we learn about the details of the case, the more it sounds like the LA Times twisted the truth a great deal. **

To me, though, the content issue isn’t as interesting (or challenging) as the “public website” idea.

Basically, this was a web server with an IP and URL on the Internet that was intended for family to share files on, and whatever else (possibly email server too? I don’t know). It’s the sort of thing that many thousands of people run — I lease one of my own that hosts this blog. But the difference is that Kozinski (or, evidently, his grown son) set it up to be private for just their use. Or at least he thought he had — he didn’t count on a disgruntled individual looking beyond the “index” page (that clearly signaled it as a private site) and discovering other directories where images and what-not were listed.

Lawrence Lessig has a great post here: The Kozinski mess (Lessig Blog). He makes the case that this wasn’t a ‘public’ site at all, since it wasn’t intended to be public. You could only see this content if you typed various additional directories onto the base URL. Lessig likens it to having a faulty lock on your front door, and someone snooping through your private stuff and then telling everyone about it. (Saying it was an improperly installed lock would be more accurate, IMHO.)
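
To make the “lock” metaphor concrete in web-server terms: I have no idea what software the Kozinski family server actually ran, but here’s a minimal sketch using Python’s built-in http.server that shows how easily this happens. A bare index page “hides” nothing, because any directory without its own index file gets an auto-generated listing unless you explicitly turn that behavior off.

```python
# A minimal sketch, NOT the actual Kozinski setup (the coverage never said
# what server software was involved). Python's built-in http.server behaves
# much like the site described above: if a directory holds an index.html it
# serves that, but any other directory gets an auto-generated file listing.
# Overriding list_directory is the "properly installed lock" -- browsing
# stops at whatever pages you explicitly publish.
from http.server import HTTPServer, SimpleHTTPRequestHandler


class NoListingHandler(SimpleHTTPRequestHandler):
    """Serve files as usual, but refuse to auto-generate directory listings."""

    def list_directory(self, path):
        # Without this override, a visitor who guesses a path like /stuff/
        # gets a clickable index of everything in that folder.
        self.send_error(403, "Directory listing disabled")
        return None


if __name__ == "__main__":
    # Serves the current working directory at http://localhost:8000
    HTTPServer(("", 8000), NoListingHandler).serve_forever()
```

In other words, the “lock” in question usually comes down to one line of configuration (or one overridden method) that the person setting up the server either knows about or doesn’t.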

The comments on the page go on and on — much debate about the content and the context, private and public and what those things mean in this situation.

One point I don’t see being made (possibly because I didn’t read it all) is that there’s now a difference between “public” and “published.”

It used to be that anything extremely public — that is, able to be seen by more than just a handful of people — could only be there if it was published that way on purpose. It was impossible for more than just the people in physical proximity to hear you, see you or look at your stuff unless you put a lot of time and money into making it that way: publishing a book, setting up a radio or TV station and broadcasting, or (on the low end) using something like a CB radio to purposely send out a public signal (and even then, laws limited the power and reach of such a device).

But the Internet has obliterated that assumption. Now, we can do all kinds of things intended for a private context that unwittingly end up more public than we meant. By now almost everyone online has sent an email to more people than they intended, or accidentally sent a private note to everyone on Twitter. Or perhaps you’ve published a blog article you thought only a few regular readers would see, only to find out that others read it and were offended because they didn’t get the context.

We need to distinguish between “public” and “published.” We may even need to distinguish between various shades of “published” — the same way we legally distinguish between shades of personal injury — by determining intent.

There’s an informative thread over at Groklaw as well.

**About the supposedly pornographic content, I’ll only say that it sounds like there was no “pornography” as typically understood on the judge’s server, only content that had accumulated from the many “bad-taste jokes” that get passed around the net all the time. That is, nothing more offensive than you’d see on an episode of Jackass or South Park. Whether that sort of thing is your cup of tea, and whether you think it is harmfully degrading to some segment of society, is entirely your call. Some of the items described are things I roll my eyes at as silly, vulgar humor, and then forget about. But describing a video (currently on YouTube) in which an amorously confused donkey tries to mount a guy who was (inadvisedly) trying to relieve himself in a field as “bestiality” is pretty absurd. Monty Python it ain’t; but Caligula it ain’t either.

Everybody’s linking to this article today, but I had to share a chunk of it that gave me goosebumps. It’s this bit from Leonard Kleinrock:

September 2, 1969, is when the first I.M.P. was connected to the first host, and that happened at U.C.L.A. We didn’t even have a camera or a tape recorder or a written record of that event. I mean, who noticed? Nobody did. . . . on October 29, 1969, at 10:30 in the evening, you will find in a log, a notebook log that I have in my office at U.C.L.A., an entry which says, “Talked to SRI host to host.” If you want to be, shall I say, poetic about it, the September event was when the infant Internet took its first breath.

IDEA 2008


I’d like to encourage everyone to attend IDEA 2008, a conference (organized by the IA Institute) that’s been getting rave reviews from attendees since it started in 2006. It’s described as “A conference on designing complex information spaces of all kinds” — and it’s happening in grand old Chicago, October 7-8, 2008.

Speakers on the roster include people from game design, interaction design and new-generation advertising/marketing, and the list is growing, including (for some reason) my own self. I think I’m going to be talking about how context works in digital spaces … but I have until October, so who knows what it’ll turn into?

IDEA is less about the speakers, though, than the topics they spark, and the intimate setting of a few hundred folks all seeing the same presentations and having plenty of excuses to converse, dialog and generally brou some haha.

This is based on a slide I’ve been slipping into decks for over a year now as a “quick aside” comment; but it’s been bugging me enough that I need to get it out into a real blog post. So here goes.

We hear the words Strategy and Innovation thrown around a lot, and often we hear them said together. “We need an innovation strategy.” Or perhaps “We need a more innovative strategy,” which, of course, is a different animal. But I don’t hear much questioning of what exactly we mean when we say these things. It’s as if we all agree already on what we mean by strategy and innovation, and that the two just fit together automatically.

There’s a problem with this assumption. The more I’ve learned about Communities of Practice, the more I’ve come to understand how innovation happens. And I’ve come to the conclusion that strategy and innovation aren’t cut from the same cloth.

strategy and innovation

1. Strategy is top-down; Innovation is bottom-up

Strategy is a top-down approach. In every context I can think of, strategy is about someone at the top of a hierarchy planning what will happen, or what patterns will be invoked to respond to changes on the ground. Strategy is programmed, the way a computer is programmed. Strategy is authoritative and standardized.

Innovation is an emergent event; it happens when practitioners “on the ground” have worked on something enough to discover a new approach in the messy variety of practitioner effort and conversation. Innovation only happens when there is sufficient variety of thought and action; it works more like natural selection, which requires lots of mutation. Innovation is, by its nature, unorthodox.

2. Strategy is defined in advance; Innovation is recognized after the fact

While a strategy is defined ahead of time, nobody seems able to plan what an innovation will be. In fact, many (or most?) innovations are serendipitous accidents, or emerge from a side project that wasn’t part of the top-down-defined workload to begin with. This is because the string of events that leads to an innovation is never a truly rational, logical or linear process. We don’t even recognize the result as an innovation until after it’s already happened, because whether something is an innovation or not depends on its usefulness once it’s been experienced in context.

We fill in the narrative afterwards — looking back on what happened, we create a story that explains it for us, because our brains need patterns and stories to make sense of things. We “reify” the outcome and assume there’s a process behind it that can be repeated. (Just think of Hollywood, and how it tries to reproduce the success of surprise-hit films that nobody thought would succeed until they became successful.) I discuss this more in a post here.

3. Strategy plans for success in known circumstances; Innovation emerges from failure in unknown circumstances

One explicit aim of a strategy is to plan ahead of time to limit the chance of failure. Strategy is great for things that have to be carried out with great precision according to known circumstances, or at least predicted circumstances. Of course strategy is more complex than just paint-by-numbers, but a full-fledged strategy has to have all predictable circumstances accounted for with the equivalent of if-then-else statements. Otherwise, it would be a half-baked strategy. In addition, strategy usually aims for the highest level of efficiency, because carrying something off with the least amount of friction and “wasted” energy often makes the difference between winning and losing.

However, if you dig underneath the veneer of the story behind most innovations, you find that there was trial and error going on behind the scenes, and lots of variety happening before the (often accidental) eureka moment. And even after that eureka moment, the only reason we think of the outcome as an innovation is because it found traction and really worked. For every product or idea that worked, there were many that didn’t. Innovation sprouts from the messy, trial-and-error efforts of practitioners in the trenches. Bell Labs, Xerox PARC and other legendary fonts of innovation were crucibles of this dynamic: whether by design or accident, they had the right conditions for letting their people try and fail often enough and quickly enough to stumble upon the great stuff. And there are few things less efficient than trial and error; innovation, or the activity that results in innovation, is inherently inefficient.

So Innovation and Strategy are incompatible?

Does this mean that all managers can do is cross their fingers and hope innovation happens? No. What it does mean is that having an innovation strategy has nothing to do with planning or strategizing the innovation itself. To misappropriate a quotation from Ecclesiastes, such efforts are all in vain and like “striving after wind.”

Managing for innovation requires a more oblique approach, one which works more directly on creating the right conditions for innovation to occur. And that means setting up mechanisms where practitioners can thrive as a community of practice, and where they can try and fail often enough and quickly enough that great stuff emerges. It also means setting up mechanisms that allow the right people to recognize which outcomes have the best chance of being successes — and therefore, end up being truly innovative.

I’m as tired of hearing about Apple as anyone, but whenever innovation comes up, so does Apple. We tend to think of Apple as linear, controlled and very top-down. The popular imagination seems to buy into a mythic understanding of Apple — that Steve Jobs has some kind of preternatural design compass embedded in his brain stem.

Why? Because Jobs treats Apple like theater, and keeps all the messiness behind the curtain. This is one reason why Apple’s legal team is so zealous about tracking down leaks. For people to see the trial and error that happens inside the walls would not only threaten Apple’s intellectual property, it would sully its image. But inside Apple, the strategy for innovation demands that design ideas be generated in multitudes, like fish eggs, because they’re all run through a sort of artificial natural-selection mechanism that kills off the weak and lets only the strongest ideas rise to the top. (See the Business Week article describing Apple’s “10 to 3 to 1” approach.)

Google does the same thing, but they turn the theater part inside-out. They do a modicum of concept-vetting inside the walls, but as soon as possible they push new ideas out into the marketplace (their “Labs” area) and leverage the collective interest and energy of their user base to determine if the idea will work or not, or how it should be refined. (See accounts of this philosophy in a recent Fast Company article.) People don’t mind using something at Google that seems to be only half-successful as a design, because they know it’ll be tweaked and matured quickly. Part of the payoff of using a Google product is the fun of seeing it improved under your very fingertips.

One thing I wonder: to what extent do any of these places treat “strategy” as another design problem to be worked out in the bottom-up, emergent way that they generate their products? I haven’t run across anything that describes such an approach.

At any rate, it’s possible to have an innovation strategy. It’s just that the innovation and the strategy work from different corners of the room. Strategy sets the right conditions, oversees and cultivates the organic mass of activity happening on the floor. It enables, facilitates, and strives to recognize which ideas might fit the market best — or strives to find low-impact ways for ideas to fail in the marketplace in order to winnow down to the ones that succeed. And it’s those ideas that we look back upon and think … wow, that’s innovation.

In the closing talk for this year’s IA Summit, I had a slide explaining the various layers of what we mean when we use the term “Information Architect” (or “Information Architecture”). I think it’s important to be self-aware about this, because it helps us avoid a lot of wasted breath and miscommunication.

But I also stressed that I don’t think this model is only true of IA. So please, feel free to replace “IA” in the diagram with the name of any practice, profession or domain of work.

To understand this diagram, especially the part about Practice, it helps to have a basic understanding of what “practice” is and how it emerges from a community that coalesces around a shared concern. The Linkosophy deck gets into that, and my UX as Communities of Practice deck does as well, while getting into more detail about the participation/reification dynamic Wenger describes in his work.

Here’s the model: I’ll do a bit of explanation after the jump.

title and role stack (small version)


The granddaddy of the Internet clarifies a popular misconception.

What I’ve Learned: Vint Cerf
Al Gore had seen what happened with the National Interstate and Defense Highways Act of 1956, which his father introduced as a military bill. It was very powerful. Housing went up, suburban boom happened, everybody became mobile. Al was attuned to the power of networking much more than any of his elective colleagues. His initiatives led directly to the commercialization of the Internet. So he really does deserve credit.

Something tells me you won’t hear this quoted on Fox News. (Or from hardly anyone else, probably.)

In the “Linkosophy” talk I gave on Monday, I suggested that a helpful distinction between the practices of IxD & IA might be that IxD’s central concern is within a given context (a screen, device, room, etc) while IA’s central concern is how to connect contexts, and even which contexts are necessary to begin with (though that last bit is likely more a research/meta concern that all UX practices deal with).

But one nagging question on a lot of people’s minds seems to be “where did these come from? haven’t we been doing all this already but with older technology?”

I think we have, and we haven’t.

Both of these practices build on earlier knowledge & techniques that emerged from practices that came before. Card sorting & mental models were around before the IA community coalesced around the challenges of infospace, and people were designing devices & industrial products with their users’ interactions in mind long before anybody was in a community that called itself “Interaction Designers.” That is, there were many techniques, methods, tools and principles already in the world from earlier practice … but what happened that sparked the emergence of these newer practice identities?

The key catalyst for both, it seems to me, was the advent of digital simulation.

For IA, the digital simulation is networked “spaces” … infospace that’s made of bits and not atoms, where people cognitively experience one context’s connection to another as moving through space, even though it’s not physical. We had information, and we had physical architecture, but they weren’t the same thing … the Web (and all web-like things) changed that.

For IxD, the digital simulation is with devices. Before digital simulation, devices were just devices — anything from a deck chair to an umbrella, or a power drill to a jackhammer, was a three-dimensional, industrially made product with real switches, real handles, real feedback. We didn’t think of them as “interactive” or as having “interfaces” — because three-dimensional reality is *always* interactive, and it needs no “interface” to translate human action into non-physical effects. Designing these things is “Industrial Design” — and it’s been around for quite a while (though, frankly, only a couple of generations).

The original folks who quite consciously organized around the collective banner of “interaction designer” are digital-technology-centric designers. Not to say that they’ve never worked on anything else … but they’re leaders in that practitioner community.

Now, this is just a comment on origins … I’m not saying they’re necessarily stuck there.

But, with the digital-simulation layer soaking into everything around us, is it really so limiting to say that’s the origin and the primary milieu for these practices?

Of course, I’m not trying to build silos here — only clarify for collective self-awareness purposes. It’s helpful, I believe, to have shared understanding of the stories that make up the “history of learning and making” that forms our practices. It helps us have healthier conversations as we go forward.

Linkosophy

In 2008 I had the distinct honor to present the closing plenary for the IA Summit in Miami, FL. Here’s the talk in its entirety. Unfortunately the podcast version was lost, so there’s no audio version, but 99% of what I had to say is in the notes.

NOTE: To make sense of this, you’ll need to read the notes in full-screen mode. (Or download the 6 MB PDF version.)

(Thanks to David Fiorito for compressing it down from its formerly gigantic size!)

Giving this talk at the IA Summit was humbling and a blast; I’m so grateful for the positive response, and the patience with these still-forming ideas.

If you’re after some resources on Communities of Practice and the like, see the post about the previous year’s presentation which has lots of meaty links and references.

Hey, I’m Andrew! You can read more about who I am on my About page.

If I had a “Follow” button on my forehead, and you met me in person and pushed that button, I’d likely give you a card that had the following text written upon it:

Here’s some explanation about how I use Twitter. It’s probably more than you want to read, and that’s ok. This is more a personal experiment in exploring network etiquette than anything else. If you’re curious about it and read it, let me know what you think?

Disclaimers

  • I use Twitter for personal expression & connection; self-promotion & “personal brand” not so much (that’s more my blog’s job, but even there not so much).
  • I hate not being able to follow everyone I want to, but it’s just too overwhelming. There’s little rhyme/reason to whom I follow or not. Please don’t be offended if I don’t follow you back, or if I stop following for a while and then start again, or whatever. I’d expect you to do the same to me. All of you are terribly interesting and awesome people, but I have limited attention.
  • Please don’t assume I’ll notice an @ mention within any time span. I sometimes go days without looking.
  • Direct-messages are fine, but emails are even better and more reliable for most things (imho).
  • If you’re twittering more than 10 tweets a day, I may have to stop following just so I can keep up with other folks.
  • If you add my feed, I will certainly check to see who you are, but if there’s zero identifying information on your profile, why would I add you back?

A Few Guidelines for Myself (that I humbly consider useful for everybody else too ;-)

  • I’ll try to keep tweets to about 10 or less a day, to avoid clogging my friends’ feeds.
  • I’ll avoid doing scads of “@” replies, since Twitter isn’t a great conversation mechanism, but is pretty ok as an occasional comment-on-a-tweet mechanism.
  • I won’t use any automated mechanism to track who “unfollows” me. And if I notice you dropped me, I won’t think about it much. Not that I don’t care; just seems a waste of time worrying about it.
  • I won’t try to game Twitter, or work around my followers’ settings (such as defeating their @mentions filter by putting something before the @, forcing them to see replies they’d otherwise have filtered out).
  • I’ll avoid doing long-form commentary or “live-blogging” using Twitter, since it’s not a great platform for that (RSS feed readers give the user the choice to read each poster’s feed separately; Twitter feed readers do not, and allow over-tweeting to crowd out other voices on my friends’ feeds.)
  • I’ll post links to things only now and then, since I know Twitter is very often used in (and was intended for) mobile contexts that often don’t have access to useful web browsers; and when I do, I’ll give some context, rather than just “this is cool …”
  • I will avoid using anything that automatically Tweets or direct-messages through my account; these things simply offend me (e.g. if I point to a blog post of mine, I’ll actually type a freaking tweet about it).
  • In spite of my best intentions, I’ll probably break these guidelines now and then, but hopefully not too much, whatever “too much” is.

Thanks for indulging my curmudgeonly Twitter diatribe. Good day!

Since so much of our culture is digitized now, we can grab clippings of it and spread it all over our identities the way we used to decorate our notebooks with stickers in grade school. Movies, music, books, periodicals, friends, and everything else. Everything that has a digital referent or avatar in the pervasive digital layer of our lives is game for this appropriation.

I just ran across a short post on honesty in playlists.

The what-I’m-listening-to thing always strikes me as aspirational rather than documentary. It’s really not “what I’m listening to” but rather “what I would be listening to if I were actually as cool as I want you to think I am.”

And my first thought was: but where, in any other part of our lives, are we that “honest”?

Don’t we all tweak our appearances in many ways — both conscious and unconscious — to improve the image we present to the world? Granted, some of us do it more than others. But everybody does it. Even people who say they’re *not* like this actually are … to choose to be style-free is a statement just as strong as being style-conscious, because it’s done in a social context too, either to impress your other style-free, logo-hating friends, or to define yourself over-against the pop-culture mainstream.

Now, of course it would be dishonest to list favorite movies and books and music that you neither consume nor even really like. But my guess is a very small minority do that.

Our decorations have always been aspirational. Always. From idealizing the hunt in cave wall drawings, to the Renaissance middle-class home hung with still-life paintings of goods the owners couldn’t afford, all the way to choosing which books to put on the eye-level shelves in your apartment, or making a cool playlist of music for a party. We never expose *everything* in our lives; we always select subsets that tell others particular things about us.

The digital world isn’t going to be any different.

(See earlier post on Flourishing.)

gygax calls in a paladin

IASummit 2008

Meet me at the IA Summit
Some very nice and well-meaning people have asked me to deliver the closing plenary at the IA Summit conference this year, in Miami.

This is, as anyone who has been asked to do such a thing will tell you, a mixed blessing.

But I’m slogging through my insanely huge bucket of random thoughts from the last twelve months to surface the stuff that will, I dearly hope, be of interest and value to the crowd. Or, at the very least, keep their hungover cranial contents entertained long enough to stick around for Five-Minute Madness.

“Linkosophy” is a homely title. But it’s a hell of a lot catchier than “Information Architecture’s Role in the UX Context: What Got It Here, What It’s About, and Where It Might Be Headed.” Or some such claptrap.

Here’s the description and a link:

Closing Plenary: Linkosophy
Monday April 14 2008, 3:00 – 4:00PM

At times, especially in comparison to the industrial and academic disciplines of previous generations, the User Experience family of practices can feel terribly disorganized: so little clarity on roles and responsibilities, so much dithering over semantics and orthodoxy. And in the midst of all this, IA has struggled to explain itself as a practice and a domain of expertise.

But guess what? It turns out all of this is perfectly natural.

To explain why, we’ll use IA as an example to learn about how communities of practice work and why they come to be. Then we’ll dig deeper into describing the “domain” of Information Architecture, and explore the exciting implications for the future of this practice and its role within the bigger picture of User Experience Design.

In addition, I’ve been dragooned (but in a nice way … I just like saying “dragooned”) to participate in a panel about “Presence, identity, and attention in social web architecture” along with Christian Crumlish, Christina Wodtke, and Gene Smith, three people who know a heck of a lot more about this than I do. Normally when people ask me to talk about this topic, I crib stuff from slides those three have already written! Now I have to come up with my own junk. (Leisa Reichelt is another excellent thinker on this “presence” stuff, btw. And since she’s not going to be there, maybe I’ll just crib *her* stuff? heh… just kidding, Leisa. Really.)

Seriously, it should be a fascinating panel — we’ve been discussing it on a mailing list Christian set up, so there should be some sense that we actually prepared for it.
