[Image: OK Computer album cover, via Wikipedia]

I don’t know the exact release date, but I do know that it was right about ten years ago that I first heard OK Computer.

In May of ’97, I had just finished my MFA in Creative Writing at UNC Greensboro, but I had no job prospects. I’d had a job lined up at a small press out of state, run by some dear friends and mentors of mine, but money issues and a new baby meant that, at the last minute, I had to turn the opportunity down. (I handled it horribly, and lost some dear friends because of it.)

My future, or my identity in a sense, felt completely unmoored. The thing I’d assumed for two years I’d be doing after finishing my degree was no longer an option; I’d fallen out of love with teaching, and didn’t really have any good opportunities to do that anyway. All I had going was this nerdy hobby I’d been obsessing over for some years: the Web.

So, I needed a job, and ended up talking my way into a technical writing gig in the registrar’s office of my MFA alma mater. I wouldn’t be editing poems and fiction for a press or a journal (as I’d gotten used to doing and thinking of myself as doing) but writing tutorials and help instructions for complicated, workaday processes and applications. But at least I’d be on the “Web Team” — helping figure out how to better use the Web for the school. I’d been working with computer applications, designing them and writing instructions for them, off and on in my side-job life while I’d been in grad school, so it wasn’t a total stretch. It just wasn’t where I imagined my career would take me.

That summer, in a fit of (possibly misplaced) optimism and generosity, my new employer sent me to a posh seminar in Orlando to learn better Photoshop skills. One of the presenters there was the guy who made some of the most collected X-Files trading cards around — an acknowledged wunderkind of digital mixed-media collage. (Cannot find his name…)

As people filed into the room for this guy’s presentation, he was setting up, with a slideshow of his creepy graphics running onscreen. And this spooky, ethereal, densely harmonic, splintery music was playing over the room’s speakers. I was feeling a little transfixed.

And, of course, when I asked him later what it was, he said it was Radiohead’s OK Computer.

Here’s the thing: I’d heard Radiohead interviewed on NPR by Bob Edwards about a month or so before, where they discussed the new album, the band’s methods, and how they recorded most of it in Jane Seymour’s ancient country mansion. They played clips from it throughout, and I remember thinking, “wow, that’s just too over the top for me… a little too strange. I guess I won’t be getting that album — sounds like experimentation for its own sake.”

It’s just one of a thousand times this has happened to me — conservative, knee-jerk reaction to something, only to come to embrace it later.

Something about this music connected with me on a deep level, at that time in my life and through a lot of what was going on in my own head. It *sounded* like my own head. And to some degree it still does, though now it feels more like a remnant of a younger self. The music still feels quite right, quite relevant — I just hear different things in it now.

So. This just occurred to me. Had to share. I’m on record as a huge Radiohead fan, even though I realize this isn’t exactly a unique thing to be. I’ve found every release of theirs to be fascinating, challenging, and rewarding once it has a chance to settle in. (Not a huge fan of Thom Yorke’s solo effort, but I’m glad it’s out of his system, so to speak — then again, who knows, four years from now it may be my favorite thing ever.)

They have a new album coming out sometime this year, if all stars align correctly. Can’t wait.

UPDATE: See this one on SlideShare. You need to see it full-screen to read the notes.

This is my official plug for the Adaptive Path UX Week in Washington, DC, August 13-17.

I’ll be speaking on Monday, on User Experience Design as a set of Communities of Practice. Basically, an abbreviated and somewhat tweaked version of what I presented at the IA Summit this year.

Hey, DC in August! I hear the hotel has excellent air conditioning :-)

Via Jay Fienberg, via the IAI discussion list, I hear of this excellent post by Professor David Silver about a talk he gave recently on the Web 2.0 meme.

Silver starts out lauding the amazing, communal experience of blogs and mashups of blogs and RSS feeds and other Web 2.0 goodness, and then offers some needed perspective:

then i stepped back and got critical. first, i identified web 2.0 as a marketing meme, one intended to increase hype of and investment in the web (and web consultants) and hinted at its largely consumer rather than communal directions and applications. second, i warned against the presentism implied in web 2.0. today’s web may indeed be more participatory but it is also an outgrowth of past developments like firefly, amazon’s user book reviews, craigslist, and ebay – not to mention older user generated content applications like usenet, listservs, and MUDs. third, i argued against the medium-centricness of the term web 2.0. user generated content can and does exist in other media, of course, including newspapers’ letters to the editor section, talk radio, and viewers voting on reality tv shows. and i ended with my all-time favorite example of user generated content, the suggestion box, which uses slips of paper, pencils, and a box.

I think this is very true, and good stuff to hear. (Even in the peculiar lower-case typing…fun!) Group participation has been growing steadily on the Internet in one form or another for years.

I do think, though, that some tipping point hit in the last few years. Tools for personal expression, simple syndication, a cultural shift in what people expect to be able to do online, and the rise of broadband and mobile web access — the sum has become somehow much greater than its parts.

Still, I think he’s right that the buzzword “Web 2.0” is mainly an excellent vehicle for hype that gets people thinking they need consultants and new books. (Tim O’Reilly is a nice guy, I’m sure, but he’s also a businessman and publisher who knows how to get conversations started.)

Silver mentions Feevy, a sort of ‘live blogroll’ tool for blogs — it has an excerpt of the latest post by each person on your blogroll. Neato tool. I may have to try it out!
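Out of curiosity, here’s roughly how a “live blogroll” like Feevy could work under the hood — a minimal Python sketch, not Feevy’s actual implementation, using the feedparser library. The feed URLs are placeholders:

```python
# A minimal "live blogroll" in the spirit of Feevy: fetch each feed
# on the blogroll and show the newest entry's title and link.
# Requires the third-party feedparser library (pip install feedparser).
import feedparser

# Hypothetical blogroll -- substitute real feed URLs.
BLOGROLL = [
    "http://example.com/alice/rss.xml",
    "http://example.com/bob/atom.xml",
]

def latest_entries(feed_urls):
    """Yield (blog title, post title, link) for each feed's newest entry."""
    for url in feed_urls:
        feed = feedparser.parse(url)
        if feed.entries:  # skip feeds that fail to parse or are empty
            entry = feed.entries[0]  # most feeds list newest first
            yield feed.feed.get("title", url), entry.title, entry.link

if __name__ == "__main__":
    for blog, title, link in latest_entries(BLOGROLL):
        print(f"{blog}: {title} ({link})")
```

In other words: poll everyone’s feed, keep the freshest post apiece, render. Simple enough that it’s a little surprising it took this long for someone to package it up.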

I haven’t been doing much political posting here for a while, in the interest of trying to keep a user-experience design focus, for the most part.

But things are getting weirder and weirder in this land of ours. Or, at least, it’s becoming more clear how weird it’s been for quite a while.

I think many of us already knew that Cheney was creepy and secretive, and that he’d managed to cultivate an unusual amount of power for a VP. But I don’t know that many of us suspected how deep it really goes, or how dark.

Hertzberg gets to the point in the New Yorker:

More than anyone else, including his mentor and departed co-conspirator, Donald Rumsfeld, Cheney has been the intellectual author and bureaucratic facilitator of the crimes and misdemeanors that have inflicted unprecedented disgrace on our country’s moral and political standing: the casual trashing of habeas corpus and the Geneva Conventions; the claim of authority to seize suspects, including American citizens, and imprison them indefinitely and incommunicado, with no right to due process of law; the outright encouragement of “cruel,” “inhuman,” and “degrading” treatment of prisoners; the use of undoubted torture, including waterboarding (Cheney: “a no-brainer for me”), which for a century the United States had prosecuted as a war crime; and, of course, the bloody, nightmarish Iraq war itself, launched under false pretenses, conducted with stupefying incompetence, and escalated long after public support for it had evaporated, at the cost of scores of thousands of lives, nearly half a trillion dollars, and the crippling of America’s armed forces, which no longer overawe and will take years to rebuild.

Of course, I’m sure there are plenty of very humane and decent things Cheney has done in the world. It’s perhaps not fair to judge someone solely on the negatives. But what a list of negatives … I suspect he’s hit a tipping point, pushing him from merely corrupt to, well, evil.

Am I being harsh? Is this rhetoric too strong?

The question then becomes: how bad does it have to be for the rhetoric to be necessary? How corrupt and destructive does a public leader need to be in order to justify demonic, polemical characterization — which is often necessary to jar people’s frames of reference enough that they wake up and see this is not just another administration, that it’s not just garden-variety incompetence or greed?

So, really, that’s what this post is about. That question. I wonder how it felt, in history, for people living in countries that were doing just fine and seemed nice and moderate and sane, but that were on the brink of catastrophe. What did the signs look like?

It seems like, in all the narratives I hear from such situations, regular people kept making excuses for their leaders, or buying into some watered-down version of their leaders’ more extreme views. “Oh, I’m sure it’s not as bad as all that.” “Oh, come on, this is (insert year here) in (insert country or region here) — that could never happen here!”

I remember news reports from Somalia in the early 90s, when reporters walked around in the ruins talking to people who had been poets, artists, teachers, doctors. There was talk of how modern and sane and moderate Somalia had been, how it had been one of the cultural (in a Western sense, I’m sure) jewels of Africa. Turned to blood and rubble.

People want to believe their leaders aren’t “as bad as all that.” Even people who don’t like their current leaders tend to have a sort of boundary that keeps them from thinking their leader could truly be a dictator in the making.

How bad does our administration have to be in order for us to say, out loud, these are criminals, and they must be stopped? And then, even if we do, what then?

Wired has a great story explaining the profound implications of Google Maps and Google Earth, mainly due to the fact that these maps don’t have to come from only one point of view, but can capture the collective frame of reference from millions of users across the globe: Google Maps Is Changing the Way We See the World.

This quote captures what I think is the main world-changing factor:

The annotations weren’t created by Google, nor by some official mapping agency. Instead, they are the products of a volunteer army of amateur cartographers. “It didn’t take sophisticated software,” Hanke says. “What it took was a substrate — the satellite imagery of Earth — in an accessible form and a simple authoring language for people to create and share stuff. Once that software existed, the urge to describe and annotate just took off.”

Some of the article is a little more utopian than fits reality, but that’s just Wired. Still, you can’t deny that it really does change, forever, the way the human geographical world describes itself. The main thing, for me, is the stories: because we’re no longer stuck with a single, 2-dimensional map that can speak from only one or a few frames of reference, we can now look at a given spot of the earth and learn its human context — the stories that happened there to regular people, or to people you might not otherwise know or understand.

It really is amazing what happens when you have the right banana.
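For anyone who wants to poke at that substrate themselves: the “simple authoring language” Hanke mentions is presumably KML, the markup Google Earth reads. Here’s a rough Python sketch — the place, story, and coordinates are all invented for illustration — of the kind of one-placemark annotation the article describes:

```python
# A rough sketch of a personal map annotation: one KML placemark
# pinning a story to a spot on the earth. The name, description,
# and coordinates below are invented for illustration.

PLACEMARK_KML = """<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>My grandfather's orchard</name>
    <description>It stood here until the flood took it.</description>
    <Point>
      <!-- KML wants longitude,latitude -->
      <coordinates>-79.791975,36.072635</coordinates>
    </Point>
  </Placemark>
</kml>
"""

with open("story.kml", "w", encoding="utf-8") as f:
    f.write(PLACEMARK_KML)
# Opening story.kml in Google Earth drops this annotation onto the map.
```

That’s the whole trick: a handful of tags, and a story is stuck to a patch of the planet for anyone to find.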

Danah Boyd is pondering some of the rich, loamy stuff she’s uncovering in her long ethnographic study of young people and social networks.

She’s finding signs that there’s a growing social class/standing divide between Facebook and MySpace among high-school-age kids, and she’s wrestling with precisely what that means.

Thankfully, rather than waiting until it’s all been strained of personality and doubt and pinned to wax as an official “paper,” she’s putting her neck out there and sharing some of the ideas she’s struggling with. You know, starting conversations, asking for feedback.

Of course, lots of people don’t get it. The BBC posted a story about it as if it were a university-vetted “study” — and it’s getting slashdotted and everything, which is bringing tons of people to the blog/essay who don’t understand ethnographic methods, or her approach to social sharing.

But, all that aside, it’s a fascinating post in itself.

The “blog essay” itself is on her site at danah.org. And the regular blog post where she explains it and is collecting comments from the public is on her blog at zephoria.org.

It’s especially interesting to read through these comments. Of course, as I said, lots of people seem uncomfortable with qualitative, raw, conversational research-analysis-in-formation. There are also some who, in their snooty disdain for MySpace and what they see there, unwittingly prove Boyd’s point to a degree.

What a lot of commenters bring up is that there are important differences between these two social engines that may cause some of the results Boyd is seeing. For example, while MySpace allows a user to create an anonymous account and connect to anyone they want, Facebook requires you to be “yourself” on the site, and allows connections only through referrals or pre-existing offline relationships (such as being from the same school). There are other, more subtle rules-based structural differences discussed throughout.

To me, this is central to what information architecture (at least as I see it) is all about: creating structures (whether categories of content or logical rules for what can and can’t be done and how) to channel people in particular ways.

I mean not so much the tabs/categories/taxonomy but the rules-based structures: who can friend whom; whether or not you can use a pseudonym; what channels can be used to form networks; how much you can customize your personal page, etc.
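Just to make that concrete, here’s a toy sketch — my own simplification, invented for illustration, not either site’s actual logic — of how two different friending rules might be encoded. The structural decision is a few lines of rules, but it channels everything that happens downstream:

```python
# A toy model of rules-based social structures. Invented for
# illustration -- not how MySpace or Facebook actually worked.
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    name: str
    school: Optional[str] = None   # None = no verified affiliation
    pseudonymous: bool = False

def open_model_can_friend(a: User, b: User) -> bool:
    # MySpace-like rule: anyone may connect to anyone, pseudonyms welcome.
    return True

def closed_model_can_friend(a: User, b: User) -> bool:
    # Facebook-like rule (circa 2007): real identities only, plus some
    # pre-existing offline relation -- here, attending the same school.
    if a.pseudonymous or b.pseudonymous:
        return False
    return a.school is not None and a.school == b.school

alice = User("alice", school="State U")
sk8r = User("xXsk8rXx", pseudonymous=True)
print(open_model_can_friend(sk8r, alice))    # True
print(closed_model_can_friend(sk8r, alice))  # False
```

One predicate returns True for everybody; the other quietly fences the network along offline social lines. Same feature (“add friend”), wildly different social outcomes.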

I wonder why these kinds of design decisions don’t get talked about more among IAs. Though to be fair, it’s on the rise — there were some great sessions about it at the last Summit.

I can’t help but have a strong gut feeling, though, that the IA of “categorization and organization” of static structures is going to pale, in importance and impact, next to the design decisions behind rules-based structures like these.

One of many articles out in the last few weeks about the new show from David Milch, the man who brought us the glorious Deadwood:
David Milch mines his imperfect past in ‘John From Cincinnati’ – Los Angeles Times

Milch was quite a mess, by his own account, for about 30 years, but then rose above it. I liked this quote:

Milch got sober 8 years ago through “God’s grace,” he said. “To me, sobriety is taking the world as I find it. Trying to glorify it in its complexity, its reality, its beauty, its horror, and not try to judge it.”

It’s not just a great way to define sobriety; to me it sounds like possibly the best definition of Faith I’ve ever heard.

I’m not a fan of surfing movies or shows, I hate beaches, and “surf-noir” sounds like, I dunno, Beach Boys in a minor key? But I’m going to give “John” a chance — after all, before Deadwood, I never thought I’d watch another TV Western.

Fascinating post in Danger Room about a new War College research paper explaining that insurgencies aren’t a species of conventional warfare at all, but something very different. Definitely check out the post, but here’s an interesting tidbit:

…the dynamics of contemporary insurgency are more like a violent and competitive market than war in the traditional sense where clear and discrete combatants seek strategic victory.

So here’s an interesting syllogism: If Markets are Conversations, and Insurgencies are Markets, then are Insurgencies = Conversations?

From what the report says, it might be the best way to think of them. The report essentially recommends playing neutral mediator — even if you think one side is better than the other.

This makes me wonder if anybody involved in dealing with Iraq ever paid attention back in the 80s when Hill Street Blues was on. When I was a kid, I remember thinking how strange it was to see cops in a room with “bad guy” gang leaders, negotiating things like truces. I thought: “The bad guys are right there, why don’t you arrest them??” But I realized soon enough that they’d only be replaced by more bad-guy leaders, and that until they brought a modicum of peace between the gangs, they would never manage to reduce violent crime in the city.

Of course, that’s the somewhat idealized TV version, which is much less messy than real life. But isn’t it still a great idea that often works? Or at least, isn’t it an idea that should be tried first, before you just try crushing the bad guys?

I finally got a chance to listen to Bruce Sterling’s rant for SXSW 2007 via podcast as I was driving between PA and NC last week.

There were a lot of great things in it. A number of people have taken great notes and posted them (here’s one example). It’s worth a listen either way — as are all of his talks. I like how Bruce is at a point where he’s allowed to just spin whatever comes to mind for an hour to a group of people. Not because all of it is gold — but because the dross is just as interesting as the gold, and just as necessary.

A lot of this year’s talk was on several books he’s reading, one of which is Yochai Benkler’s The Wealth of Networks. It’s fascinating stuff — and makes me want to actually read this thing. (It’s available online for free — as are some excellent summaries of it, and a giant wiki he set up.)

In the midst of many great lines, one of the things Sterling said that stuck with me was this (likely a paraphrase):

“The distinctions just go away if you’re given powerful-enough compositing tools.”

He was talking about commons-based peer production — things like mashups and remixes, fan art, etc., and how the distinctions between various media (photography, painting, particular instruments, sculpture, etc.) blur when you can just cram things together so easily. He said that it used to be you’d work in one medium or genre or another, but now “Digital tools are melting media down into a slum gully.”

First, I think he’s being a little too harsh here. There have always been amateurs who create stuff for and with their peers, and they all think it’s great in a way that has more to do with their own bubble of mutual appreciation than any “universal” measure of “greatness.” It just wasn’t available for everyone to see online across the globe. I’ve been in enough neighborhood writers’ circles and seen enough neighborhood art club “gallery shows” to know this. I’m sure he has too. This is stuff that gives a lot of people a great deal of satisfaction and joy (and drama, but what doesn’t?). It’s hard to fault it — it’s not like it’s going to take over the world somehow.

I think his pique has more to do with how the “Wired Culture” at large (the SXSW-attending aficionados and pundits) seems to be enamored with it, lauding it as some kind of great democratizing force for creative freedom. But that’s just hype — so all you really have to do is say “we’ll get over it” and move on.

Second, though, is the larger implication: a blurring of long-standing assumptions and cultural norms in communities of creative and design practice. Until recently, media have changed so slowly in human history that we could take for granted the distinctions between photography, design, architecture, painting, writing, and even things like information science, human factors, and programming.

But if you think of the Web as the most powerful “compositing tool” ever invented, it starts to be more clear why so many professions / practices / disciplines are struggling to maintain a sense of identity — of distinction between themselves and everyone else. It’s even happening in corporations, where Marketing, Technical Writing, Programming and these wacky start-up User-Experience Design people are all having to figure each other out. The Web is indeed a digital tool that is “melting” things down, but not just media.

My obsession with what I call the “game layer” aside, it’s interesting that the mainstream press is now reporting on how using “game mechanics” in business software can create more engaging & useful ways of working with data, collaborating, and getting work done.

Why Work Is Looking More Like a Video Game – New York Times

Rave adapts a variety of gaming techniques. For instance, you can build a dossier of your clients and sales prospects that includes photographs and lists of their likes, dislikes and buying interests, much like the character descriptions in many video games. Prospects are given ratings, not by how new they are — common in C.R.M. programs — but by how likely they are to buy something. All prospects are also tracked on a timeline, another gamelike feature.
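A back-of-the-napkin sketch of the mechanic being described — my own invention, not Rave’s actual data model — might look like this: the gamelike move is simply scoring prospects on buying signals rather than recency, and keeping every touch on a timeline.

```python
# A back-of-the-napkin sketch of "game mechanics" in a CRM:
# rate prospects by buying signals, not by how new they are, and
# log every interaction on a timeline. Invented for illustration;
# this is not Rave's actual model.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Prospect:
    name: str
    likes: List[str] = field(default_factory=list)                  # the "dossier"
    timeline: List[Tuple[str, str]] = field(default_factory=list)   # (date, event)
    buy_signals: int = 0   # demos requested, replies, referrals...

    def rating(self) -> int:
        # Gamelike score, capped at 5 "stars," driven by signals, not recency.
        return min(5, self.buy_signals)

p = Prospect("Acme Corp", likes=["golf", "gadgets"])
p.timeline.append(("2007-06-01", "requested a demo"))
p.buy_signals += 1
print(p.name, "rating:", p.rating())   # Acme Corp rating: 1
```

Nothing exotic — just character sheets, scores, and an event log. Which is sort of the point: once the work is data, the game vocabulary fits it naturally.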

(Thanks, Casey, for the link!)

Wow! Evidently Architectures of Conversation is (at the moment of this posting) SlideShare’s 17th Most Favorited this month.

Yes… 17th… it’s a nice, prime number.

I just want to thank the little people. And point out that it took the Web to turn “Favorite” into a verb.

[Image: the glider emblem]

This is delightful. A sort of logo for hacker culture. Not hackers as in criminals (hacker culture calls those people ‘crackers’ among other things) but hackers as in lateral-thinking technology heads.

The graphic … is called a glider. It’s a pattern from a mathematical simulation called the Game of Life. In this simulation, very simple rules about the behavior of dots on a grid give rise to wonderfully complex emergent phenomena. The glider is the simplest Life pattern that moves, and the most instantly recognizable of all Life patterns.

I love this emblem because it really does reference so many things I adore about the internet, what’s happening on it, and the culture that I believe to be the beating heart of it.

Here’s some of the explanation from the Frequently Asked Questions about the Glider Emblem:

The glider is an appropriate emblem on many levels. Start with history: the Game of Life was first publicly described in Scientific American in 1970. It was born at almost the same time as the Internet and Unix. It has fascinated hackers ever since.
In the Game of Life, simple rules of cooperation with what’s nearby lead to unexpected, even startling complexities that you could not have predicted from the rules (emergent phenomena). This is a neat parallel to the way that startling and unexpected phenomena like open-source development emerge in the hacker community… The glider fulfils the criteria for a good logo. It’s simple, bold, hard to mistake for anything else, and easy to print on a mug or T-shirt. It could be varied, combined with other emblems, or modified and infinitely repeated for use as a background
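If you’ve never seen the Game of Life run, the rules really are that simple: a live cell with two or three live neighbors survives, and a dead cell with exactly three is born. Here’s a minimal Python sketch that steps a glider across a small grid:

```python
# Conway's Game of Life, minimally: a live cell with 2 or 3 live
# neighbors survives; a dead cell with exactly 3 neighbors is born.
# The glider translates one cell diagonally every 4 generations.
from collections import Counter

GLIDER = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}

def step(live):
    """Advance one generation; `live` is a set of (x, y) cells."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

def show(live, size=8):
    for y in range(size):
        print("".join("#" if (x, y) in live else "." for x in range(size)))
    print()

cells = GLIDER
for _ in range(5):  # watch it walk
    show(cells)
    cells = step(cells)
```

Two rules, a dozen lines, and out crawls a creature nobody explicitly designed. Hard to think of a better emblem for emergence — or for hackers.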

I’ve been going on and on about how the internet has added a “game layer” to the world we live in: a sort of subcutaneous skin of data that connects everything, and mirrors the logic of our world. (Hence the number of friends you have on MySpace; the location you’re twittering from on Twitter; which songs you listen to the most on your iPod; the ability to track a UPS package at every turn; and on and on.) Everything we attach to the network becomes more data, and if it’s data, it’s game-able.

Hacking itself is a kind of game, and the culture is very playful. I can’t get enough of this idea that “play” and “game,” once expanded some in their meaning and context, show us entirely new frames of reference that help explain what’s happening in the world.
