What exactly are the Ides of March?
The Believer is a cool magazine, and this is a fun interview… well, if you’re entertained by philosophy chatter.
It kinda made me feel like I was in college again.
And I agree… screw Descartes’ “cogito”!!
Introducing the Information Architecture Institute
To achieve wider recognition for information architecture, the Institute’s leadership embarked on a process to create a new identity. While the AIfIA name has been well-received and well-known in the user experience community, the name has little equity in the world beyond. Difficulty with spelling and pronouncing AIfIA led us to look for a simpler, clearer alternative.
When we were trying to decide on a name, in that brightly lit room on the Asilomar grounds, we figured it would be, well, arrogant to just call it “the institute” for IA, since that would seem to imply that we claimed some kind of ownership over all of Information Architecture, whatever that was (and is).
But since nobody *else* has made an institute for it in the last two years, I applaud the decision to just cut to the chase, as it were.
The people who stuck with this organization and continued to do the hard work of molding and nurturing it certainly deserve to make the claim — we’re the institute for IA.
Huzzah :-)
Jef Raskin: He Thought Different
Jef Raskin wasn’t the typical tech industry power broker. He was never a celebrity CEO, never a Midas-touch venture capitalist, and never conspicuously wealthy (although he was wealthy). Yet until his Feb. 26 death at 61, the creator of the Macintosh led the rallying cry for easy-to-use computers, leaving an indelible mark on Silicon Valley and helping to revolutionize the computer industry.
The title of this entry is oversimplified, but it gets at the question of whether “User Experience,” as an umbrella discipline of sorts, contains IA and all the other things related to the new kinds of design we’re all doing for the internet and elsewhere.
In my company, somebody ran across this article by Peter Boersma (at peterboersma.com), and asked what we thought.
I dashed off a reply that sort of fell out of my head, so it’s not entirely refined, but it’s the same thing I’ve been obsessing about and digging away at for a long while now. I wonder if I’m insane, or if this makes sense to anyone else? Here’s what I wrote.
I still think that the internet has added a new paradigm to design that isn’t covered by traditional disciplines. Until the last decade, nobody had to think about massively populated environments where everything is made of language, and documents are places and vice versa. It’s the shaping of *that* kind of space that necessitates a new kind of architecture, one that isn’t so much concerned with how the thing looks or what statement it makes artistically, but with more emphasis on function.
What we need is a discipline that combines urban planning, civil engineering and “architecture” all within networked electronic environments. If there is another discipline that does this, then I’ll start calling myself after that discipline. But for now I use “information architecture.”
My issue with bundling all of this within “user experience” is that “user experience” puts emphasis on the singular “user” as well as the idea of a received/perceived “experience” — this lends itself to being more about interfaces for individual users involved in specific, solitary tasks.
But the internet has made necessary an approach to design that looks beyond these specific user experiences to the collective experience, which is truly a whole greater than the sum of its parts. This is why Metcalfe’s Law is so important: the usefulness, or utility, of a network is proportional to the square of the number of its users. Why not just the sum of its users? Because each additional user can connect with everyone already there, so the number of potential connections, and with it the potential for synthesis, grows far faster than the user count itself.
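Here’s a rough back-of-the-envelope sketch of that arithmetic, just to make the curve concrete (the little helper function is mine, purely for illustration):

```python
# Rough illustration of Metcalfe's Law: among n users there are
# n * (n - 1) / 2 possible pairwise connections, which grows roughly
# with the square of n rather than in step with n itself.

def potential_connections(n: int) -> int:
    """Number of distinct user-to-user links possible among n users."""
    return n * (n - 1) // 2

for n in (2, 10, 100, 1000):
    print(f"{n:>5} users -> {potential_connections(n):>7} potential connections")
```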
The internet changes what we mean when we say “space” and “time” and “community” — I just don’t see how serially adding together all the various disciplines that have become involved in internet-related design covers this sea change.
One thing my point does, however, is draw a boundary around information architecture that defines it as internet-related (or massive networked electronic environment related — our intranet is technically not ‘on the internet’ but it’s definitely the same order of being). A lot of IA people don’t like that, and want to say that IA is about everything. That’s where it gets watered down, though. If you’re talking about all the old orders of reality, then yeah — there are many lovely disciplines and traditions that have been designing for those orders of reality for generations. The necessity for IA is internet-specific.
This is an excellent overview of the insidious logic behind all these otherwise silly sounding obsessions.
It’s like this. U2 has made some great music, even in the last 15 years or so. But honestly, they stopped being a rock and roll band when they cancelled a chunk of their Zooropa tour because their GIANT SCREEN TV BROKE.
I’m glad somebody is posting an opinion like the one at the link above, though… because really, based on the sort of persona the U2 members have cultivated over the years, you’d think they would care. (I seriously doubt Radiohead would put up with this stuff, for example.)
But U2 hasn’t really been four guys in a band in quite a while. It’s been a major media force, a corporation, for years.
You’d like to think, though, wouldn’t you… that they’d not let things like “picking the pretty people from the line for the DVD taping” happen. That’s just awful.
The Tonight Show with Johnny Carson is synonymous with being curled up between my parents in bed, warm, safe, just barely awake after being asleep a while already, only to open my eyes some and see the flicker of the screen and hear those voices, and that friendly self-deprecating, wry delivery that always meant to me “go back to sleep, everybody’s laughing, everybody’s happy, everything’s fine.”
It is truly, truly amazing… the philistine buffoonery that has been allowed to masquerade as enlightened leadership in our country.
Scalia To Synagogue – Jews Are Safer With Christians In Charge
and this (even though it almost pales in comparison to the insanity linked above) …
Harvard President – Women Really Aren’t as Good at Math and Science
The Mac mini is gorgeous. Brilliant.
When I heard about it I was afraid it would be underpowered… not the case, at all.
I’m fascinated with how societies, communities, and therefore companies, manage to create meaning, agreement, belief. They need it in order to get things done, so it’s important stuff. What the iconoclast may see as simpleminded dogma or group-think boosterism is also the very drive that allows a group of people to move in the same direction and achieve things larger than their individual selves.
But sometimes those very qualities are their undoing.
What’s amazing to me is how human systems create and agree upon truth. How they go about defining it.
In a couple of recent New Yorker articles, I saw some parallels to what I see in companies I’ve worked with (including my present employer). In Malcolm Gladwell’s review of the new book “Collapse” — by “Guns, Germs and Steel” author Jared Diamond — we see how entire civilizations can commit slow suicide simply by adhering to their assumptions about social survival, even when they’re at odds with the obvious needs for biological survival. For example, the Norse settlements in Greenland starved themselves to death because they insisted on living and working the land the same way in that place as their kin did in Norway. So, they ignored and even saw as ungodly the way the native Inuit lived on fish and seal. According to the book, “the Norse ate their cattle down to the hoofs, and … in the end, they had to eat their pets. But [excavations haven’t uncovered] fish bones, of course. Right up until they starved to death, the Norse never lost sight of what they stood for.” They starved in the shadow of their cathedrals.
The thing is that the way of life itself isn’t inherently “bad” … it’s a matter of context. He also points out that Easter Island died off not because its people were any more incompetent than those of any other Pacific island civilization. They used the same survival techniques everyone else used on other islands, but happened to be on the one island with a combination of factors that made it especially vulnerable to deforestation and the like.
Looking at this in a strictly moral light runs the risk of missing the point, I think. The point isn’t that we should adopt some kind of dogma about how to treat the land in all cases, but to open our eyes and realize that there’s a context in which we live, and that the context doesn’t change based on our taboos or beliefs or habits.
Anyway, I think the same thing happens in companies all the time. Perfectly well-meaning belief systems can come into being that work very well for some things, but in the end go down the wrong path. There was an article in the Harvard Business Review a while back called “Why Bad Projects are So Hard to Kill” (I can’t find a link to it at the moment) that explained how huge gaffes at major companies happened, millions wasted, because of infectious ideas that had everyone convinced of their validity in spite of all signs to the contrary.
In fact, I’m involved in a project now that could easily be derailed if we don’t look outside a particular ideological box that was itself created a couple of years ago by people who were legitimately thinking outside of their own box. The idea itself wasn’t a bad one, it just runs the risk of being held up as the *only* one.
How do ideas end up being taken for granted like that? Well, sometimes it’s from dogged proselytizing by champions who usually mean well. But how does an idea get swallowed so easily and propagated so quickly?
My theory is that the first person to write something down in a coherent way — and thereby give structure to that previously vague, ethereal “something” — often ends up being the author of a new dogma, whether they wanted to be or not.
How many times have you seen a PowerPoint presentation given, and then watched its bullet points propagate throughout a company’s ideology, in various permutations with other ideas, until you wonder if anybody remembers the original context of the original ideas? I don’t know about you, but I’ve seen it more times than I can count at this point.
Anyway, this dynamic came to mind when I read this other New Yorker story about the psychiatric manual for diagnosis, and the guy who transformed it into the “bible” it is used as today. In “The Dictionary of Disorder,” Alix Spiegel explains how Robert Spitzer took on as his life’s work the task of making psychiatric diagnosis more reliable (and in the discipline’s parlance, reliable means consistent and repeatable — that is, a person with a given set of symptoms would receive the same diagnosis from different clinicians applying the same criteria).
What Spitzer essentially did was create a sort of controlled vocabulary. Even though the discipline was full of disagreement on many definitions, he just pushed his way through and started defining things. According to him, his criterion for including a diagnosis was whether it was ‘logical’ — whether it made sense as a category unto itself. There wasn’t a ton of empirical data behind what was happening … it was mainly him and some assistants gathering candidate diagnoses, evaluating them, and then defining and categorizing them.
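To put that in IA terms, here’s a toy sketch of what a controlled vocabulary does; the terms below are invented placeholders, not anything out of the DSM:

```python
# Toy controlled vocabulary: many informal labels map to one preferred
# term, so different people describing the same thing end up speaking
# the same language. (All terms here are made-up placeholders.)

controlled_vocabulary = {
    "condition A": ["informal label 1", "older name for A"],
    "condition B": ["informal label 2", "colloquial name for B"],
}

# Invert it into a lookup table: informal label -> preferred term.
preferred_term = {
    variant: term
    for term, variants in controlled_vocabulary.items()
    for variant in variants
}

def normalize(label: str) -> str:
    """Return the agreed-upon term for a label, or the label itself if it's unknown."""
    return preferred_term.get(label, label)

print(normalize("older name for A"))  # -> "condition A"
```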
I suspect that if the book had gone through those years (in which Spitzer was running it) with true democratic committee-based decision-making, it would’ve been very different. It seems like it needed one guy to say “ok, this is the picture I have in *my* head … and it makes sense, it hangs together, there’s an internal logic to it” and suddenly there was a workable system. Why stick with your own partial set of terms and ideas when there’s a documented system that’s more complete, one that defines the language you can use to discuss your work with peers and to diagnose in clearly defined terms?
Now, of course, the DSM is being maintained and updated by a more committee-like structure. Which makes sense. But that’s only possible because a logical framework is already in place, and agreed upon by enough people (either tacitly or explicitly) that the essential structure will continue to stand, evolving through gradual work.
It strikes me that defining things is such a powerful act. Just think: a document like the DSM is just language, but it’s a document that affects the way people work and live. It provides a landscape for people to be defined and define themselves, for good or ill. It changes the way our culture talks about (and therefore acts toward) people with various unconventional patterns of behavior.
These are just WORDS! And yet, much like the U.S. Constitution or other documents that have influenced the way societies work, the words are just as tangible a presence as if someone had wrought giant walls and roadways and plopped them into a civilization, causing people to walk and interact in different ways than they did before.
So, no wonder the Norse settlements starved. They had belief structures so palpable that they blinded them to the obvious routes to survival — dying of starvation next to waters so full of fish that to this day you can just reach down and grab them out with your bare hands.
Any community, country, or company that thinks it’s immune to this kind of blindness is only proving the point.