Language


I just saw that the BBC TV documentary series based on Stewart Brand’s “How Buildings Learn” has been posted on Google Video. Huzzah!

It’s been a while since I read the book, so I watched a bit of the first episode, and it kicked up a thought or two about the language we use for design. Brand makes a sharp distinction between architecture that’s all about making a “statement” — a stylistic gesture — and architecture that serves the needs of a building’s inhabitants. (Arguably a somewhat artificial distinction, but a useful one nonetheless. For the record, Joshua Prince-Ramus made a similar distinction at the 2007 IA Summit.)

The modernist “statements” Brand shows us are certainly experiences — and were designed to be ‘experienced’ in the sense of any hermetic work of ‘difficult’ art. But it’s harder to say they were designed to be inhabited. On the other hand, he’s talking about something more than mere “use” as well. Maybe, for me at least, the word “use” has a temporary or disposable shade of meaning?

It struck me that saying a design is to be “inhabited” makes me think about different values & priorities than if I say a design is to be “used” or “experienced.”

I’m not arguing for or against any of these words in general. I just found the thought intriguing… and I wonder just how much difference it makes how we talk about what we’re making, not only to our clients but to one another and ourselves.

Has anyone else found that how you talk about your work affects the work? The way you see it? The way others respond to it?

Moral Dimensions

Without going into a lot of detail about it (no time!), I wanted to quote from this article discussing the ideas of Jonathan Haidt. It’s actually supposed to be a review of George Lakoff’s writing on political language, but it gets further into Haidt’s ideas and research as a better alternative. The author isn’t so kind to dear Lakoff (whose earlier work is very influential among many of my IA friends).

Essentially, the article draws a distinction between Lakoff’s idea that people act based on their metaphorical-linguistic interpretation of the world and Haidt’s psycho-evolutionary (?) view that there are deeper things than what we think of as language that guide us individually and socially. And Haidt is working to name those things, and figure out how they function.

Oddly enough, once I’d gotten a paragraph into this post, I remembered that I’d linked to and written about Haidt a couple of years ago. But I hadn’t really looked into his work much further at the time. Now I really want to read more of it.

Haidt maps five major scales against which we can categorize (or measure) our moral responses. One of them seems the least changeable or approachable by reason: the scale that describes our visceral reaction of elevation or disgust in the presence of certain things we find taboo, without our necessarily being able to explain why in a purely rational or utilitarian way.

Will Wilkinson — What’s the Frequency, Lakoff?

Most intriguing is the possibility of systematic left-right differences on the purity dimension, which Haidt pegs as the source of religious emotion. In a fascinating chapter in his illuminating recent book, The Happiness Hypothesis, Haidt explains how a primal biological system—the disgust system—designed to keep us clear of rotten meat, expanded over our evolutionary history to encompass sexual norms, physical deformations, and much more. …

The flipside of disgust is the emotion Haidt calls “elevation,” based in a sense of purification and transcendence of our animal incarnation. Cultures the world over picture humanity as midway on a ladder of being between the demonically disgusting and the divinely pure. Most world religions express it through taboos of food, body, and sex, and in rituals of de-animalizing purification and sacralization. The warm, open sense of elevation and the shivering nausea of disgust are high and low notes in the same emotional key.

Haidt’s suggestion is partly that morally broad-band conservatives are better able to exploit the emotional logic of religiosity by deploying rhetoric and imagery that calls on powerful sentiments of elevation and disgust. A bit deaf to the divine, narrow-band liberals are at a disadvantage to stir religious Americans. And there are a lot of religious Americans out there.

I like this approach because it doesn’t refute the linguistic approach so much as explain it in a larger context. (Lakoff has come under criticism for possibly over-simplifying how people live by metaphor — I’ll leave that debate to the experts.)

And it explains how people can have a real change of heart in their lives, how their morals can shift. Just this week, the mayor of San Diego decided to reverse a view he’d held for years, both personally and as a campaign promise: that he would veto any marriage-equality bill. Evidently one of his scales outweighed another — he was caught in a classic Euthyphro conundrum between loyalty to his party and loyalty to the reality of his daughter. Unlike with Euthyphro, family won out. Or perhaps the particular experience of his daughter convinced him that the general assumption of homosexuality as evil is flawed? Who knows.

Whatever the cause, once you get a bit of a handle on Haidt’s model, you can almost see the bars in the chart shifting in front of you when you hear of such a change in someone.
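To make that bars-in-a-chart image concrete, here’s a toy sketch in Python. The five foundation names are Haidt’s published ones, but every number, and the particular shift shown, is invented purely for illustration; it’s not a model of any real data.

```python
# Toy model of Haidt's five moral foundations as a weighted profile.
# The foundation names are Haidt's; all numbers below are invented
# purely to illustrate the "bars shifting" idea, not real data.

def show(profile):
    """Print each foundation as a crude text bar chart."""
    for name, weight in profile.items():
        print(f"{name:22s} {'#' * round(weight * 10)}")

profile = {
    "harm/care": 0.5,
    "fairness/reciprocity": 0.5,
    "ingroup/loyalty": 0.9,   # party loyalty dominates at first
    "authority/respect": 0.7,
    "purity/sanctity": 0.6,
}

show(profile)

# A "change of heart": one powerful experience re-weights the scales.
profile["harm/care"] = 0.9        # the reality of his daughter moves this up
profile["ingroup/loyalty"] = 0.4  # party loyalty recedes

print()
show(profile)
```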

And you can see very plainly how Karl Rove and others have masterfully manipulated this tendency. They have an intuitive grasp of this gut-level “disgust/elevation” complex, and of how to use it to get voters to act. I wonder, too, if it helps explain the weird fixation “socially conservative” people of all stripes had with “The Passion of the Christ”? Just think — that extreme level of detailed violence to a human being ramping up the disgust meter, with the elevation meter cranked just as high by the sense of transcendent salvation and martyr’s love that the gruesome ritual killing represented. What a combination.

The downside to Democrats here is that they can’t fake it. According to Wilkinson, there’s no way to just word-massage their way into this emotional dynamic with the public on the current dominant issues that tap into it. In his words, “Their best long-term hopes rest in moving the fight to a battlefield with more favorable terrain.”

(PS: I dig Wilkinson’s blog name too — a nice oblique reference to Wittgenstein, who said the aim of philosophy is to “shew the fly the way out of the fly-bottle.”)

Edited to Add: There’s a nice writeup on Haidt in the Times here.

I’m a huge fan of Jonathan Lethem. And I hadn’t gotten round to reading all of his essay in Harper’s until just lately. Here’s a slice. And yes, the writing is this sharp and elegant all the way through:

“The ecstasy of influence: A plagiarism” by Jonathan Lethem (Harper’s Magazine)

For substantially all ideas are secondhand, consciously and unconsciously drawn from a million outside sources, and daily used by the garnerer with a pride and satisfaction born of the superstition that he originated them; whereas there is not a rag of originality about them anywhere except the little discoloration they get from his mental and moral caliber and his temperament, and which is revealed in characteristics of phrasing. Old and new make the warp and woof of every moment. There is no thread that is not a twist of these two strands. By necessity, by proclivity, and by delight, we all quote. Neurological study has lately shown that memory, imagination, and consciousness itself is stitched, quilted, pastiched. If we cut-and-paste our selves, might we not forgive it of our artworks?

It strikes me that his argument about art and influence is applicable to communities of practice as well: we all borrow and re-contextualize our tools, ideas, and methods.

It also strikes me that language itself works this way. What if, at some point early in civilized human development, as soon as one primitive came up with a name for something, nobody else was allowed to use that very name for that thing, without paying a fee of some kind? The very reason we have a rich language is that it can be fluid — it can grow, morph, and brawl its way through history — and because of that, we have civilization itself.

[Image: theobromine molecule (courtesy of Wikipedia)]
Theobromine, often confused with caffeine, is the molecule responsible for the mild mood-elevating effects of fine, high-concentration (dark) chocolate.

Theo = “god” & Broma = “food” — truly the food of the gods.

I’m not much of a joiner. I’m not saying I’m too good for it. I just don’t take to it naturally.

So I tend to be a little Johnny-come-lately to the fresh stuff the cool kids are doing.

For example, when I kept seeing “Web 2.0” mentioned a while back, I didn’t really think about it much. I thought maybe I’d misunderstood … since Verizon was telling me my phone could now do WAP 2.0, I wondered if it had something to do with that?

See? I’m the guy at the party who was lost in thought (wondering why the ficus in the corner looks like Karl Marx if you squint just right) and who looks up after everybody’s finished laughing at something to ask, “what was that again?”

So, when I finally realize what the hype is, I tend to already be a little contrary, if only to rescue my pride. (Oh, well, that wasn’t such a funny joke anyway, I’ll go back to squinting at the ficus, thank you.)

After a while, though, I started realizing that Web 2.0 is a lot like the Mirror of Erised in the first Harry Potter novel. People look into it and see what they want to see, but it’s really just a reflection of their own desires. They look around at others and assume they all see the same thing. (This is just the first example I could think of for this trope: a common one in literature and especially in science fiction.)

People can go on for quite a while assuming they’re seeing the same thing, before realizing that there’s a divergence.

I’ve seen this happen in projects at work many times, in fact. A project charter comes out, and several stakeholders have their own ideas in their heads about what it “means” — sometimes it takes getting halfway through the project before it dawns on some of them that there are differences of opinion. On occasion they’ll assume the others have gone off the mark, rather than realizing that nobody was on the same mark to begin with.

I don’t want to completely disparage the Web 2.0 meme, only to be realistic about it. Unlike the Mirror of Erised (“desire” backwards), Web 2.0 is just a term, not even an object. So it lends itself especially well to multiple interpretations.

A couple of weeks ago, this post by Nicholas Carr went up: The amorality of Web 2.0. It’s generated a lot of discussion. Carr basically tries to put a pin in the inflated bubble of exuberance around the dream of the participation model. He shows how Wikipedia isn’t actually all that well written or accurate, for example. He takes to task Kevin Kelly’s Wired article (referenced in my blog a few days ago) about the new dawning age of the collectively wired consciousness.

I think it’s important to be a devil’s advocate about this stuff when so many people are waxing religiously poetic (myself included at times). I wondered if Carr really understood what he was talking about at certain points — for example, doing a core sample of Wikipedia and judging the quality of the whole based on entries about Bill Gates and Jane Fonda sort of misses the point of what Wikipedia does in toto. (But in the comments to his post, I see he recognizes a difference between value and quality, and that he understands the problems around “authority” of texts.) Still, it’s a useful bit of polemic. One thing it helps us do is remember that the ‘net is only what we make it, and that sitting back and believing the collective conscious is going to head into nirvana without any setbacks or commercial influence is dangerously naive.

At any rate, all we’re doing with all this “Web 2.0” talk is coming to two realizations: 1) the Web isn’t about a specific technology or model of browsing — all these methods and technologies will be temporary or will evolve very quickly; and 2) it’s not, at its core, really about buying crap and looking things up — it’s about connecting people with other people.

So I guess my problem with the term “Web 2.0” is that it’s actually about more than the Web. It’s about internetworking that reduces the inertia of time and space and creates new modes of civilization. Not utopian modes — just new ones. (And not even, really, that completely new — just newly global, massive and immediate for human beings.) And it’s not about “2.0” but about “n” where “n” is any number up to infinity.

But then again, I’m wrong. I can’t tell people what “Web 2.0” means because what it means is up to the person thinking about it. Because Web 2.0 is, after all, a sign or cypher, an avatar, for whatever hopes and dreams people have for infospace. On an individual level, it represents what each person’s own idiosyncratic obsessions might be (AJAX for one person, Wiki-hivemind for the next). And on a larger scale, for the community at large, it’s a shorthand way of saying “we’re done with the old model, we’re ready for a new one.” It’s a realization that, hey, it’s bigger and stranger than we realized. It’s also messy, and a real mix of mediocrity and brilliance. Just like we are.

Ustiquity

In my ever-expanding obsession with coining terms*, I’ve come up with another one: Ustiquity.

It’s the property of being both “ubiquitous” and “sticky,” and it describes information on the Internet and the increasingly available ways in which we access that information.

See, we’re all creating information, having conversations, making thoughts explicit with language. We’ve always done that. But now, because we do so much of it online, it’s sticking around — discussions I had on usenet in 1993 are still out there someplace, searchable with Google.

Stickiness has existed in other media, of course: writing something and putting it into a shoebox, or publishing it so that it sits in a library where others can find it. But on the Internet, it’s available to everyone all the time.

In addition to this, the Internet is doing a pretty impressive job of not “staying put” — it won’t stay on our computers. It’s leaking out all over the place. Onto our phones, our iPods, our Blackberries, our car consoles, and even some high-end refrigerators. Basically, the deal is that this is only the beginning. Younger people already expect to be able to access the ’net from wherever they happen to be (Good lord, who knew that Buckaroo Banzai’s silly “Wherever you go, there you are” koan would end up being prescient?).

Ustiquity is this property of stickiness (things don’t decay or drift away as easily — they tend to stay around, even if it’s just on the archive.org Wayback Machine or in Google’s cache database) plus ubiquity (all that stuff that’s sticking around is becoming more and more available, anywhere, anytime).

This doesn’t mean the stuff is easier to access … it just means access is possible. The more ustiquitous stuff there is in the world, the harder it is to find any particular item.

So, ustiquity is an opportunity, but it’s also a heck of a challenge.

Feel free to use this term whenever you want. You can credit me or not, up to you. I can always point to my dated entry on my blog, or give ustiquity a much-ballyhooed Technorati tag, which will probably end up on a web archive someplace, somewhere, and therefore everywhere and always.

Unless, of course, it doesn’t.

(* Previous coinings include “metafatigue” and “gurule.” Yes, this is a sad little hobby. )

Cool article (via bloug) for a number of reasons. But the one thing that really popped out for me was the fact that missionaries, in order to convert other cultures to Christianity, are first converting them into written-language cultures.

It’s like “terraforming” (converting a planet into one hospitable to Earth life forms), but for religion. The missionaries are creating written forms for spoken languages partly (I suppose) to help societies enter the global community, but also to get them on track with Biblical scripture and whatnot.

And it raises a question, for me (and not out of disrespect, because I still consider myself Christian), about the nature of religious truth. Or truth in general. How does the cognitive landscape shift when a culture’s language suddenly becomes writable and readable? How does it affect history and communal understanding?

I can’t imagine a more fundamental, bone-level shift in reality for human beings.

How Linguists and Missionaries Share a Bible of 6,912 Languages – New York Times

Based in Dallas, S.I.L. (which stands for Summer Institute of Linguistics) trains missionaries to be linguists, sending them to learn local languages, design alphabets for unwritten languages and introduce literacy. Before they begin translating the Bible, they find out how many translations are needed by testing the degree to which speech varieties are mutually unintelligible. “The definition of language we use in the Ethnologue places a strong emphasis,” said Dr. Lewis, “on the ability to intercommunicate as the test for splitting or joining.”
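As an aside, that “splitting or joining” test has a neat algorithmic flavor: if you imagine a table of how well each pair of speech varieties understands the other, grouping them is just finding connected components above some threshold. Here’s a toy sketch in Python; every variety name, score, and the threshold itself are invented for illustration, and this is not SIL’s actual method or data.

```python
# Toy illustration of "splitting or joining" speech varieties by mutual
# intelligibility. All names, scores, and the 0.7 threshold are invented;
# this is not SIL's actual method or data.

# Pairwise intelligibility scores (0.0 = none, 1.0 = full).
scores = {
    ("A", "B"): 0.85,
    ("B", "C"): 0.75,
    ("A", "C"): 0.72,
    ("C", "D"): 0.30,
    ("D", "E"): 0.90,
}

varieties = {"A", "B", "C", "D", "E"}
THRESHOLD = 0.7

def group_varieties(varieties, scores, threshold):
    """Union varieties whose mutual intelligibility clears the threshold."""
    parent = {v: v for v in varieties}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path compression
            v = parent[v]
        return v

    for (a, b), score in scores.items():
        if score >= threshold:
            parent[find(a)] = find(b)

    groups = {}
    for v in varieties:
        groups.setdefault(find(v), set()).add(v)
    return list(groups.values())

# Each resulting group could, in principle, share one Bible translation.
print(group_varieties(varieties, scores, THRESHOLD))
# e.g. [{'A', 'B', 'C'}, {'D', 'E'}]
```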

Interestingness

I’m still giddy, even in my jaded state, whenever I hear about yet another yummy infospace architecture element that creates emergent structures.

I’m not even sure if I just said anything that makes sense … what’s the official terminology?

Anyway, now Flickr is using some fun math to track the ‘interestingness’ of photos on the site.

Flickr: Explore interesting photos around Flickr

There are lots of things that make a photo ‘interesting’ (or not) on Flickr. Where the clickthroughs are coming from; who comments on it and when; who marks it as a favorite; its tags and many more things which are constantly changing. Interestingness changes over time, as more and more fantastic photos and stories are added to Flickr.
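Flickr has never published the actual formula, so here’s only a guess at its general shape: a weighted blend of the social signals the post names, with a time decay so rankings keep changing. A minimal sketch in Python; every signal name, weight, and the half-life are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical signals for one photo. The field names and weights are
# invented; Flickr has never published its actual interestingness formula.
@dataclass
class PhotoSignals:
    external_clickthroughs: int  # views arriving from outside Flickr
    comments: int
    favorites: int
    tag_count: int
    days_old: float

WEIGHTS = {
    "external_clickthroughs": 1.0,
    "comments": 2.0,
    "favorites": 3.0,
    "tag_count": 0.5,
}

def interestingness(p: PhotoSignals, half_life_days: float = 30.0) -> float:
    """Weighted sum of social signals, decayed so scores change over time."""
    raw = (WEIGHTS["external_clickthroughs"] * p.external_clickthroughs
           + WEIGHTS["comments"] * p.comments
           + WEIGHTS["favorites"] * p.favorites
           + WEIGHTS["tag_count"] * p.tag_count)
    decay = 0.5 ** (p.days_old / half_life_days)  # older photos fade
    return raw * decay

photo = PhotoSignals(external_clickthroughs=120, comments=14,
                     favorites=30, tag_count=8, days_old=10.0)
print(f"interestingness = {interestingness(photo):.1f}")
```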

I’m doing some research on old technology and how people talked about it when it was new to them, and I ran across this terrific site with an article about Murry Mercier and TV in 1929.

My favorite part of this page is the scan of the news article from The Ohio State Journal, April 29, 1929:

“Already they have achieved success in developing an instrument that outdoes the magic of storybook fame by showing a scene radiocast from another city hundreds of miles away. This is not to be confused with telephoto which reproduces the picture on paper. Television is instantaneous. For instance, one can watch a prize fight or a wedding ceremony in Pittsburgh. It reproduces the scenes as rapidly as they change, the same as a mirror would reflect them.”

What fascinates me is how someone in 1929 (not that long ago) was struggling to explain in a literal, non-technical way how television worked.
Remember trying to explain to someone what the Internet is? It feels similar.
It’s amazing that something once so strange there weren’t quite words for describing it (even the ‘mirror’ thing doesn’t quite get it) has now become its own concept, not requiring explanation at all. It just happens. Now, rather than describing TV by talking about mirrors and lanterns, we describe other things by referencing TV. (“Yeah, mom, the Internet is like TV but not, I mean, you can do things in it and it responds…wait that’s not quite it…”)

Gurule

I hereby coin the term “gurule” — and announce that I’m tired of gurules.

By “gurule” I mean overly simplistic rules made up by design gurus, mostly for the purpose of sounding smart and making a name for themselves.

“The Back Button is Always Bad”

“Redundancy is Bad”

“Frames are Bad”

Hm, usually they seem to be about things that are bad.

Not long ago I posted a comment on IA Slash about this.

Yes, some things are usually a sign of flawed design, and some things are typically hallmarks of good design, but sticking these insights into categorical pronouncements is just one more step down the slippery slope to hell that is the powerpointification of America.