Net Culture


WSJ had a front page (!!!) story on Friday called “To Find a Mate, Raid a Dungeon Or Speak Like an Elf” that tells of people who have met their significant others in MMOGs. Luckily, it’s not written as a puff piece or a “hey, look at these weirdos” piece but seriously considers the fact that people really can get to know one another pretty well in a MMOG setting. Not that it’s as satisfying or fulfilling as real life, but that it can allow for much more insight into another’s character than their profile on eHarmony.

I think a major factor is that these games (for the most part) aren’t about dating (or job hunting or any other RL relationship/pairing endeavor, for that matter) but some other goal. On a dating site — or a date — both parties are generally on their very best behavior, but you don’t get to see what they’re like in a stressful or non-romantic goal-oriented situation. But in a goal-driven MMOG you do.

One interesting statistic I hadn’t yet seen: “Yankee Group, a Boston technology-research firm, estimates that MMOGs, which can be played simultaneously by thousands of people using the Internet, are played by 25 million to 30 million people world-wide.”

Unfortunately, the article likely won’t be readable without a subscription for more than a few days, but here’s a chunk:

http://online.wsj.com/article_email/article_print/SB114980862872575564-lMyQjAxMDE2NDA5OTgwMDk4Wj.html

David Knife, 32, fell in love with his wife, Tracy, 30, while playing Anarchy Online, a science-fiction game. Mr. Knife says he was impressed by her leadership skills. Ms. Knife, who in the game led a guild of about 50 players, “was very motherly to many of the players,” he says. “It’s the way she controls everyone by still being very nice.”

He also liked her use of emoticons, the symbols in text messages that denote kisses or hugs, among other things. “She was very forward in the game, especially with me,” he says.

In August 2004, about six months after they met in the game, her character proposed to Mr. Knife’s. That prompted Mr. Knife to inquire about a possible relationship outside the game, even though he is an Australian, who was living in Melbourne, Australia, and she lived in Red Lion, Pa., where she was raising a daughter after a divorce.

They started talking regularly using an online voice-chat service and said “I love you” before they met in person. “We have very similar personalities,” says Ms. Knife. “We’re both kind of computer geeks.” In February last year, Mr. Knife flew to Pennsylvania for a two-week vacation and proposed on Valentine’s Day.

“I have to remember two wedding days and two engagement days,” says Mr. Knife. The couple were married in January and live in Red Lion.

In discussing some weird policies in the World of Warcraft online game, Cory Doctorow nicely articulates an important insight about environments like WoW:

Online games are incredibly, deeply moving social software that have hit on a perfect formula for getting players to devote themselves to play: make play into a set of social grooming negotiations. Big chunks of our brains are devoted to figuring out how to socialize with one another — it’s how our primate ancestors enabled the cooperation that turned them into evolutionary winners.

http://www.boingboing.net/2006/01/27/world_of_warcraft_do.html

Virtual worlds can have a deep emotional impact on people. This is as true of an old-fashioned BBS or discussion forum like The Well as it is of MMOGs (Massively Multiplayer Online Games) like the recently deceased Asheron’s Call 2.

Unfortunately, the more resources it takes to run a particular world, the more money it has to make. If it doesn’t stay in the black, it dies. Someone posted a sad little log of the last moments with their friends in this world here.

Things like this intrigue me to no end. I realize that this wasn’t a truly real world that disappeared. That is, the people behind the avatars/characters they played are still alive, sitting at their screens. They had plenty of time to contact one another and make sure they could all meet again in some other game, so it wasn’t necessarily like a tragic sudden diaspora (though some people do go through such an experience if the world they’ve counted on has suddenly had the plug pulled).

Still, the human mind (and heart?) only needs a few things to make a virtual place feel emotionally significant, if not ‘real.’ Reading the log linked above, you see that the participants do have perspective on their reality, even if you think their pining is a little ren-faire cheesy. But they can’t help being attached to the places they formed friendships in, played and talked in, for so long. It seems a little like leaving college — if you made meaningful friendships there, you can never really go back to that context again, even if you keep up with friends afterward. Except instead of graduation, you stand in the quad and part of you “dies” along with the whole campus.

I think the discussion linked above about the Well articulates pretty well just what these kinds of communities can mean to people. Further discussion and inquiry goes on all over the ‘net, including a site called “Project Daedalus” about the “psychology of mmorpgs”. (Edited to add: I also found a new publication called “Games & Culture” with at least one article specific to serious academic study of MMOGs. And I’m sure there are plenty more at places like Academic Gamers and Gamasutra.)

One problem with the participation model is that so much of it is fueled by idealists. Well, it’s not totally a problem, because we need idealists. But it makes the “movement” behind the model seem naive to the more realistic and/or cynical.

I like to think I’m more cynical than not, though I often surprise myself with an occasional rash of gullibility. (I’ve posted stuff here that, looking back, I have to roll my eyes at.)

Wikis (just google it if you need context) have their orthodox fanatics, for sure. There are tons of people who are appalled that anyone would sully a simple, elegant wiki with login authorization or levels of access, for example. It’s not “the Wiki Way!”

But that’s like the first person to invent the wheel being angry when someone adds an axle.

Reflecting its somewhat idealistic origins, the wiki concept started with complete openness, and is now having to mature into other less specialized uses with more general audiences (or, for that matter, more specialized audiences), and it’s having to adapt some more “closed” genetic code for those environments. (Kind of like wheels eventually need tires, brakes and differentials….)
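To make that concrete, here’s a minimal sketch of what “adapting more closed genetic code” might look like: an open-by-default wiki page that can optionally require a trusted role to edit. Everything here (the `Role` and `Page` names, the role hierarchy) is invented for illustration, not taken from any real wiki engine.

```python
# Hypothetical sketch: an open wiki that grows optional access levels.
# Role names and the Page class are illustrative, not from a real engine.

from enum import IntEnum

class Role(IntEnum):
    ANONYMOUS = 0
    MEMBER = 1
    EDITOR = 2
    ADMIN = 3

class Page:
    def __init__(self, title, min_edit_role=Role.ANONYMOUS):
        # The "pure wiki" default: anyone may edit.
        self.title = title
        self.min_edit_role = min_edit_role
        self.text = ""

    def edit(self, user_role, new_text):
        if user_role < self.min_edit_role:
            return False  # this page is "closed" to this user
        self.text = new_text
        return True

# The classic open wiki still works exactly as before...
sandbox = Page("Sandbox")
assert sandbox.edit(Role.ANONYMOUS, "anyone can write here")

# ...but a more "closed" page can require a trusted role.
policy = Page("Site Policy", min_edit_role=Role.ADMIN)
assert not policy.edit(Role.MEMBER, "vandalism attempt")
assert policy.edit(Role.ADMIN, "official text")
```

The point of the sketch is that nothing about the axle breaks the wheel: the open default is untouched, and the restriction is an opt-in layer.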

Over at The Register, Andrew Orlowski sneeringly opines that Wikipedia’s “inner sanctum” is finally admitting the system isn’t flawless and that all things Wiki aren’t necessarily holy and beyond criticism. Well, if that’s true, that’s a good thing. He also points out that Wikipedia is now getting replicated all over the web in spots like Answers.com. I would agree that this is probably not a good idea: taking the articles out of the context of Wikipedia leads people at other sites to think they’re reading “published,” official information, whereas on Wikipedia they would at least understand that it’s editable by anybody, and therefore somewhat less official.

What I don’t especially appreciate about the article is the patronizing stance. It’s dismissive of the participation model, treating it as so much pot-induced hippie talk. When the fact is, many more knowledgeable and experienced people have endorsed this concept and used it in the real world. (Two examples: John Seely Brown’s “Eureka” knowledge network at Xerox; and CAVNET at the US Military.) Are these the idealistic Wiki concept, with no vetting or hierarchy? No. But the “ideal” wiki in any serious real-world endeavor is a straw-man example. Peter Morville recently wrote a more reasoned and informed take on Wikipedia in his column on Authority. See? There are mature perspectives on the usefulness of the model that don’t buy into the hype.

Nick Carr (whose blog I’d not even seen until yesterday, but which has suddenly gotten me sucked into its comments here and there) had this to say in
The law of the wiki

It reveals that collectivism and intelligence are actually inversely correlated. Here, then, is what I’ll propose as the Law of the Wiki: Output quality declines as the number of contributors increases. Making matters worse, the best contributors will tend to become more and more alienated as they watch their work get mucked up by the knuckleheads, and they’ll eventually stop contributing altogether…

I commented the following: I don’t think it’s necessarily true that the number of participants decreases the quality of the output. It depends on the subject and the people involved. Two physicists can write an entry on Quantum Mechanics, but twenty of them can fill it out with all kinds of specialized information that just two wouldn’t have to offer. But that’s because it’s a circumscribed topic relevant to a somewhat narrow community of practice (which is actually what wikis were sort of created to support to begin with).
Once you start opening things up to *anybody* about *anything* — i.e. “vulgar” interests — you will end up with mediocre writing and information about topics that appeal to the lowest common denominator. That’s just how human systems structure themselves.
So yes, once you take the “pure wiki” out of the rarified environment of specialists or small groups, you definitely have to impose some top-down structure and peer review. I don’t think anyone but the most absurdly wide-eyed idealists would say differently.

He answers that he was indeed talking about the purely democratic wiki, and also asks whether, even in specialized subject areas, there could be such a thing as “too many” contributors, eroding quality.

I think that’s entirely possible, sure. But to some degree I wonder if it misses the point.

In fact, naming Wikipedia after an encyclopedia kind of misses the point too. I think it’s part of what’s throwing so many people off. But I’m not sure what else to call it, really. Other than “The Big General Knowledge Wiki.”

To my knowledge, even the first simple “idealistic” wikis were never meant as “authoritative” sources of knowledge (and by authority here, I mean the conventionally agreed-upon and empirically verified facts on a topic). They were places for quick gatherings around certain topics, with simple entry and editing, for groups of people who wanted somewhere all their thoughts could stick for later viewing and shaping. That’s great … and that’s what Wikipedia really is, just very, very large. It does its job very well: it acts as a quick, simple repository of what people are thinking or communicating about particular topics. It’s not made for eloquence or even, necessarily, permanence.

Wikipedia is an experiment in taking that totally open environment (the wiki) and seeing what happens if we layer into it elements of traditional knowledge-authority-making. And (I think wisely) the organizers started with as little structure as possible rather than assuming it needed too much. Because it’s a living, breathing entity that can change over time through the actions of its community, it didn’t have to be a Parthenon. It just had to be a decent Quonset Hut. Other features can be added and tweaked as needed.

As for the messiness and unevenness, I think people need to get over it. If you prefer your information pre-packaged and pre-authorized, go to the traditional sources (not that they’ll all be correct, but at least you won’t get in *trouble* for it, probably — you can always say “I checked in a real book!”). Kind of like all those companies using Linux, but buying it from IBM, because it’s more official and feels less risky and messy.

All Wikipedia does is take the relatively invisible market of ideas, the activity of the knowledge hive, and make it visible, in all its messy glory. “Official” knowledge came from the same hive — but the activity wasn’t out there for everyone to watch. It was in academic conferences or the editorial meetings at People magazine.

But we still need “peer reviewed,” authoritative decision-making around what information should be referred to when somebody wants “the right answer.”

So, Wikipedia, I think, ought to explain this somehow to its users. Now that the community using it has gone way beyond the early adopter crowd, and Google searches on all kinds of topics point to Wikipedia in the first ten results, they probably should let people know: what you read here was put here mostly by people like you. Always check primary sources, etc. etc. etc.

At some point, Wikipedia needs to have ways of denoting how ‘official’ something is — maybe a coding system, where the Quantum Physics page has several physicists looking after it, but the J-Lo page is basically a free for all?
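As a rough illustration of that coding-system idea, here’s a tiny sketch in which an article’s “officialness” is derived from how many subject experts are watching it. All the names and levels here are hypothetical; this is my speculation about how such a scheme could work, not anything Wikipedia actually does.

```python
# Hypothetical "officialness" coding system: each article's label is
# derived from how many subject-expert stewards watch it. All names
# and thresholds are invented for illustration.

CURATION_LEVELS = {
    0: "free-for-all: edited by anybody, no expert review",
    1: "watched: one regular reverts obvious vandalism",
    2: "stewarded: multiple credentialed experts review changes",
}

articles = {
    "Quantum mechanics": {"stewards": ["physicist_a", "physicist_b", "physicist_c"]},
    "Jennifer Lopez": {"stewards": []},
}

def curation_level(title):
    """Map steward count to a coarse trust label (0, 1, or 2)."""
    n = len(articles[title]["stewards"])
    if n >= 2:
        return 2
    return 1 if n == 1 else 0

assert curation_level("Quantum mechanics") == 2
assert curation_level("Jennifer Lopez") == 0
```

Even something this coarse would let a reader see at a glance whether they’re on the physics page or in the free-for-all.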

I believe in the Wiki Way. I do. I just think it’s only one virtue among many, and that it has to be shaped to meet the demands of different contexts.

I’m not much of a joiner. I’m not saying I’m too good for it. I just don’t take to it naturally.

So I tend to be a little Johnny-come-lately to the fresh stuff the cool kids are doing.

For example, when I kept seeing “Web 2.0” mentioned a while back, I didn’t really think about it much, I thought maybe I’d misunderstood … since Verizon was telling me my phone could now do Wap 2.0, I wondered if it had something to do with that?

See? I’m the guy at the party who was lost in thought (wondering why the ficus in the corner looks like Karl Marx if you squint just right) and looks up after everybody’s finished laughing at something and saying, “what was that again?”

So, when I finally realize what the hype is, I tend to already be a little contrary, if only to rescue my pride. (Oh, well, that wasn’t such a funny joke anyway, I’ll go back to squinting at the ficus, thank you.)

After a while, though, I started realizing that Web 2.0 is a lot like the Mirror of Erised in the first Harry Potter novel. People look into it and see what they want to see, but it’s really just a reflection of their own desires. They look around at others and assume they all see the same thing. (This is just the first example I could think of for this trope: a common one in literature and especially in science fiction.)

People can go on for quite a while assuming they’re seeing the same thing, before realizing that there’s a divergence.

I’ve seen this happen in projects at work many times, in fact. A project charter comes out, and several stakeholders have their own ideas in their heads about what it “means” — sometimes it takes getting halfway through the project before it dawns on some of them that there are differences of opinion. On occasion they’ll assume the others have gone off the mark, rather than realizing that nobody was on the same mark to begin with.

I’m not wanting to completely disparage the Web 2.0 meme, only to be realistic about it. Unlike the Mirror of Erised (“desire” backwards) Web 2.0 is just a term, not even an object. So it lends itself especially well to multiple interpretations.

A couple of weeks ago, this post by Nicholas Carr went up: The amorality of Web 2.0. It’s generated a lot of discussion. Carr basically tries to puncture the inflated bubble of exuberance around the dream of the participation model. He shows how Wikipedia isn’t actually all that well written or accurate, for example. He takes to task Kevin Kelly’s Wired article (referenced in my blog a few days ago) about the new dawning age of the collectively wired consciousness.

I think it’s important to be a devil’s advocate about this stuff when so many people are waxing religiously poetic (myself included at times). I wondered if Carr really understood what he was talking about at certain points — for example, doing a core sample of Wikipedia and judging the quality of the whole based on entries about Bill Gates and Jane Fonda sort of misses the point of what Wikipedia does in toto. (But in the comments to his post, I see he recognizes a difference between value and quality, and that he understands the problems around “authority” of texts.) Still, it’s a useful bit of polemic. One thing it helps us do is remember that the ‘net is only what we make it, and that sitting back and believing the collective conscious is going to head into nirvana without any setbacks or commercial influence is dangerously naive.

At any rate, all we’re doing with all this “Web 2.0” talk is coming to the realization that 1) the Web isn’t about a specific technology or model of browsing, since all these methods and technologies will be temporary or will evolve very quickly, and that 2) it’s not, at its core, really about buying crap and looking things up — it’s about connecting people with other people.

So I guess my problem with the term “Web 2.0” is that it’s actually about more than the Web. It’s about internetworking that reduces the inertia of time and space and creates new modes of civilization. Not utopian modes — just new ones. (And not even, really, that completely new — just newly global, massive and immediate for human beings.) And it’s not about “2.0” but about “n” where “n” is any number up to infinity.

But then again, I’m wrong. I can’t tell people what “Web 2.0” means because what it means is up to the person thinking about it. Because Web 2.0 is, after all, a sign or cypher, an avatar, for whatever hopes and dreams people have for infospace. On an individual level, it represents what each person’s own idiosyncratic obsessions might be (AJAX for one person, Wiki-hivemind for the next). And on a larger scale, for the community at large, it’s a shorthand way of saying “we’re done with the old model, we’re ready for a new one.” It’s a realization that, hey, it’s bigger and stranger than we realized. It’s also messy, and a real mix of mediocrity and brilliance. Just like we are.

Oldest .com’s

Cory Doctorow at Boing Boing shares a link to the 100 oldest .COM names in the registry, and wonders about the “visionaries” who might’ve realized they needed a “.com” domain in 1985.

But many of those companies likely weren’t thinking about commercial Internet possibilities. They just happened to be involved in the academic, scientific and defense contracting fields, either directly or tangentially, and according to the rules in the registry, they had to be “.com” to show they were commercial enterprises, unlike the majority of the Internet nodes at the time, which were .edu or .gov (and a few .orgs, I guess; might’ve been the minority? Hm.)

Anyway, I mention this not just to be persnickety, but because I think it’s interesting how easy it is to forget what the context was 20 or hell even 12 years ago. I’m fascinated at how quickly the ‘net became a “land of opportunity” as opposed to an under-the-radar propeller-head network, and how to some degree we’re all coming back to the ‘net’s DNA of community (which has always been prevalent; it’s just not gotten the press because the ‘real’ community happening online isn’t necessarily connected to any IPO’s).

The market isn’t using the net for its own ends. People are using the market to utilize the net for their own ends… and as always, people are mainly interested in connecting with, sharing with, creating with other people.

We are the Web

I’m big on the idea that the Internet isn’t really about commerce or information reference, but mainly about community and conversation (which of course include things like commerce and knowledge — but only as facets of the larger social drive).

In Wired last month, (We Are the Web) Kevin Kelly evidently agrees:

What we all failed to see was how much of this new world would be manufactured by users, not corporate interests.

I was saying this back in 2002: The real killer app is people. But it didn’t start with me, alas… I seem to remember even in 1999 Whole Earth and other places were discussing the “highways of the mind” as social spheres, and the “WELL” as the prototype of sorts. So, I don’t know that Mr. Kelly is quite right in “we all failed to see” … I think many of us just forgot in the mad rush to make a killing in the tech bubble, perhaps?

Still, I’m glad this meme is propagating … it helps keep us aware of the real human context of all this technology.

Google Talk

For once, I’m not the last person to hear about something months later.

Google Talk sounds exciting. No voice-capable Mac client for it, but Adium and iChat both work. If nothing else, it’ll educate the masses about the “Jabber” protocol.

Google does have gobs of cash, and it’s lots of fun seeing what they can do with it. What I wish they’d really do is start a mobile phone company, because I would imagine it wouldn’t suck nearly as much as Verizon and others. But that’s merely a pipe dream.

Anyway, it’ll be interesting to see how this affects IM users, if it’ll catch on or not. I still don’t understand why AIM and MSN and others don’t open up their protocols. Maybe Google is what it takes to get people to realize if they use an open protocol (Jabber) they can talk to *anybody.*
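Part of what makes Jabber open is that it’s just XML on the wire, so anyone can implement a client or server. As a sketch, here’s a chat-message stanza of the kind the protocol uses, built with Python’s standard library (the addresses are made up for illustration):

```python
# A Jabber/XMPP chat message is plain XML on the wire, which is what
# makes the protocol open: any client or server can produce and parse
# it. The addresses below are made up for illustration.

import xml.etree.ElementTree as ET

msg = ET.Element("message", {
    "to": "friend@example.org",
    "from": "me@example.com",
    "type": "chat",
})
body = ET.SubElement(msg, "body")
body.text = "Hello over an open protocol!"

wire = ET.tostring(msg, encoding="unicode")
print(wire)
```

Compare that to the closed AIM and MSN protocols, which had to be reverse-engineered for third-party clients like Adium to interoperate at all.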

However, I wonder if Google plans on using its ads technology for scanning IM’s and displaying ads while chatting or the like? That would kill it for me. I can put up with it in emails, somehow. It’s unobtrusive, and they’re giving me a couple of gigs of space. But for IM’s … for some reason that would cross a line for me. Not sure why.

It will be fun, though, to see if they can do a lot of cool integration like they’ve been doing with their other services. Phone messaging, search, blogging, etc.

Wake-Up Call

Companies of the world, pay attention. These are your future customers.

Pay attention not just to the fact that they’re online, but what they’re doing and how. Pay attention to how integrated their physical space is with their infospace, and how relational their infospace has become. They bounce between applications, they earn and spend “virtual” money in massive multiplayer environments. They live in this place.
And your cute little web-widgets that *might* be finished at the end of their 3-5 year development programs are going to feel to them about as sophisticated and useful as a tire swing feels to a circus acrobat.

Pew Internet & American Life Project Report: Pew Internet: Teens and Technology

Today’s American teens live in a world enveloped by communications technologies; the internet and cell phones have become a central force that fuels the rhythm of daily life.

The number of teenagers using the internet has grown 24% in the past four years and 87% of those between the ages of 12 and 17 are online. Compared to four years ago, teens’ use of the internet has intensified and broadened as they log on more often and do more things when they are online.

via JOHO/Blog

Like so many other great ideas and technologies for the Internet, Flickr emerged from a soup of game thinking. Nice interview with JJG:

adaptive path » an interview with ludicorp’s eric costello

JJG: How much of The Game Neverending would you say is still present in Flickr in its current state?
EC: I think the spirit of it is there, definitely. Someone once described Flickr as “massively multiplayer online photo sharing.” I think that’s a good description. There’s kind of a feeling of exploration within Flickr. It feels like a world where you can move around and find wonderful things – the wonderful things being the great photographs that people upload.
And because it’s got the social network aspect of it, you can kind of build neighborhoods within Flickr. The page in Flickr that shows you all the photos from your friends and family is very much a space like you might find in a game. It’s a place where you go and interact with the people you know.

The original “weblog” was Jorn Barger’s robotwisdom.com. I used to read it every day, and went to about half the links. And that’s about all his blog was for a long while … he used to do small paragraphs, but then finally was posting 15-20 or more links a day with small comments next to them. It would range from cartoons to James Joyce, technology to poetry. It felt great because here was a guy who consumed the Internet the way I did, and found it a drug in the same way. So much at your fingertips, and all of it pouring in faster than you can comprehend. Just dipping fingers into the stream and tasting.

I didn’t realize Jorn ended up broke and panhandling. I hadn’t kept up with him, really, and just figured he’d moved on… but then I saw a post over in Sean’s blog to an article about him (this one:What happened to Jorn).

On his panhandler sign, Barger had written:

Coined the term ‘weblog,’ never made a dime.

Then I looked on Wikipedia and found this. And it says that Barger now says that most of what the post above said was “fiction.”
See his comment, posted just a few weeks ago, in the Wikipedia discussion page.

Whatever the situation, this is a guy who doesn’t fit in the cultural grid of “normal” — he’s brilliant, and probably just short of Unabomber (that is, super smart and looks at the world in weirdly accurate but nonconventional ways, but luckily hasn’t decided that the alien world around him needs to be bombed into oblivion).

The Wikipedia bit about him is pretty informative, and explains a lot about why his interests range so widely. He’s looking sideways through the cracks in our silos.

The thing is, I wonder how much synthesis he does? His weblog is a serial parade of nodes, but does anything combine and spawn from all that input? Based on what the Wikipedia page says, probably so (he’s published on various topics, and been known to write academic and critical stuff).

Luckily, RobotWisdom is back up, as of Feb 2005.

After reading Half-Blood Prince, I was wondering if the word “Horcrux” had any previous etymology, and looked it up on Wikipedia.
Only to discover that it really doesn’t… but that amazingly, less than 24 hours after the release of the new Harry Potter book, there’s a whole entry on the term on Wikipedia. Yet another very cool example of how amazing this site is.
Don’t read it, though, if you don’t want the plot spoiled! Just sayin’
Horcrux – Wikipedia, the free encyclopedia
