Management


I’m happy to announce that I’m collaborating with my Macquarium colleague Patrick Quattlebaum and Happy Cog Philadelphia’s inimitable Kevin Hoffman to present an all-day pre-conference workshop at this year’s Information Architecture Summit in Denver, CO. See more about it (and register to attend!) on the IA Summit site.

One of the things I’ve been fascinated with lately is how important it is to have an explicit understanding of the organizational and personal context not only of your users, but of the corporate environment you’re working in, whether it’s your client’s or your own as an internal employee. When starting an engagement, understanding the motivations, power structures, systemic incentives and the other mechanisms that make an organization run is immeasurably helpful in planning and executing that engagement.

It turns out we have excellent tools at our disposal for understanding the client: UX design methods like contextual inquiry, interviews, collaborative analysis interpretation, and personas/scenarios are just as useful for getting the context of the engagement as they are for getting the context of the user base.

Additionally, there are general rules of thumb that tend to be true in most organizations, such as how process starts out as a tool, but calcifies into unnecessary constraint, or how middle management tends to work in a reactive mode, afraid to clarify or question the often-vague direction of their superiors. Not to mention tips on how to introduce UX practice into traditional company hierarchies and workflows.

It’s also fascinating to me how understanding individuals is so interdependent with understanding the organization itself, and vice-versa. The ongoing explosion of new knowledge in social psychology and neuroscience is giving us a lot of insight into what really motivates people, how and why they make their decisions, and the rest. These are among the topics Patrick & I will be covering during our portion of the workshop.

As the glue between the individual, the organization and the work, there are meetings. So half the workshop, led by Kevin Hoffman, will focus specifically on designing the meeting experience. It’s in meetings, after all, where all parties have to come to terms with their context in the organizational dynamics — so Kevin’s techniques for increasing not just the efficiency of meetings but the human & interpersonal growth that can happen in them will be invaluable. Kevin’s been honing this material for a while now, to rave reviews, and it will be a treat.

I’m really looking forward to the workshop, partly because, as in the past, I’m sure to learn as much from the attendees as they learn from the presenters, or more.

This is based on a slide I’ve been slipping into decks for over a year now as a “quick aside” comment; but it’s been bugging me enough that I need to get it out into a real blog post. So here goes.

We hear the words Strategy and Innovation thrown around a lot, and often we hear them said together. “We need an innovation strategy.” Or perhaps “We need a more innovative strategy,” which, of course, is a different animal. But I don’t hear people questioning what exactly we mean when we say these things. It’s as if we all agree already on what we mean by strategy and innovation, and that they just fit together automatically.

There’s a problem with this assumption. The more I’ve learned about Communities of Practice, the more I’ve come to understand about how innovation happens. And I’ve come to the conclusion that strategy and innovation aren’t made of the same cloth.


1. Strategy is top-down; Innovation is bottom-up

Strategy is a top-down approach. In every context I can think of, strategy is about someone at the top of a hierarchy planning what will happen, or what patterns will be invoked to respond to changes on the ground. Strategy is programmed, the way a computer is programmed. Strategy is authoritative and standardized.

Innovation is an emergent event; it happens when practitioners “on the ground” have worked on something enough to discover a new approach in the messy variety of practitioner effort and conversation. Innovation only happens when there is sufficient variety of thought and action; it works more like natural selection, which requires lots of mutation. Innovation is, by its nature, unorthodox.

2. Strategy is defined in advance; Innovation is recognized after the fact

While a strategy is defined ahead of time, nobody seems able to plan what an innovation will be. In fact, many (or most?) innovations are serendipitous accidents, or emerge from a side-project that wasn’t part of the top-down-defined workload to begin with. This is because the string of events that leads to an innovation is never truly a rational, logical or linear process. Indeed, we don’t even recognize the result as an innovation until after it’s already happened, because whether something is an innovation or not depends on its usefulness after it’s been experienced in context.

We fill in the narrative afterwards — looking back on what happened, we create a story that explains it for us, because our brains need patterns and stories to make sense of things. We “reify” the outcome and assume there’s a process behind it that can be repeated. (Just think of Hollywood, and how it tries to reproduce the success of surprise-hit films that nobody thought would succeed until they became successful.) I discuss this more in a post here.

3. Strategy plans for success in known circumstances; Innovation emerges from failure in unknown circumstances.

One explicit aim of a strategy is to plan ahead of time to limit the chance of failure. Strategy is great for things that have to be carried out with great precision according to known circumstances, or at least predicted circumstances. Of course strategy is more complex than just paint-by-numbers, but a full-fledged strategy has to have all predictable circumstances accounted for with the equivalent of if-then-else statements. Otherwise, it would be a half-baked strategy. In addition, strategy usually aims for the highest level of efficiency, because carrying something off with the least amount of friction and “wasted” energy often makes the difference between winning and losing.
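To put that metaphor in literal terms, here is a minimal sketch of “strategy as program”: every predicted circumstance mapped, in advance, to a planned response. The scenario names and responses are invented purely for illustration.

```python
# A sketch of "strategy as program": every predicted circumstance is
# mapped, in advance, to a planned response. (Scenario names and
# responses are invented for illustration.)
PLANNED_RESPONSES = {
    "competitor cuts prices": "emphasize premium features",
    "demand spikes": "scale up production",
    "key supplier fails": "switch to backup vendor",
}

def execute_strategy(circumstance: str) -> str:
    if circumstance in PLANNED_RESPONSES:
        return PLANNED_RESPONSES[circumstance]
    # A full-fledged strategy tries to leave no circumstance unaccounted
    # for, but the unpredicted case is exactly where it runs out of
    # if-then-else branches.
    return "no plan: improvise"
```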

However, if you dig underneath the veneer of the story behind most innovations, you find that there was trial and error going on behind the scenes, and lots of variety happening before the (often accidental) eureka moment. And even after that eureka moment, the only reason we think of the outcome as an innovation is because it found traction and really worked. For every product or idea that worked, there were many that didn’t. Innovation sprouts from the messy, trial-and-error efforts of practitioners in the trenches. Bell Labs, Xerox PARC and other legendary fonts of innovation were crucibles of this dynamic: whether by design or accident, they had the right conditions for letting their people try and fail often enough and quickly enough to stumble upon the great stuff. And there are few things less efficient than trial and error; innovation, or the activity that results in innovation, is inherently inefficient.

So Innovation and Strategy are incompatible?

Does this mean that all managers can do is cross their fingers and hope innovation happens? No. What it does mean is that having an innovation strategy has nothing to do with planning or strategizing the innovation itself. To misappropriate a quotation from Ecclesiastes, such efforts are all in vain and like “striving after wind.”

Managing for innovation requires a more oblique approach, one which works more directly on creating the right conditions for innovation to occur. And that means setting up mechanisms where practitioners can thrive as a community of practice, and where they can try and fail often enough and quickly enough that great stuff emerges. It also means setting up mechanisms that allow the right people to recognize which outcomes have the best chance of being successes — and therefore, end up being truly innovative.

I’m as tired of hearing about Apple as anyone, but in any discussion of innovation, Apple always comes up. We tend to think of Apple as linear, controlled and very top-down. The popular imagination seems to buy into a mythic understanding of Apple — that Steve Jobs has some kind of preternatural design compass embedded in his brain stem.

Why? Because Jobs treats Apple like theater, and keeps all the messiness behind the curtain. This is one reason why Apple’s legal team is so zealous about tracking down leaks. For people to see the trial and error that happens inside the walls would not only threaten Apple’s intellectual property, it would sully its image. But inside Apple, the strategy for innovation demands that design ideas be generated in multitudes like fish eggs, because they’re all run through a sort of artificial natural-selection mechanism that kills off the weak and only lets the strongest ideas rise to the top. (See the Business Week article describing Apple’s “10 to 3 to 1” approach.)
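Just to make the funnel shape of that mechanism concrete, here is a toy sketch of a 10-to-3-to-1 winnowing process. It is only an illustration: the random scores stand in for whatever real vetting (critique, prototyping, internal competition) actually kills off the weak ideas.

```python
import random

def winnow(ideas, stages=(10, 3, 1), evaluate=None):
    """Toy model of a "10 to 3 to 1" funnel: start with many candidates,
    keep only the strongest at each stage. `evaluate` is a stand-in for
    the real (messy, human) vetting process."""
    if evaluate is None:
        evaluate = lambda idea: random.random()
    pool = list(ideas)[:stages[0]]
    for keep in stages[1:]:
        # Score every surviving idea, then cull down to the next stage.
        pool = sorted(pool, key=evaluate, reverse=True)[:keep]
    return pool

concepts = [f"design concept {i}" for i in range(10)]
print(winnow(concepts))  # the one idea that "rises to the top"
```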

Google does the same thing, but they turn the theater part inside-out. They do a modicum of concept-vetting inside the walls, but as soon as possible they push new ideas out into the marketplace (their “Labs” area) and leverage the collective interest and energy of their user base to determine if the idea will work or not, or how it should be refined. (See accounts of this philosophy in a recent Fast Company article.) People don’t mind using something at Google that seems to be only half-successful as a design, because they know it’ll be tweaked and matured quickly. Part of the payoff of using a Google product is the fun of seeing it improved under your very fingertips.

One thing I wonder: to what extent do any of these places treat “strategy” as another design problem to be worked out in the bottom-up, emergent way that they generate their products? I haven’t run across anything that describes such an approach.

At any rate, it’s possible to have an innovation strategy. It’s just that the innovation and the strategy work from different corners of the room. Strategy sets the right conditions, oversees and cultivates the organic mass of activity happening on the floor. It enables, facilitates, and strives to recognize which ideas might fit the market best — or strives to find low-impact ways for ideas to fail in the marketplace in order to winnow down to the ones that succeed. And it’s those ideas that we look back upon and think … wow, that’s innovation.

I like this column by Nassim Nicholas Taleb. I haven’t read his book (The Black Swan) but now I think I might.

I’m more and more convinced that this ineffable activity called “innovation” is merely the story we tell after the fact, to help ourselves feel like we understand what happened to bring that innovation about. But, much like the faces we think we see in the chaos of clouds, these explanations are merely comfortable fictions that allow us to feel we’re in control of the outcome. When, in fact, success so often comes from trying and failing, even playing, until the law of averages and random inspiration collide to create something new. The trick is making sure the conditions are ideal for people to fail over and over, until imagination stumbles upon insight.

You Can’t Predict Who Will Change The World – Forbes.com

It is high time to recognize that we humans are far better at doing than understanding, and better at tinkering than inventing. But we don’t know it. We truly live under the illusion of order, believing that planning and forecasting are possible. We are scared of the random, yet we live from its fruits. We are so scared of the random that we create disciplines that try to make sense of the past–but we ultimately fail to understand it, just as we fail to see the future. … We need more tinkering: uninhibited, aggressive, proud tinkering. We need to make our own luck. We can be scared and worried about the future, or we can look at it as a collection of happy surprises that lie outside the path of our imagination.

He rails against the wrong-headed approach of factory-style standardization for learning and doing. He doesn’t name them outright, but I suspect No Child Left Behind and Six Sigma are targets.

Caveat: the column does tend to oversimplify a few things, such as describing whole cultures as non-inventive instruction-following drones, but that may just be part of the polemic. There’s more good stuff than ill, though.

I finally got a chance to listen to Bruce Sterling’s rant for SXSW 2007 via podcast as I was driving between PA and NC last week.

There were a lot of great things in it. A number of people have taken great notes and posted them (here’s one example). It’s worth a listen either way — as are all of his talks. I like how Bruce is at a point where he’s allowed to just spin whatever comes to mind for an hour to a group of people. Not because all of it is gold — but because the dross is just as interesting as the gold, and just as necessary.

A lot of this year’s talk was on several books he’s reading, one of which is Yochai Benkler’s The Wealth of Networks. It’s fascinating stuff — and makes me want to actually read this thing. (It’s available online for free — as are some excellent summaries of it, and a giant wiki he set up.)

In the midst of many great lines, one of the things Sterling said that stuck with me was this (likely a paraphrase):

“The distinctions just go away if you’re given powerful-enough compositing tools.”

He was talking about commons-based peer production — things like mashups and remixes, fan art, etc., and how the distinctions between various media (photography, painting, particular instruments, sculpture, etc.) blur when you can just cram things together so easily. He said that it used to be you’d work in one medium or genre or another, but now “Digital tools are melting media down into a slum gully.”

First, I think he’s being a little too harsh here. There have always been amateurs who create stuff for and with their peers, and they all think it’s great in a way that has more to do with their own bubble of mutual appreciation than any “universal” measure of “greatness.” It just wasn’t available for everyone to see online across the globe. I’ve been in enough neighborhood writer’s circles and seen enough neighborhood art club “gallery shows” to know this. I’m sure he has too. This is stuff that gives a lot of people a great deal of satisfaction and joy (and drama, but what doesn’t?). It’s hard to fault it — it’s not like it’s going to really take over the world somehow.

I think his pique has more to do with how the “Wired Culture” at large (the SXSW-attending aficionados and pundits) seems to be enamored with it, lauding it as some kind of great democratizing force for creative freedom. But that’s just hype — so all you really have to do is say “we’ll get over it” and move on.

Second, though, is the larger implication: a blurring between long-standing assumptions and cultural norms in communities of creative and design practice. Until recently, media have changed so slowly in human history that we could take for granted the distinctions between photography, design, architecture, painting, writing, and even things like information science, human factors and programming.

But if you think of the Web as the most powerful “compositing tool” ever invented, it starts to be more clear why so many professions / practices / disciplines are struggling to maintain a sense of identity — of distinction between themselves and everyone else. It’s even happening in corporations, where Marketing, Technical Writing, Programming and these wacky start-up User-Experience Design people are all having to figure each other out. The Web is indeed a digital tool that is “melting” things down, but not just media.

Austin Govella puts a question to me in his post at Thinking and Making: “Does Comcast have the DNA to compete in a 2.0 world?”

Context of the post: Austin is wondering about this story from WSJ, “Cable Giant Comcast Tries to Channel Web TV” — specifically Jeremy Allaire’s comments doubting Comcast’s ability to compete in a “Web 2.0” environment.

At the end of his post, Austin says:

And the more important question, for every organization, how do you best change your DNA to adapt to new ages? Is it as simple as adjusting your organization’s architecture to enable more participation from good DNA? What happens if your internal conversations propagate bad DNA?
This is my question for Andrew: how do you architect community spaces to engender good DNA and fight infections of bad DNA?

My answer: I don’t know. I think this is something everybody is trying to figure out at once. It’s why Clay Shirky is obsessing over it. It’s why Tim O’Reilly and others are talking about Codes of Conduct.

So, when it comes to specifics, I don’t know that we have a lot of templates that we can say work most of the time… it’s so dependent on the kind of community, culture, etc.

However, in general, I think moderation tools that allow the organism to tend to itself are the best way to go. By that I mean “karma” functions that allow users to rate, comment, and police one another to a degree.

That, plus giving users the opportunity to create rich profiles that they come to identify with. Any geeks out there like me know what it’s like to create a quickie D&D character just to play with for the day — you can do whatever you want with it and it doesn’t matter. But one that you’ve invested time in, and developed over many sessions of gaming, is much more important to you. I think people invest themselves in their online ‘avatars’ (if you consider, for example, a MySpace profile to be an avatar — I do), and they’re generally careful about them, if they can be tied to the identity in a real way (i.e. it isn’t just an anonymous ‘alt’).

In short, a few simple rules can create the right structure for healthy complexity.
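To make “a few simple rules” concrete, here is a minimal sketch of the kind of karma mechanism I mean. The threshold and field names are invented for illustration, not drawn from any particular platform.

```python
from dataclasses import dataclass

HIDE_THRESHOLD = -5  # invented value: below this, the community has spoken

@dataclass
class Post:
    author: str
    text: str
    karma: int = 0

    def vote(self, delta: int) -> None:
        """Members rate one another's posts up (+1) or down (-1)."""
        self.karma += delta

    @property
    def visible(self) -> bool:
        # No central moderator required: one simple rule lets the
        # community "tend to itself" by burying what it rejects.
        return self.karma > HIDE_THRESHOLD
```

The point isn’t the particular threshold; it’s that visibility emerges from many small member actions rather than from top-down policing.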

As for Comcast, I suspect that the company’s image is generally perceived to be a lumbering last-century-media leviathan. So it’s easy for people like Allaire to make these assumptions. I think I might have made similar assumptions, if I didn’t personally know some of the talented people who work at Comcast now!

What Allaire doesn’t come right out and say (maybe he doesn’t understand it?) is that the Web 2.0 video space isn’t so much about delivering video as about providing the social platform for people to engage one another around the content. Like Cory Doctorow said (and yes, I’m quoting it for like the 100th time), content isn’t king, “conversation is king; content is just something to talk about.”

Having the content isn’t good enough. Having the pipes and the captive audience isn’t good enough either. From what I’ve seen of Ziddio and the like, Comcast is aware of this.

But it’s weird that the story in WSJ only mentions the social web as a kind of afterthought: “Competitors also are adding social networking and other features to their sites to distinguish them from traditional television.” As if social networking is just an added feature, like cup holders in cars. Obviously, WSJ isn’t quite clued in to where the generative power of Web 2.0 really lives. Maybe it’s because they’re stuck in an old-media mindset? Talk about DNA!

Ran across this bit in Andrew McAfee’s blog:

Ford once enlisted an efficiency expert to examine the operation of his company. While his report was generally favorable, the man did express reservations about a particular employee.

“It’s that man down the corridor,” he explained. “Every time I go by his office he’s just sitting there with his feet on his desk. He’s wasting your money.” “That man,” Ford replied, “once had an idea that saved us millions of dollars. At the time, I believe his feet were planted right where they are now.”

I managed to finish my presentation for this year’s IA Summit, and present it in under 50 minutes. Huzzah!

As promised, I’m posting the whole thing with notes here on the blog. If you want the PDF of the presentation (16MB), go here: https://www.inkblurt.com/media/hinton_summit07.pdf

And if you want to see the “blog post of record” about the presentation — with extra reference and research information & links — then check out the post here: https://www.inkblurt.com/archives/446

Thanks to everyone who attended the presentation and asked such terrific questions!

Colleague Michael Magoolaghan passed along a link to the transcript of Tim Berners-Lee’s testimony before Congress.

Hearing on the “Digital Future of the United States: Part I — The Future of the World Wide Web”

It’s fascinating reading, and extremely quotable. But one part that really struck me is in the first paragraph:

To introduce myself, I should mention that I studied Physics at Oxford, but on graduating discovered the new world of microprocessors and joined the electronics and computer science industry for several years. In 1980, I worked on a contract at CERN, the European Particle Physics Laboratory, and wrote for my own benefit a simple program for tracking the various parts of the project using linked note cards. In 1984 I returned to CERN for ten years, during which time I found the need for a universal information system, and developed the World Wide Web as a side project in 1990.

While TBL didn’t invent the Internet itself, his bit of brilliance (the Web) made it relevant for the masses. Even though that wasn’t his intention right off the bat, it became so as he realized the implications of what he’d done.

But let’s look at three bits in particular:

  1. He started studying Physics, then decided to follow a side interest in microprocessors.
  2. He created an e-notecard system for himself, on the side of (and to help with) what he was contracted to do at CERN.
  3. He developed a universal version of his notecard system so everyone could share and link together, as a side project in 1990.

Imagine the world impact of those three “side projects.”

This raises a question for any organization: does it give its members leeway for “side” interests? Or are those interests considered inefficient, or just odd?

It’s not that every person is going to invent another Web. It’s more that the few people who might do something like that get trampled before they get started, and that the slightly larger group of people who might do something merely impressive are trampled in the same way.

There was a time when amateurs were the experts — they were the ones who dabbled and learned and communicated in excited screeds and philosophical societies. They were “blessed” to have the time and money to do as they pleased, and the intellectual curiosity to dig in and dirty their hands with figuring out the world.

It could very well be that we’re in the midst of a similar rush of amateur dabbling. Just think of all the millionaires who are now figuring out things like AIDS, malaria and space flight. Or the empowerment people have to just go and remix and remake their worlds. There’s an excellent O’Reilly Conference keynote I wish I’d seen, but the pdf of the slides gives a decent accounting. Here’s an abstract:

Rules for Remixing; Rael Dornfest & Tim O’Reilly

Citizen engineers are throwing their warranties to the wind, hacking their TiVos, Xboxes, and home networks. Wily geeks are jacking Jetsons-like technology into their cars for music, movies, geolocation, and internet connectivity on the road. E-commerce and network service giants like Amazon, eBay, PayPal, and Google are decoupling, opening, and syndicating their services, then realizing and sharing the network effects. Professional musicians and weekend DJs are serving up custom mixes on the dance floor. Operating system and software application makers are tearing down the arbitrary walls they’ve built, turning the monolithic PC into a box of loosely coupled component parts and services. The massive IT infrastructure of the ’90s is giving way to what analyst Doc Searls calls “do-it-yourself IT.”
We see all of this as a reflection of the same trend: the mass amateurization of technology, or, as Fast Company put it, “the amateur revolution.” And it’s these hacks, tweaks, re-combinations, and shaping of the future we’re exploring in this year’s Emerging Technology Conference theme: Remix.

I saw Mark Frauenfelder on Colbert Report last night, talking about Make Magazine and the very things mentioned in the abstract above. Colbert marveled at the ingenuity, and I wondered how many people watching would think to themselves: “Hey, yeah! Why not just take things apart and change them to the way I want them???”

It’s on the rise, isn’t it? Wow. Another sea change, and I’m not even 40. What a time to be alive.

All hail side projects and passionate tangents. Long may they reign.

But for now… I gotta get back to work.

Austin Govella makes the point razor-sharp in his post on Agile Development and Design:

Agile development won’t give you better design. Design models things to be made. Development makes things you’ve modeled. Agile development methods promise better model-making, but don’t promise better models. Agile development can actually devastate design.

Thanks man. I’m going to quote you in, like, a hundred meetings in this month alone.

Excellent video interview with Wenger.

Interview with Etienne Wenger on Communities of Practice — Knowledge Lab

Etienne Wenger is one of the founding fathers of Social Learning Theory and the concept of “communities of practice.” People learn together – every individual engages in many different communities of practice, where people negotiate and define what competence and knowledge are. To know something or to be competent builds on the individual’s experiences of being in the world – learning is a constant transformation, a journey of the self.

Here’s HBS prof and Enterprise 2.0 thinker/blogger Andrew McAfee back in July, commenting on the implications of people being fired for what they say on personal blogs or otherwise (as in the Axsmith case):

Smart organizations will accept and embrace the fact that Enterprise 2.0 tools will be used to voice dissent within the community. And they’ll realize that this is more than just OK; it’s important.

Let’s close this post with a quote from Theodore Roosevelt, who wrote about dissent and the American President in a 1918 Kansas City Star editorial:

“… it is absolutely necessary that there should be full liberty to tell the truth about his acts, and this means that it is exactly as necessary to blame him when he does wrong as to praise him when he does right. Any other attitude in an American citizen is both base and servile. To announce that there must be no criticism of the President, or that we are to stand by the President, right or wrong, is not only unpatriotic and servile, but is morally treasonable to the American public.”

I’d love to hear a presidential candidate quote that in the coming months.

For a year or so now, “innovation” has been bobbing around at the very top of the memepool. Everybody wants to bottle the stuff and mix it into their corporate water supplies.

I’ve been on the bandwagon too, I confess. It fascinates me — where do ideas come from and how do they end up seeing the light of day? How does an idea become relevant and actionable?

There’s a recent commercial for Fedex where a group of pensive executives sits around a conference table, a salt-and-pepper-haired, square-jawed CEO (I assume) at the head of the group and a weak-chinned, rumpled, dorky underling next to him. The CEO asks how they can cut costs (I’m paraphrasing) and the dorky underling recommends one of Fedex’s new services. He’s ignored. But then the CEO says exactly the same thing, and everybody nods in agreement and congratulates him on his genius.

The whole setup is a big cliche. We’ve seen it time and again in sitcoms and elsewhere. But what makes this rendition different is how it points out the difference in delivery and context.

In looking for a transcript of this thing, I found another blog that summarizes it nicely, so I’ll point to it and quote here.

The group loudly concurs as the camera moves to the face of the worker who proposed the idea in the first place. Perplexed, he declares, “You just said what I just said only you did this,” as he mimics his boss’s hand motions.
The boss looks not at him, but straight ahead, and says, “No, I did this,” as he repeats his hand motion. The group of sycophants proclaims, “Bingo, Got it, Great.” The camera captures the contributor, who has a sour grimace on his face.

(Thanks Joanne Cini for the handy recap.)

What the commercial also captures is the reaction of an older colleague sitting next to the grimacing dorky guy, who gives him a little nod showing a mixture of pity, complicity in what just happened, and a sort of weariness that seems to say, “yeah, see? that’s how it works, young fella.”

It’s a particularly insightful bit of comedy. It lampoons the fact that so much of how ideas happen in a group environment depends on context, delivery, and perception (and here I’m going to pick on business, but it happens everywhere in slightly different flavors). Dork-guy not only doesn’t get the language that’s being used (physical and tonal), but doesn’t “see” it well enough to even be able to imitate it correctly. He doesn’t have the literacy in that language that the others in the room do, and feels suddenly as if he’s surrounded by aliens. Of course, they all perceive him as alien (or just clueless) as well.

I know I’m reading a lot into this slight character, but I can’t help it. By the way, I’m not trying to insult him by calling him dork-guy — it’s just the way he’s set up in the commercial; I think the dork in all of us identify with him. I definitely do.

In fact, I know from personal experience that, in dork-guy’s internal value matrix, none of the posturing means a hill of beans. He and his friends probably make fun of people who put so much weight on external signals — they think of it as a shallow veneer. Like most nerdy people, he assumes that gestures, haircuts or tone of voice don’t affect whether you win the chess match or not. But in the corporate game of social capital, “presence” is an essential part of winning.

Ok, so back to innovation. There’s a tension among those who talk and think about innovation between Collective Intelligence (CI) and Individual Genius (IG). To some degree there are those who favor one over the other, but I think most people who think seriously about innovation and try to do anything about it struggle with the tension within themselves. How do we create the right conditions for CI and IG to work in synergy?

The Collective Intelligence side has lots of things in its favor, especially lately. With so many collective, emergent activities happening on the Web, people now have the tools to tap into CI like never before — when else in history did we have the ability for people all over the world to collaborate almost instantaneously in rapid conversation, discussion and idea-vetting? Open Source philosophy and the “Wisdom of Crowds” have really found their moment in our culture.

I’m a big believer too, frankly. I’m not an especially rabid social constructivist, but I’m certainly a convert. Innovation (outside of the occasional bit that’s just for an individual privately) derives its value from communal context. And most innovations that we encounter daily were, in one way or another, vetted, refined and amplified by collaboration.

Still, I also realize that the Eureka Moments don’t happen in multiple minds all at once. There’s usually someone who blurts out the Eureka thought that catalyzes a whole new conversation from that “so perfect it should’ve been obvious” insight. Sometimes, of course, an individual can’t find anyone who hears and understands the Eureka thought, and their Individual Genius goes on its lonely course until either they do find the right context that “gets” their idea or it just never goes anywhere.

This tension between IG and CI is rich for discussion and theorizing, but I’m not going to do much of that here. It’s all just a very long setup for me to write down something that was on my mind today.

In order for individuals to care enough to have their Eureka thoughts, they have to be in a fertile, receptive environment that encourages that mindset. People new to a company often have a lot of that passion, but it can be drained away long before their 401k matching is vested. But is personal glory what these people are after? Well, yeah, that’s part of it. But they also want to be the person who thought of the thing that changed everybody’s lives for the better. They want to be able to walk around and see the results of that idea. Both of these incentives are crucial ingredients in the care and feeding of the delicate balance that brings forth innovation.

Take the Fedex commercial from above. The guy had the idea and he’ll see it executed. Why wouldn’t he be gratified to see the savings in the company’s bottom line and to see people happier? Because that’s only part of his incentive. The other part is for his boss, at the quarterly budget meeting, to look over and say “X over there had a great idea to use this service, and look what it saved us; everybody give a round of applause to X!” A bonus or promotion wouldn’t hurt either, but public acknowledgement of an idea’s origins goes a very very long way.

I’ve worked in a number of different business and academic environments, and they vary widely in how they handle this bit of etiquette. And it is a kind of etiquette. It’s not much different from what I did above, where I thanked the source of the text I quoted. Maybe it’s my academic experience that drilled this into me, but it’s just the right thing to do to acknowledge your sources.

In some of my employment situations, I’ve been in meetings where an idea I’ve been evangelizing for months finally emerges from the lips of one of my superiors, and it’s stated as if it just came to them out of the blue. Maybe I’m naive, but I usually assume the person just didn’t remember they’d heard it first from me. But even if that’s the case, it’s a failure of leadership. (I’ve heard it done not just to my ideas but to others’ too. I also fully acknowledge I could be just as guilty of this as anyone, because I’m relatively absent-minded, but I consciously work to be sure I point out how anything I do was supported or enhanced by others.) It’s a well-known strategy to subliminally get a boss to think something is his or her own idea in order to make sure it happens, but if that strategy is the rule rather than the exception, it’s a strong indicator of an unhealthy place for ideas and innovation (not to mention people).

But the Fedex commercial does bring a harsh lesson to bear — a lesson I still struggle with learning. No matter how good an idea is, it’s only as effective as the manner in which it’s communicated. Sometimes you have no control over this; it’s just built into the wiring. In the (admittedly exaggerated, but not very much) situation in the Fedex commercial, it’s obvious that most of the dork-guy’s problem is he works in a codependent culture full of sycophants who mollycoddle a narcissistic boss.

But perhaps as much as half of dork-guy’s problem is that he’s dork-guy. It’s possible that there are some idyllic work environments where everyone respects and celebrates the contributions of everyone else, no matter what their personal quirks. But chances are it’s either a Kindergarten classroom or a non-profit organization. And I happen to be a big fan of both! I’m just saying, I’m learning that if you want to play in certain environments, you have to play by their rules, both written and unwritten. And I think we all know that the ratio of unwritten-to-written is something like ten-to-one.

In dork-guy’s company, sitting up straight, having a good haircut and a pressed shirt mean a lot. But what means even more is saying what you have to say with confidence, and an air of calm inevitability. Granted, his boss probably would still steal the idea, but his colleagues will start thinking of him as a leader and, over time, maybe he’ll manage to claw his way higher up the ladder. I’m not celebrating this worldview, by the way. But I’m not condemning it either. It just is. (There is much written hither and yon about how gender and ethnicity complicate things even further; speaking with confidence as a woman can come off negatively in some environments, and for some cultural and ethnic backgrounds, it would be very rude. Whole books cover this better than I can here, but it’s worth mentioning.)

Well, it may be a common reality, but it certainly isn’t the best way to get innovation out of a community of coworkers. In environments like that, great ideas flower in spite of where they are, not because of it. The sad thing is, too many workplaces assume that “oh we had four great ideas happen last year, so we must have an excellent environment for innovation,” not realizing that they’re killing off hundreds of possibly better seedlings in the process.

I’ve managed smaller teams on occasion, sometimes officially and sometimes not, but I haven’t been responsible for whole departments or large teams. Managing people isn’t easy. It’s damn hard. It’s easy for me to sit at my laptop and second-guess other people with responsibilities I’ve never shared. That said, sometimes I’m amazed at how ignorant and self-destructive some management teams can be as a group. They can talk about innovation or quality or whatever buzzword du jour, and they can institute all sorts of new activities, pronouncements and processes to further said buzzword, but do nothing about the major rifts in their own ranks that painfully hinder their workers from collaborating or sharing knowledge; they reinforce (either on purpose or unwittingly) cultural norms that alienate the eccentric-but-talented and give comfort to the bland-but-mediocre. They crow about thinking outside the box, while perpetuating a hierarchical corporate system that’s one of the most primitive boxes around.

Ok, that last bit was a rant. Mea Culpa.

My personal take-away from all this hand-wringing? I can’t blame the ‘system’ or ‘the man’ for anything until I’ve done an honest job of playing by the un/written rules of my environment. It’s either that, or play a new game. To me, it’s an interesting challenge if I look at it that way; otherwise it’s just disheartening. I figure either I’ll succeed or I’ll get so tired of beating myself against the cubicle partitions, I’ll give up and find a new game to play.

Still, eventually? It’d be great to change the environment itself. Maybe I should go stand in front of my bathroom mirror and practice saying that with authority? First, I have to starch my shirts.
