Getting past the 'corporate immune system'
Imagine the scene. A warm summer’s day. The tree-lined slopes of gentle hills stretch towards a darker valley below. There’s a river running through it, sparkling water, dappled light through the leaves. Now place a party of schoolboys there, shipped out from the nearby town – lucky kids from well-off families, enjoying the fun of summer camp. On the first day, they form into two teams. They’re given different huts to sleep in and different colored badges to wear. They’re encouraged to choose a name and an identity. During the coming days, they’ll compete in all sorts of games and projects.
Sounds idyllic. Yet, by the end of the week, there was almost open warfare between the two teams.
What began with name calling and petty violence (each group burnt the flag they had captured from the other team) moved on to raiding parties which attacked the opposition’s huts, overturned beds, ripped apart furnishings, and stole key possessions. Before long, there was a real risk of serious harm. The teams armed themselves with baseball bats and socks filled with rocks and were marching towards each other for a showdown when the camp counselors finally intervened!
This was the famous “Robbers Cave” experiment devised by Muzafer and Carolyn Sherif to explore how inter-group conflicts occur when there are limited resources and a strong element of competition. Being in one group – in this case, you could be either in “The Rattlers” or “The Eagles” – meant that you had a great deal of loyalty towards your fellow teammates and an equal antagonism towards the others.
In many ways, this mirrored work by Henri Tajfel and others around social identity theory – the idea that we define ourselves by the groups we identify with and to which we try to belong. Their famous studies gave us the idea of the “in-group” (people like us) and the “out-group” (the others outside of our circle). Once again, research showed the propensity for conflict between the two groups and the distrust which could quickly build up – even if the basis of who’s in which group is as simple as being allocated a color or a group name.
Significantly, the same effect can be reproduced very easily with different groups and in different situations – essentially underlining an important aspect of the way we have evolved as social animals. We bond together tightly (which is good evolutionary practice when facing a common enemy). However, it has a downside, which is a tendency to distrust people belonging to groups outside our circle and the ease with which this can escalate into open hostility.
What has all of this got to do with innovation?
Quite a lot, actually. It helps us understand the famous “Not Invented Here” (NIH) effect. NIH is one of those surprisingly common features of the innovation landscape – it refers to the situation in which an organization rejects a new idea offered from outside.
If you're unfamiliar with the term, let me first give you some examples of the "Not Invented Here" syndrome. Young inventor Alexander Graham Bell was looking for a partner to help him commercialize his idea for a telephone – a device which could revolutionize the communications industry. He started with the U.S. market leader, Western Union, the guys who’d spent so much time and effort stringing telegraph wires alongside railway tracks to link up the continent.
It seemed like a good fit from the outside. However, the reception was frosty. In a famous comment, William Orton, the President of Western Union and reputedly one of the best-informed electrical experts in the country, said: “There is nothing in this patent whatever, nor is there anything in the scheme itself, except as a toy. If the device has any value, the Western Union owns a prior patent … which makes the Bell device worthless.”
Other remarkable instances of the NIH syndrome include Kodak’s rejection of both Edwin Land’s idea for the Polaroid process and Chester Carlson’s xerography. These examples underline how easy it is to put up defenses against ideas originating from outside. NIH is a theme which my colleague Oana-Maria Pop has written a great blog post about, but its persistence makes it worthwhile to take another look.
Elting E. Morison gives a wonderful example in his detailed study of “Gunfire at Sea,” which explores the tortuous journey the innovation of continuous-aim gunnery had in finding its way on to the decks of U.S. warships. Back in the late 19th century, naval gunnery was not very accurate. A U.S. Bureau of Ordnance study of one thousand shells fired during an exercise around the time of the Spanish-American war suggested that less than 3 percent were hitting the target. That’s a problem.
A long way away in the South China Sea, Admiral Percy Scott of the British Navy was working on the solution. His squadron was doing gunnery practice with similarly poor results – except for the crews on one ship (rather inaptly named HMS Terrible) who were recording surprisingly accurate performance. Looking more closely revealed the use of a prototype gun-sight and a novel method of tracking the target called “continuous-aim gunfire.” Scott supported the development, trained all the crews on all his ships, and eventually changed practices across the British Navy.
The fascinating part of the story concerns a young U.S. lieutenant, William Sims, on secondment with the squadron. He is aware of the Bureau of Ordnance study and the poor U.S. performance and sees in the new British system an opportunity to make his name and career by introducing this better system to his superiors in Washington.
What follows is a classic case of NIH – all sorts of arguments assembled to prove that the new system was no better. For example, a side-by-side test was arranged on dry land where the advantages of the new system in dealing with moving targets at sea were neutralized! It took President Roosevelt intervening himself to get the U.S. Navy to take the idea seriously and eventually adopt the new system.
So why does it happen?
It would be wrong to see this behavior as the result of blind stupidity or outdated attitudes. Significantly, in most NIH cases, there is a very plausible defense to be mounted – the lack of fit with the core business, the risk of having to cannibalize existing activities, the unproven nature of the new technology, etc. What’s really going on is subtler and owes a lot to the ideas introduced above around group identity and defenses.
We sometimes talk about a corporate immune system, and this is a good metaphor because it accurately captures what an immune system does for our bodies: protect them against dangerous things from outside. The narratives around resistance to outside ideas – not invented here – are very much those of a well-meaning immune system.
One way this plays out in the innovation world is when new ideas emerge from across national borders. There is little doubt that “lean” thinking has changed the world – first through manufacturing and then across services both public and private. In its early days, lean was conflated with Japanese manufacturing techniques, which had a frosty reception outside Japan – a common argument was that “it works over there, but it isn’t right for our kind of organization.” The same goes for many of the quality management principles which we now accept as second nature but once saw as something peculiar to Japanese corporate culture and not transferable.
Studies in psychology have shown the close links between NIH and the ideas raised by social psychologists like Sherif and Tajfel. For example, Alex Haslam and colleagues looked at perceptions of creative ideas arising from groups. Their findings repeatedly confirmed that ideas coming from within the group were rated highly and valued, whereas those coming from another group were judged to be lacking in innovativeness or value. And a recent article by Frank Piller and David Antons distills a variety of other psychological studies, which give us a clear sense that this is not an occasional effect – it is deep-rooted.
The big question for innovation management is what should we do about NIH?
How can we reduce the risk that we miss out on something important from outside because of the way our “immune system” operates?
One useful place to start is with the Sherifs’ original experiments. In their later work on understanding inter-group conflict, they found that giving groups a superordinate goal made a difference. In other words, make the challenge big enough and everyone will co-operate, share, and work together towards the target. The “moon-shot” project is a powerful way of overcoming tribal rivalries, and it works just as well inside large organizations.
Another approach is to mix people up. The more we can experience first-hand that people are like us, the harder it is to maintain inter-group boundaries and barriers. Cross-functional teams, secondment, and rotation are all helpful strategies, especially in innovation where ideas from across different functional or discipline boundaries are often powerful assets in solving the overall challenge.
Interestingly, we’ve known this for a long time. Back in the 1960s, a pioneering set of studies was carried out by Paul Lawrence and Jay Lorsch looking at innovation in textiles, plastics, and food. They found that the extent to which differences between functions could be bridged was an important influence on how long it took to get new products to the marketplace. Those groups with multiple integration mechanisms fared better, sharing ideas, defusing tensions, and working together towards the common goal.