The Concorde Fallacy

And Other Reasons Why Embedded Managers, Executives and CFOs Make Avoidable and Counterproductive Marketing Decisions

Asking the Important Questions

 

A Related Perspective:

 

I asked my colleague Steve Roemerman, CEO of Lone Star (a highly respected consulting firm serving business and the military), former senior VP at Texas Instruments (TI) and president of Raytheon Dallas, to review this paper from his perspective and tell me whether he was encountering similar behavior. He replied as follows (with his permission):

 

“It is ironic that you sent me this while I’m looking at some research on psychological effects related to “Prospect Theory” for some work we are doing on modeling how companies bid in auction situations. There are something over 150 named biases, fallacies and other effects related to cognitive errors in decision making. The nature of humans is to make flawed judgments in choosing what to consider, in the way we gather what we’ve chosen, and in how we use the data we get. It is nearly impossible to do your own research and build your own models. So, insourcing this kind of work has its limits.

 

“To me, the Dashboard is a unique value proposition, which makes the inside/outside distinction less stark. It makes the internal team more reliable, and provides them with unbiased data to use, or to contrast with their own findings.

 

“I had lunch with the head of strategy for Wal-Mart. I asked him about their modeling for holiday sales. This is THE most important analysis for any retailer. He told me they run five to seven different models each year. Two of them are Wal-Mart internal tools. He said they think they are the best in the world. I asked why they buy the others if they have the best. He told me they valued the contrast, and they knew that even the best in the world is not the same as “always right.”

 

 

And Now the Rest of the Story

 

As a matter of background, Dolores is a retired teacher who specialized in data-based development of educational literacy programs. Having more than enough credits for a Ph.D., she chose not to spend a year writing a thesis; she preferred to be in the classroom anyway. She has spent the past eight years learning about embedded technology. In short, she’s no dummy.

 

Being a curious person, she would often ask vendors questions such as:

 

  • How much do you spend on your marketing efforts?
  • Where do you get your data?
  • How many people do you pay to get the data for you?
  • Why do you think that speaking to a few customers actually gets you the market insights on which your company depends?

 

She would then ask why they were willing to spend so much money on internal marketing assistants when, for around $1,000 a month, they could get comprehensive data and a tool with which to examine the marketplace from their own perspective.

 

The usual answer was that they didn’t have a budget for this. Dolores was having none of it. She would constantly badger me with the obvious question: why would such highly intelligent and experienced folks waste money and potential markets, and ignore data shared with them that has immediate implications for their business? The second most common excuse we heard from marketing directors and VPs was “I really don’t have the time to look at the data.” “How then,” she would ask, “can you do your job effectively?”

 

I confessed that I couldn’t explain it – many such professionals are friends and colleagues who love the data but “don’t have time to address it.” Dolores questioned why this wasn’t covered in my MBA program, and I had no good answer for a results-oriented educational professional.

 

Recently I was reading Think Like a Freak (William Morrow/HarperCollins), the new book from my favorite economists Steven Levitt and Stephen Dubner (of Freakonomics fame). It gave me some ideas that I immediately discussed with Dolores. After thinking about it, she said that it made sense to her and might explain why things are as they are.

 

She encouraged us to write about these thoughts and share them with our friends and colleagues (we are too old to care if some don’t agree). So we broke the topic out as follows; we encourage feedback of all types.

 

  • Concorde Fallacy – or why failure is preferable to admitting a mistake and changing direction
  • Going Against the Facts: if one is presented with compelling data that show the best options for success, why are they so frequently ignored?
  • The Plural of Anecdote is NOT Data – or why case studies are an easy way to explain away poor sales. There is safety in being able to point to meaningless information and justify going in the same direction as everyone else
  • Secrets of Data Acquisition: no data is better than bad data – how surveys and case studies can mislead, and how to get the best information out of them

The Concorde Fallacy

 

Have you ever bought something, or invested effort and money in something, only to discover that it’s really not going to meet your needs? Worse, what you really need is expensive and requires abandoning your investment – and perhaps having to explain it all to your boss.

 

Your first purchase is what is commonly referred to as a sunk cost, and it frequently carries some emotional baggage. Is it better to quit and admit to a mistake, or to justify continuing because of the prospect of wasting the initial investment? This is not an MBA problem up for theoretical discussion. It is personal, and possibly a career changer. Will your boss and colleagues understand, or will they deem you a failure? It takes a special talent to admit to what might have been an honest mistake.

 

The biologist Richard Dawkins termed this the Concorde Fallacy in honor of the incredibly stubborn joint venture between the British and the French to build a supersonic airliner. After both governments had spent billions of dollars developing and testing the plane, it became clear that it was not economically feasible. Considering the mutual investments of money, the social factors and, importantly, the political implications, both sides found it better to continue spending on their commitments than to admit that it was a waste of resources.

 

Levitt and Dubner, in addressing this insanity (my conclusion, not theirs), pointed out that there was an additional opportunity cost: by continuing to invest in this wastefulness, one didn’t have the resources to invest in more productive enterprises.

 

The planes were an engineering marvel – but, as expected on both sides, a financial disaster. England’s chief pilot was a friend and colleague of mine and offered to let me log 30 minutes on the Concorde if I would fly it to England (can you imagine my log book: 865 hours single-engine; 12 hours twin-engine; 30 minutes Concorde). In today’s dollars it would have been around a $10,000 round trip.

 

Let me give you an example of a wasted opportunity from the embedded market.

 

Some 15 years ago, certain market research forecasters touted the “inevitable” takeover of VME by CompactPCI. The consequences were entirely preventable, but many vendors took the plunge, drank the Kool-Aid, and more than a few went out of business. Motorola Computer Group and Force Computers were the dominant board companies at the time, and Motorola bought Force to become the dominant industry vendor, reaching nearly $700 million in annual sales. Later, after investing heavily in CompactPCI, sales fell and the combined Motorola-Force enterprise sold for 28 cents on the dollar.

 

The only ones who made money were the market research folks, who made out like bandits (literally). I was not one of them. I had forecast the demise and renamed what was being called the “two billion dollar marketplace” the “zero billion dollar” marketplace. Life is like that.

 

It was clear from a market perspective (to anyone who understood the difference between polyopoly and commodity markets) that, like the Concorde, CompactPCI had no chance of being profitable while supporting niche markets.

 

Why was this preventable? Board vendor executives who wanted to impress their bosses and investors should have seen that there was very little money to be made. In the VME marketplace, it took twenty-four companies to account for 75% of the total available market. Ninety-five percent of the CompactPCI marketplace was dominated by three companies. Niche markets for VME vendors were plentiful – not so in the restricted CompactPCI marketplace.

 

Interestingly, CompactPCI as a technology still exists, but it doesn’t generate a lot of revenue. The major vendors are the huge prime contractors (e.g., GE and Curtiss-Wright) who bought CompactPCI capabilities (at a steal) to be able to deploy them along with their other technologies.

 

Looking back at those who drank the Kool-Aid, can we be critical? To me the answer is a decisive NO. Who wants to risk their career by going against the prevailing wisdom – even those who fully understood that it was a fool’s game? With one’s career on the line, it was better to use flawed research as justification for pursuing what others were doing. Standing out from the crowd was not a good career move.

 

It was seen as decisively better (and understandably so) to fail with the crowd than to quit and live to pursue other opportunities.

 

The Concorde Fallacy is not a venture in stupidity; it is an exercise in survival, notwithstanding the collective failure that results.

 

Unfortunately, the survival of companies in a highly competitive (and maybe zero-sum) marketplace depends on changing the corporate culture at the highest levels. Many employees will find jobs elsewhere, but those with a vested corporate interest will need to step up, as they will be the big losers.

 

So how can companies avoid the risk of failure – particularly in a constantly changing environment? The answer is good data, and the ability to use that data to examine market alternatives for oneself rather than relying on consolidated conclusions from market researchers, be they internal or external.

 

Going Against the Facts

 

Levitt and Dubner tell a story about the World Cup. You are the best player on your team and you have a penalty kick that, if successful, will make you a legend in your own country; if you miss, you might need to move somewhere else.

 

You have three options (right, left, or center):

 

  • The kick happens so fast that the goalie cannot wait to see where the ball goes; he must guess which way you will kick and commit before the ball is struck.
  • Data show that goalkeepers jump left 57% of the time, jump right 41% of the time, and stay in the center (don’t move) 2% of the time (assuming you’re not drunk and don’t miss the net entirely, in which case no country on earth will take you and your family).
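
To see how lopsided these numbers are, here is a back-of-envelope expected-value sketch. The goalie-movement frequencies come from the text above; the conditional scoring rates are illustrative assumptions of mine, not figures from Levitt and Dubner.

```python
# Expected scoring probability by kick direction.
# Goalie-movement frequencies are from the text; the conditional scoring
# probabilities below are illustrative assumptions, not published data.

goalie_moves = {"left": 0.57, "right": 0.41, "center": 0.02}

P_RIGHT_GUESS = 0.50  # assumed scoring rate when the goalie guesses correctly
P_WRONG_GUESS = 0.95  # assumed scoring rate when the goalie commits the wrong way

def expected_goal(kick: str) -> float:
    """Probability of scoring for a given kick direction."""
    return sum(
        freq * (P_RIGHT_GUESS if move == kick else P_WRONG_GUESS)
        for move, freq in goalie_moves.items()
    )

for direction in ("left", "right", "center"):
    print(f"kick {direction}: {expected_goal(direction):.0%}")
# Under these assumptions the center kick wins easily (~94% vs ~69-77%),
# because the goalie almost never stays home.
```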


Although the data show that the center kick has an overwhelming chance of success, only 17% of penalty kicks are aimed at the center. Why does this happen? Every professional soccer player is familiar with these data, yet with the World Cup on the line, the vast majority of kicks go to the right or left. Why? And what can we learn from this?

 

There is an emotional aspect to the soccer story, just as there is to marketing decisions. Let’s assume that the kick is successful – you are a legend and can live out your life in fame. Now let’s say the goalie makes the save. You won’t be a legend, but you won’t have to move either. The goalie made a great save – no shame on you.

 

But assume that you tried a center kick and the goalie wasn’t aware that he was supposed to move and stayed still. You would look like a fool and certainly be shamed in your country. Hence, instead of taking the high-percentage kick, you take the lesser but explainable option.

 

Can we say something similar about the executive who ignores the data rather than taking the best approach to success? Is it safer to follow the herd and to do what your competitors are doing (even if they are gaining market share)?

 

 

The Plural of Anecdote is NOT Data

 

 

I am amused when I hear or watch a presentation in which several “case studies” are used to promote a product or idea. I always ask where the case studies came from. The answer is usually a current customer who is more than willing to expand on the topic of “what the market needs” or the future of the industry.

 

Frankly, listening to a couple of case studies doesn’t move me very much – and I’m always left asking the question, “Is this all you’ve got?”

 

What’s really important is whether one is getting an honest opinion. There are people-pleasers and there are habitually negative respondents. Is the answer an honest one, and, more important, does the opinion reflect what the broader market thinks?

 

I remember my experience developing distribution channels for my small medical device company. Not having a large marketing budget, my job was to visit prospective distributors to set up nationwide sales. Of course, these reps would take me and my product into the hospitals to meet potential buyers (mostly doctors) whom they knew and trusted. Being a doctor gave me a certain advantage in presenting the product’s clinical benefits. I quickly learned that there was no guarantee – the feedback was good if they liked you and bad if they didn’t. The best market feedback was whether they would actually issue a purchase order.

 

One of our products was an inexpensive headlight which we designed to make the user look really cool. Initially we took different versions into the local hospitals where I was known, and we discovered that what the product looked like was just as important as what it did. One such doc really didn’t like me, and while he was telling me that I looked too stupid to have developed such a device, he was admiring himself in the mirror the whole time. I took the lamp from him, thanked him for his time, and was walking out the door when he called after me that he “might” be willing to “try” the lamp in surgery. I walked out. Later he called the rep and ordered five lamps. We signed up the distributor.

 

In working with distributors across the country, I learned a great deal about how they evaluated a product and decided whether or not to carry it. In some cases the product was a stand-alone winner; in others it was a door opener: sales reps would call a doctor and say that they had a really interesting device they wanted to show him, which gave them the opportunity to pitch other “big ticket” products (from other vendors) for which it would have been more difficult to get an appointment. Sometimes I was told that the product was a good one but didn’t fit the distributor’s offerings. Sometimes I was referred to others, and sometimes I had to accept that there wasn’t a market for our product.

 

Although the process was tedious and very time-consuming, it was a wonderful learning experience – one whose lessons I have tried to pass on.

 

In addition, I began to notice that everywhere I went, physicians were telling me that if I could only give them a cardiac output monitor, they would buy a dozen for their hospital. I wondered whether this was a real market (assuming we could develop such a device). So whenever I was asked if I could provide a cardiac output device, I asked what they would pay for it, and whatever the answer, I told them that I had such a device in the car, that I would bring it in, and that they could test it. The condition was that we go to purchasing and get a purchase order conditioned on the physicians verifying the accuracy of the device.

 

The fact that I didn’t have such a device was incidental – I made the offer to at least 30 doctors and got zero purchase orders. I did get a lot of interesting excuses. We dropped all effort to develop such a device. Many years later Hewlett-Packard did bring a cardiac output monitor to market. It was not successful.

 

There is always a risk in defining the marketplace based on a few case interviews.

 

Levitt and Dubner tell the story of a California group that wanted to identify the factors that would encourage residents to conserve energy. A diverse set of California residents was interviewed by phone and asked to rank the following reasons to conserve energy:

 

1) It saves money

2) It protects the environment

3) It benefits society

4) A lot of other people are trying to do it

 

The results were that the environment and society ranked first and second, while “other people doing it” ranked last. In a follow-up study, researchers went house to house and left a placard on each door, each placard carrying one of the four messages. The “protect the environment” card said “you can prevent 262 pounds of greenhouse gases per month,” while the “join your neighbors” card said that “77% of your neighbors are using fans rather than air conditioning.”

 

On follow-up, one message overwhelmed the others in actually changing behavior: “join your neighbors” – the herd mentality. What we can learn from this is that how you structure the questions, and how you maintain respondent neutrality, is essential to getting the data you really need.

 

Maybe we can see an explanation for the Concorde Fallacy in these results.

 

Getting and Trusting the Data that You Need

 

It is clear that one can get different responses depending on how a question is asked. If one is asked to give personal testimony about any tool, OS or vendor, there may be a reluctance to respond truthfully: many people fear that what they really think or feel might make them appear different or wrong. One needs to structure the questionnaire to elicit accurate data by asking emotionally neutral, unbiased questions. Also, questions that ask for too many personal opinions don’t necessarily shed light on the important aspects of the marketplace.

 

Most important, the questionnaire must be structured, and analysis tools provided, such that the issues of greatest importance to the user can be answered. For example, the Annual EMF Survey of Embedded Developers and Managers (typically more than 90 questions covering all aspects of embedded development) is structured to elicit responses regarding, among other things:

 

  • Their design factors (number of developers per project, types of engineers, geographic location, current stage of development)
  • The vertical market of their design
  • The level of development complexity
  • Time to market (from design start to product shipment)
  • Percent of designs completed behind schedule or cancelled
  • Closeness of final design outcomes to pre-design expectations, and testing outcomes
  • The tools they used (development, modeling, Java, Eclipse, and other development tools)
  • Their choice of OS, IDE, communication middleware, and processors
  • How they learn about new products, tools and concepts (web sites, publications, white papers, fellow workers, etc.)
  • The issues of greatest concern to their development efforts (what worries them the most)
  • The factors that most impact their purchasing decisions

 

In addition, EMF, working with Larry Wilson of Wilson Research, offers an Executive Dashboard – a unique tool that allows users to verify and validate (or invalidate) their own suspicions, intuitions and inclinations.

 

  • Vendors can simultaneously compare what their customers are doing, their issues of concern, and what they like and dislike with what their competitors’ customers are reporting
  • Marketing executives can use competitive analysis data for sales promotion and strategic planning
  • Developers beginning a project can compare the experiences of hundreds of fellow developers who undertook similar projects, gaining insights into what worked best (and worst) before making design commitments
  • CFOs and senior managers can see which OSes, tools, microprocessors and processes produced the greatest cost savings at multiple levels of design complexity

 

Most important, the Dashboard allows users to look at the dynamics of the marketplace from THEIR perspective, rather than relying on charts and graphs generated by market researchers who are unfamiliar with the user’s market concerns. The tool allows a company to examine market specifics internally, without having to share its views and strategies publicly.

 

An example of what carefully constructed survey data can do is an examination of the use of Open Source software. Open Source software is the current rage, and the sentiment surrounding it follows this logic: if the software is free, and if a strong ecosystem of developers supports it, then it is better to design with open source than to pay for a commercial OS.

 

The problem with this logic is that there is a difference between the cost of acquisition and the total cost of ownership. If using open source software saves the acquisition cost of a commercial OS, but the associated development cost is substantially higher, then it is not really free. And if it takes longer to get the product to market, there may be an additional cost of lost sales or delayed revenue.

 

To address this question, EMF used the Dashboard to create two cohorts of data: one from users of open source software and one from users of a particular commercial OS. The database was filtered for each cohort. For this analysis we chose ThreadX to represent the commercial OS (Nucleus was also used in a separate calculation, with a similar outcome). The Total Cost of Development can then be calculated from the following inputs (a sketch of the cohort filtering follows the list):

 

  • Number of developers on a project (software and hardware – or software alone if desired)
  • Time it takes from design start to shipment
  • Percent of designs cancelled and the number of months it took before the cancellation
  • Percent of designs completed behind schedule and the average number of months late
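
To make the cohort idea concrete, here is an illustrative sketch of the filtering step. The Dashboard itself is a proprietary tool, so the file name, column names and this pandas workflow are assumptions of mine, not its actual interface; the point is simply that one survey database is split into OS cohorts, and the per-cohort averages feed the cost model.

```python
import pandas as pd

# Hypothetical survey export: one row per design project. The file and
# column names are illustrative assumptions, not the Dashboard's schema.
surveys = pd.read_csv("emf_survey_2013.csv")

# 0/1 flag columns (behind_schedule, cancelled) average out to percentages.
metrics = ["sw_developers", "months_to_ship",
           "behind_schedule", "months_behind",
           "cancelled", "months_to_cancel"]

# Split one database into cohorts, as the text describes.
cohorts = {
    "ThreadX":     surveys[surveys["os"] == "ThreadX"],
    "Open Source": surveys[surveys["os_type"] == "open source"],
}

# Per-cohort averages: these are the inputs that feed Table I below.
for name, cohort in cohorts.items():
    print(name)
    print(cohort[metrics].mean().round(1))
```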

 

The comparative Total Cost of Development was calculated and is presented in Table I below.

 

 

| 2013 EMF Survey Data | All Respondents (Ind. ave.) | ThreadX | Open Source Software | Commercial Linux | Non-Commercial Linux |
|---|---|---|---|---|---|
| Devel time, months (start to ship) | 13.9 | 12.9 | 13.2 | 12.9 | 15.1 |
| % behind schedule | 47.0% | 36.9% | 45.1% | 47.5% | 46.7% |
| Months behind | 3.8 | 3.0 | 3.6 | 3.1 | 4.1 |
| % cancelled | 11.2% | 13.4% | 10.9% | 11.7% | 11.2% |
| Months before cancellation | 4.4 | 4.6 | 4.4 | 3.6 | 4.4 |
| SW developers/project | 14.7 | 6.8 | 19.0 | 17.4 | 22.3 |
| Average developer months/project | 204.3 | 87.7 | 250.8 | 224.5 | 336.7 |
| Developer months lost to schedule | 26.3 | 7.5 | 30.8 | 25.6 | 42.7 |
| Developer months lost to cancellation | 7.2 | 4.2 | 9.1 | 7.3 | 11.0 |
| Total developer months/project | 237.8 | 99.4 | 290.8 | 257.4 | 390.4 |
| At $10,000/developer month: | | | | | |
| Average developer cost/project | $2,043,300 | $877,200 | $2,508,000 | $2,244,600 | $3,367,300 |
| Average cost of delay | $262,542 | $75,276 | $308,484 | $256,215 | $426,978 |
| Average cost of cancellation | $72,442 | $41,915 | $91,124 | $73,289 | $109,894 |
| Total developer cost/project | $2,378,284 | $994,391 | $2,907,608 | $2,574,104 | $3,904,173 |

 

Table I: 2013 Cost of Development Comparisons between Commercial RTOSes, Linux and Open Source Software

 

Table I shows not only the cost advantage of a commercial OS (ThreadX) over Open Source software, but also the advantage of using commercial Linux over free Linux.
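
For readers who want to check the arithmetic behind Table I, here is a minimal sketch of the cost model. The $10,000 fully loaded developer-month rate and the input figures come from the table itself; the function and variable names are mine.

```python
# Total Cost of Development, as computed in Table I. The $10,000 fully
# loaded developer-month rate is from the table; all names are mine.
COST_PER_DEV_MONTH = 10_000

def total_cost_of_development(devs, months_to_ship, pct_behind,
                              months_behind, pct_cancelled, months_to_cancel):
    """Dollar cost per project: base effort plus slip and cancellation losses."""
    base = devs * months_to_ship                      # developer months/project
    slip = pct_behind * months_behind * devs          # months lost to schedule
    cancel = pct_cancelled * months_to_cancel * devs  # months lost to cancellation
    return (base + slip + cancel) * COST_PER_DEV_MONTH

# ThreadX column:     6.8 devs, 12.9 months, 36.9% late by 3.0, 13.4% cancelled at 4.6
print(total_cost_of_development(6.8, 12.9, 0.369, 3.0, 0.134, 4.6))   # ~$994,391
# Open Source column: 19.0 devs, 13.2 months, 45.1% late by 3.6, 10.9% cancelled at 4.4
print(total_cost_of_development(19.0, 13.2, 0.451, 3.6, 0.109, 4.4))  # ~$2,907,608
```

Note that in this model the cost gap comes almost entirely from team size and schedule performance, not from license fees.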

 

It is instructive to point out that a market analyst who merely interviewed users or followed up on case studies would have been told that the future lies with Open Source software. It is the same story as the original “Linux is free” claims of eight years ago.

 

This is the danger inherent in gathering information incorrectly – and not knowing it.

 

Back in 1897 the Indiana legislature considered a bill that would have legislated a new, more convenient value of Pi in order to make it easier for schoolchildren to do geometric calculations. Of course this didn’t pan out – the bill passed the House before the Senate shelved it, or, as the scientific community gleefully pointed out, the “Senators couldn’t get their circles to close.”

 

It seems that the great-grandchildren of those Indiana senators are now employed as market researchers.

Click this link for a PDF copy of the white paper Concorde Fallacy.

 

 

 

 
