
BP’s Big Spill: Lesson for your new products?

Yesterday I heard BP’s COO, Doug Suttles, admit that it could take months to get the Deepwater Horizon disaster under control. The journalist interviewing him jumped on that by asking why BP didn’t have a back-up plan in place before the spill. Of course, there’s a political and legal tsunami coming for BP, so Suttles smartly ignored the question and stayed on his talking points about the extraordinary efforts BP is taking, etc., etc.

Too bad – because it was a really good question. After all, this is far from the first time an undersea drilling rig has had a major leak. In fact, the oil recovery domes that BP is about to deploy are similar in design to those used after rigs were destroyed during Katrina and other recent Gulf hurricanes. Unfortunately, they weren’t ready and waiting to go after the accident, but were only constructed afterward.

There’s a valuable lesson here for anyone involved in new product innovation. Okay, your new products might not have the potential to become ecological disasters on the scale of Chernobyl, the Exxon Valdez, or BP’s current disaster. But don’t you still owe it to your customers and your stakeholders to have a back-up plan and eliminate the risks that you can foresee? To do that, here are several questions that you should consider answering as part of your new product development checklist:

  1. What is the potential for a harmful failure of your product?
  2. What are the potential unintended effects or side-effects of your product?
  3. What are the potential misuses of your product?
  4. What back-up plans should be in place in case of a failure?

There are several tools available to help you in this effort. Many engineers are trained in Failure Mode and Effects Analysis (FMEA). This facilitated process leads you through a series of “what-ifs,” evaluating the likelihood and severity of each potential failure and developing solutions to avoid it. The Theory of Constraints (TOC) thinking processes also offer useful cause-and-effect tools, such as the future reality tree planning tool. It includes negative branch reservations, where undesirable effects (UDEs) are eliminated by steps that trim the negative branches.
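To make the FMEA scoring step concrete, here is a minimal sketch of how the “what-ifs” typically get ranked. The failure modes and 1–10 ratings below are invented for illustration, not taken from any actual BP analysis:

```python
# Minimal FMEA scoring sketch: each failure mode gets 1-10 ratings for
# severity, occurrence, and detection difficulty; the Risk Priority
# Number (RPN) is their product, and the highest RPNs get solved first.
failure_modes = [
    # (description,                  severity, occurrence, detection)
    ("Blowout preventer fails",      10,       3,          8),
    ("Seal degrades over lifetime",   7,       5,          6),
    ("Sensor gives false reading",    4,       6,          3),
]

def rpn(severity, occurrence, detection):
    """Risk Priority Number: the product of the three 1-10 ratings."""
    return severity * occurrence * detection

# Rank the failure modes from highest risk to lowest.
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
for desc, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):3d}  {desc}")
```

The real value is less in the arithmetic than in the facilitated conversation that produces the ratings – which is why, as noted below, outside facilitation and peer review matter so much.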

One caution in using these techniques – they are only as good as the quality of the analysis done. The inventors themselves might be too close to the product to see potential shortcomings, so it’s a good idea to include outside facilitation and expert peer review in the process.

The Simple Bottom Line – Taking these proactive steps to evaluate potential product failures might not be fun, but it can help ensure that your new product doesn’t become the next breaking news story – except maybe because it’s selling so well.

So tell us what you think and when you’ve seen this kind of upfront planning pay off.

Photo Credit - Igor Golubenkov: waterfowl after a 2006 Black Sea spill

12 comments
  • Michael A. Dalton May 5, 2010, 3:15 am

    Thanks for your comments Calvin – I have to wonder though how much would it be worth to Exxon to not have their name inextricably linked to one of the worst ecological disasters of all time. Google Exxon and there are multiple front page mentions of it…ouch!

  • John S. Erickson, Ph.D. May 5, 2010, 3:01 pm

    Thanks very much for this post, Michael!

    I think the failure to plan rigorously for disasters is related to the same reluctance to realistically assess risk that plagues project management and business planning. Anyone who has been through a rigorous risk management course knows that the discipline is often counter-intuitive, as tends to be the case whenever we soberly face what statistics, rather than intuition, tell us.

    Consider Apollo 13: Jim Lovell's crew and the team in Mission Control had not anticipated a disaster of the scale they experienced; the oxygen tank explosion was way off the charts, even for NASA's mission planners. They did, however, plan for and rehearse many, many other scenarios, which gave them experience "working the problem" and enabled them to properly weigh the risks even when presented with a new situation that required urgent action.

    Young electrical engineers typically have difficulty when learning to debug their first designs because they haven't learned to think outside the boundaries of expected behavior — they assume new materials actually work as designed, they assume signals are clean, they assume PCBs were manufactured properly. But soon they figure out how to rank the varied sources of failure in a kind of hierarchy of probabilities, which they work through as they debug.

    These are all "failures of imagination" of varying degrees, because it is the human imagination that allows us to think about and plan for situations that previously had been unthinkable. We need more teams that allow their imaginations to run wild, and who properly evaluate the risk inherent in the scenarios they think up.

  • Jane Witheridge May 6, 2010, 5:44 am

    While innovation requires rigorous testing in the laboratory and field, it is not a replacement for internal and external safety checks and emergency drills.

    Environmental rules and regulations for mining wastes and industrial products pale in comparison to laws devoted to municipal solid waste. Case in point is Ohio where the local Boards of Health are charged with the responsibility to inspect waste management practices, and even where municipal waste landfills are shown to be compliant, the regulatory agents nonetheless routinely inspect facilities twice a week.

    Anyone out there know how frequently rigs are inspected by regulators?

    Surely there are routine safety checks performed by BP at their rigs, and no doubt there are more rigorous internal safety and environmental audit programs that address more fundamental issues. If you follow the trail of paperwork, I would bet that the failure of safety measures at rigs has been identified, but not fully addressed. BP’s management team is likely to be just as concerned as other oil companies about environmental and safety issues while balancing quarterly earnings with ‘the art of the long view’.

    Perhaps what sets them apart is Lady Luck.

    Next time oil prices go down, why not impose a federal tax on gasoline to encourage lower consumption? With the tax monies generated, foster innovation!

  • Chris@Dell May 6, 2010, 4:43 pm

    If we could peel back the layers and go back in time, I think we would learn that the engineers and designers probably identified every risk you can think of. The blowout that caused this problem was a widely known issue – so widely known that they designed special hardware to prevent it. There are also mechanisms in place on other rigs around the world that offer more backups to the main blowout preventer: automated and remote-control devices, explosives…you name it. There are tons of solutions to this mess.

    The real issue is that companies take the road to profit and cut as much cost as possible. Often that means skipping the extra technology that could save lives or the environment in the face of disaster. When you talk about human life and our ecosystem, there should be no price too high. There should be no excuse for executives not listening to the designers and project managers when they publish risk analyses.

  • serg Anishchenko May 6, 2010, 10:04 pm

    I do not believe it is possible at all. If we look at Altshuller’s definition of the ideal technical system – a system that performs its functions but does not exist – it’s clear we are far from that. It is also true that if any components of the system exist, there is some risk of failure, because any system needs continued innovation until it is eliminated while its functions are still performed.

    Of course any invention has to be tested against known failures before deployment. But the risk factor will always attract people to gamble. If that is true, probably the only way to lower the risk is to ensure the decision makers are not only powerful but also pragmatic and scrupulous.

    But honestly, looking at BP’s current disaster, it is hard to believe that a spill was not considered by either BP or the government that issued the license. It is too obvious to miss. So I do not think this is an engineering problem, but rather greed – and we may need to turn to psychoanalysis for a solution.

  • Syb Leijenaar May 10, 2010, 6:46 am

    I agree with Serg Anishchenko. A blowout shut-off valve is a standard safety feature on an oil rig. Not to have a back-up plan for a failing shut-off valve is something that must have been considered consciously. If the oil rig had been in BP's back garden, the North Sea, government regulations would have required a back-up plan. In the much deeper Gulf of Mexico this seems not to be required. According to the lean philosophy, it would be waste for BP to have a back-up plan that is not legally required.

    In my opinion it is the same logic that caused the financial crisis: taking risks can be profitable, and when regulations allow it, risks will be taken. As a society we cannot rely on companies not to gamble. If, after a wrong gamble, the company is gone, so be it. But if their gambling affects society, society has to intervene.

  • Rick May 10, 2010, 7:45 am

    I would like to add a couple of thoughts on the topic overall and then answer the author's question:

    1) I would like to see more of a focus on prevention of this happening again, immediately.

    I would like to see a great deal of attention brought to preventing a recurrence of this type of disaster. Short-term actions – like verifying the proper operation of all similar shut-off valves (especially those from the same valve manufacturer) in use throughout the world – should be publicly reported and immediately completed. To have this happen once is tragic; to let the same thing happen again would be unthinkable. Understanding the true causes of why this occurred and preventing a recurrence should be the first priority.

    2) Ensure the proper incentives are in place for ongoing future risk reduction.

    A well-run company is one that protects itself against risks to its assets. The goal of a well-run company is to increase its future value, and that includes avoiding risks to that value – risks to ongoing operational assets (I am sure the rig was making money for BP) as well as the risks of damages, fines, and lawsuits.

    However, I also know that not every company is well run. Not all companies have excellent engineering, quality, and safety professionals. I would expect, as a logical outcome, that operations like oil rigs that pose risks to the public should have an ongoing system of checks and balances – like a mandatory auditing system to ensure all safety equipment, such as shut-off valves, is in good operational order. A key role of government is to ensure the safety and health of its citizens, and a good system of checks and balances, with the healthy tension of outside auditing, serves the public safety.

    Back to the author's original question (which is excellent) on the role of inventors and the importance of failure mode and effect analysis (FMEA).

    FMEA is a well-understood product development tool, used extensively by automotive and aerospace companies as part of their product design and production validation processes.

    Some public engineering design tool companies sell automated FMEA tools that reduce the effort it takes to do this critical process well.

    In this case, the innovators of this oil rig included a risk reduction device in the design (shut-off valves). What was not done was to ensure the valve was maintained and would operate properly if needed – and that is the key learning here.

    The failure mode risk is only reduced if the risk protection devices actually work over a lifetime. This is typically called out in a Control Plan in manufacturing.

    A great analogy is your home's electrical circuit breakers. It is recommended to cycle them annually to test for proper operation. How many of us actually cycle our circuit breakers annually as recommended? How many of us test our GFCI outlets in our bathrooms like we are supposed to? I know that I do not.

    Some sort of ongoing verification of safety equipment operation is needed in addition to a good FMEA for key safety areas.

    Thanks again to the author for a good question, and I hope that the FMEAs and robust Control Plans for the oil industry are reviewed and updated to ensure ongoing compliance with risk reduction.

  • Syb Leijenaar May 11, 2010, 3:52 am

    Here is a link to a very explanatory inside story about the cause of the disaster from the Huffington Post:


    At the moment of the disaster they were having a party on the oil rig to celebrate seven years of accident-free operation… One lesson: you can get used to the risks, but you should never turn your back on them.

  • rich graham May 21, 2010, 6:59 am

    Clearly BP and the rig manufacturer had not tested for or prepared for all potential disasters – and you'd think oil containment would be #1. All rigs should be tested and retrofitted so this can't happen again. Perhaps the levels of pumping need to be reduced. We should also not be so eager to develop new drill sites when we can't control the existing ones – and can now see the damage they do.

  • matt May 28, 2010, 4:09 pm

    Sure, FMEA is a useful tool, but it is not comprehensive enough to use as the only hazard analysis methodology. FMEA has a bad habit of focusing on single-point failures and missing systemic or common-cause failures. A top-down hazard analysis like Fault Tree Analysis (FTA) should always be used in conjunction with FMEA on complex systems.
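    A minimal sketch of the distinction: a fault tree works top-down from the undesired top event through AND/OR gates, which naturally captures failure combinations that a per-component FMEA can miss. The events and probabilities below are invented purely for illustration:

    ```python
    # Toy fault tree: the top event "uncontrolled spill" requires BOTH the
    # blowout preventer AND the backup measure to fail (AND gate); the
    # preventer itself fails if EITHER its valve sticks OR its control
    # line fails (OR gate). Probabilities are illustrative and assumed
    # independent.
    def and_gate(*probs):
        """All inputs must fail: multiply the probabilities."""
        p = 1.0
        for q in probs:
            p *= q
        return p

    def or_gate(*probs):
        """Any input failing suffices: 1 minus P(none fail)."""
        p_none = 1.0
        for q in probs:
            p_none *= (1.0 - q)
        return 1.0 - p_none

    p_valve_stuck  = 0.01   # hypothetical per-demand failure probability
    p_control_line = 0.02
    p_backup_fails = 0.05

    p_preventer_fails = or_gate(p_valve_stuck, p_control_line)
    p_spill = and_gate(p_preventer_fails, p_backup_fails)
    print(f"P(preventer fails) = {p_preventer_fails:.4f}")
    print(f"P(spill)           = {p_spill:.6f}")
    ```

    Note how a common cause (say, one power supply feeding both the valve and its backup) would show up as a shared basic event under both branches of the AND gate – exactly the kind of dependency an FMEA line item tends to overlook.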

  • christian June 2, 2010, 2:00 am

    I would be interested to know whether BP carried out an FMEA for the Deepwater Horizon platform, and whether it can be reviewed. I would also be interested to know whether the remediation measures for the leak – such as the suction dome, pumping in heavy mud, and injecting rubber balls and tire scraps – were themselves vetted with an FMEA.

  • Mike G June 3, 2010, 6:12 am

    The basic problem is we let greed overcome our ability to assess risk correctly. We have enough technology and know-how to tell us there are risks involved in drilling as deep as BP did here. BP went there without having a proper backup or shutdown plan in place in the event of a disaster. I believe management in this case is criminally negligent, given that lives were lost, our environment will be damaged for years to come, and we still do not have a solution. Oil companies are getting very fat on profits from high gas prices; now they have to be held accountable for their actions.
