Risk Management Tools
There are a great many risk management tools available, in a variety of forms. Most of us are familiar with failure mode and effects analysis (FMEA) for design risk management, or some other form of risk matrix for project risk planning.
But what do we use for real-time decision making? Our management decisions often cause the greatest risks, but we don’t have fancy tools for them.
Fancy tools just aren’t useful for real-time, immediate-need decisions; there isn’t time to build a matrix and debate criticality vs. severity of consequence. We rely on experience and common sense for real-time judgment and decision making. The unfortunate thing is that even the most experienced of us can lose sight of common sense, especially when we’re under pressure.
Pressure = poor decisions
I’ve recently witnessed several of the organizations I work with suffer from risks that became problems because of poor management decisions. Management took unreasonable or desperate risks, and the gambles didn’t pay off.
Under normal circumstances, those of us in leadership roles evaluate the risks of the decisions we make, hedge our bets, and mitigate the risks of our on-demand plans. However, when things aren’t normal, and when we feel pressure to make good things happen, immediately, with the resources already in hand, our good habits for managing risk give way to our desire to meet the expectations placed on us. Instead of rationally looking for the safest path forward, we start desperately looking for any solution that will give us the result we want or need. We stop managing risk and start gambling on hope.
This is the most important thing to understand about real-time risk management: We stop doing it when we shift into a mode of wishful thinking.
Following is an example based on product qualification testing. A business sends a sample of its new product to an independent lab for performance testing and evaluation. The product is a complex system of parts, materials, and subassemblies, and the test is destructive and very expensive.
The sample product fails the test. Obviously, some adjustments to the design are necessary, but there’s pressure to launch the product. Management makes the decision to reuse components and subsystems that appear to be intact after the test to more quickly incorporate changes and return to test readiness.
It’s a brilliant win-win decision, right? The business gets a new test article ready faster and with less expense. When the product passes because of the design adjustments to the components that apparently failed, the business can launch the product. It makes perfect sense. It’s also a perfect example of wishful thinking.
If the test article passes the test, everything is fine. If the test article fails, however, the path forward is substantially more complicated and takes much longer than if the test had been run with all-new parts. If the partially reused test article fails, the business doesn't know whether it failed because the design changes weren't sufficient, or because some of the reused components were compromised during the first destructive test.
That makes another risk decision necessary. Should the business put faith in the design, build and test an all-new test article, and run a third test without any further design changes, or should it invest more time and expense in further design changes that might or might not be necessary?
If the business is acting cautiously, it would make more changes, but it also wouldn’t be in the position to have to make this decision in the first place. However, if the business is operating with a wishful-thinking mindset, it will run a test with the second design and new parts, and hope that it passes. If that test fails as well, then a third design and maybe a fourth becomes necessary. That’s a lot of time and money lost because of risky decisions.
Let’s take the same example a step further. Suppose the independent test facility, as part of its expertise and service, provides a model or calculated estimate of the dynamic energy imparted and subsequently mitigated by the design during the test. The business doesn’t like the numbers it sees, however, and the management team wishes the numbers were different, so it asks another group to model and estimate the energies to provide a second opinion.
That’s a big mistake. In such a situation, it’s all but certain that the two groups will model and estimate the complex dynamics differently, so the numbers will differ. If the second group calculates a lower number, it might be OK. But if it calculates a higher number, the business has a big ethical problem to address.
If the business accepts the second opinion, it calls into doubt the test results and calculations of all previous evaluations provided by the test facility. Does that open the door for invalidating previous declarations of qualification of products tested by the same facility? That could be a huge mess to sort out.
When the business makes decisions like reusing potentially compromised parts, or embracing the second opinion of calculations on qualification performance, it’s clearly operating based on wishful thinking. It’s hoping that the best-case scenario comes true, and it’s not assessing the consequences of any other scenario coming true instead. It’s gambling wildly instead of protectively hedging its bets.
Consider a pair of suggestions
Thus, my first suggestion to bolster your organization’s habit for real-time risk management and to protect against the breakdown of common sense under pressure is to establish the trigger phrase, “wishful thinking.” Make it a code phrase for less-than-rational decisions.
Use it with team members and leaders to trigger an immediate double-check of the decision, before taking action. If everyone knows what it means, then it simplifies the effort to discuss whether the decision being made is the right one, and not just a desperate gamble. When someone suggests that “wishful thinking” might be in play, it should immediately trigger a discussion about the possible pass/fail scenarios.
That leads to my second suggestion. To verify or justify a decision, we need to consider what happens if things go well, but also what happens when things don’t.
When we are operating normally, planning to prevent bad things from happening, we tend to focus on the risks. When we are operating under pressure to make everything go well, however, we tend to focus only on the desired result and don’t consider the risks at all.
We have to train ourselves to recognize when we are operating under normal pressure to prevent something “bad” or when we are operating under undue pressure to produce anything “good.” When we’re focusing on the latter, we should make a habit of engaging someone else, preferably someone experienced and typically rational, to help us answer two critical questions:
1. “What happens if this decision works?”
2. “What happens if this decision doesn’t work; what are the consequences?”
If we’re making a decision between two options, there are four questions to answer:
1. “What happens if we choose option A and it works?”
2. “What happens if we choose option A and it doesn’t work?”
3. “What happens if we choose option B and it works?”
4. “What happens if we choose option B and it doesn’t work?”
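As an illustrative sketch only, the four-question check above can be generated mechanically so that no scenario is skipped. The function name `enumerate_scenarios` is hypothetical, not part of any standard risk tool; the point is simply that pairing every option with every outcome forces the consequences wishful thinking tends to omit onto the table.

```python
from itertools import product

def enumerate_scenarios(options, outcomes=("works", "doesn't work")):
    """Return every (option, outcome) pair for a decision.

    Writing down a consequence for each combination, including the
    unwelcome ones, is what wishful thinking quietly skips.
    """
    return list(product(options, outcomes))

# The two-option decision above yields exactly four questions to answer.
for option, outcome in enumerate_scenarios(["A", "B"]):
    print(f"What happens if we choose option {option} and it {outcome}?")
```

A single-option decision reduces to the two-question form the same way, by passing one option instead of two.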
Under what might be wishful-thinking circumstances, it helps to engage another rational expert to discuss the answers. If we’re blinded by wishful thinking, we don’t always visualize the consequences of things not going the way we hope.
I know that what I’m suggesting isn’t revolutionary. It’s something we’re already accustomed to doing, out of habit. What I hope we recognize is that our habits break down when we feel the pressure to make something work instead of to protect against those things that might not. When we recognize that our normal risk management habits might be compromised, a simple formula or process, such as answering a few basic questions, conscientiously, can supplement our normal common sense.
This week, set your thoughts toward identifying wishful thinking. If you observe someone making a wishful-thinking decision, politely challenge them by asking, “What will happen if that decision doesn’t lead to the result you want?” See if a polite discussion doesn’t help bring everyone’s thoughts back to rational risk management. Make it a habit for yourself and your organizational peers and cohorts. Stop the desperate gambles driven by wishful thinking.
Article Reference: Quality Digest