Book Review: The Failure of Risk Management


Douglas W. Hubbard, The Failure of Risk Management: Why It's Broken and How to Fix It. Hoboken, N.J.: Wiley, 2009. Ryerson Library Call Number: HD61 .H76 2009, 7th Floor.

In June 2012 I attended the annual conference of the Canadian Society for Civil Engineering for the first time. A challenge at any conference is that there are always so many interesting presentations that you do not want to sit through every talk in one room; rather, you shuttle around. This conference was no exception for me: I hopped across themes from structures and earthquake engineering through transportation to construction and project management. One title that really attracted me read something like "integration of value engineering and risk analysis". The presenter was from industry, with a manager's title as well as a PhD from a renowned university in western Canada. Later I found out that his PhD supervisor was a big name in the field of project risk management.

I was very enthusiastic about this presentation because I wondered what one could really add to this subject. After all, according to Steve Holmes, a senior engineer at the Ontario Ministry of Transportation and an influential proponent of value engineering in Canada, from whom I received my basic training in value analysis, risk analysis must be an essential element of any value analysis (VA).

Steve is a truly dedicated person who will grasp any opportunity and means to convince people to study and employ VA. After several telephone and email exchanges, I met Steve in person for the first time at the CSVA conference last October. After the conference, I wrote to Steve that VA was indeed a very structured approach to innovation. However, I also politely suggested that much research would be needed before VA could gain wider acceptance in planning, engineering design and project evaluation. One area in need of immediate improvement is the risk assessment method that the structured VA currently adopts. I said this because I felt that VA was just a repackaged and much-simplified version of tools that have long been practiced in systems engineering; that field already offers many proven techniques for functional analysis, performance measurement and risk analysis. Surprisingly, Steve gave me an emotional reply, seemingly suggesting that there was nothing wrong with VA and that it was the university professors who were to blame for its lack of acceptance, because we did not teach students VA in our curricula. I admitted to him that I did not know the details of value analysis in transportation engineering. Steve kindly arranged a VA101 course for me, and I attended it in November. After the one-day training workshop at Edwards Gardens in Toronto, my view on value analysis did not change. Rather, I was more convinced that the scoring approach to risk analysis and the somewhat arbitrary definition of 'value' were probably the main reasons it has not won wider acceptance in industry.

So I sat through the CSCE conference presentation, hoping to see some sophisticated risk analysis method proposed for value analysis. To my disappointment, after a grandiose opening the presenter swiftly fell back on the routine scoring technique for evaluating "likelihood" and "consequence". No surprise, he also multiplied the two scores to obtain the risk number.

Right after he turned to the 'thank you' slide, I raised my hand. Without any rhetoric, I asked: "How can you multiply two ordinal numbers to obtain a cardinal one for comparison?" After a puzzled pause, the presenter replied, "What do you mean?" I had probably lost my patience by then, as I do not recall explaining my objection clearly. However, I do remember telling him and the audience that, in his calculation, 2 times 2 is not necessarily 4. A little frustrated by my comments, the presenter must have been thinking, I could tell, where is this guy coming from? To lend his defense some strength, he told me that Professor ABC (the big name I mentioned at the beginning) also used the approach. Alas, the so-called 'best practice'!
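For readers wondering what is wrong with multiplying scores, here is a minimal Python sketch of my own (the mappings and numbers are hypothetical illustrations, not taken from the talk or the book). The points on a 5-point matrix are ordinal labels; once you attach plausible real-world values to them, two risks with identical score products can have expected losses that differ by more than an order of magnitude:

```python
# Hypothetical mappings from ordinal 5-point scores to underlying values.
# Ordinal labels preserve order only; the steps between them are not equal.
likelihood = {1: 0.01, 2: 0.05, 3: 0.10, 4: 0.40, 5: 0.90}   # probabilities
consequence = {1: 1e4, 2: 5e4, 3: 1e5, 4: 1e6, 5: 1e7}       # dollar losses

# Two risks given as (likelihood score, consequence score).
risks = {"A": (2, 5), "B": (5, 2)}

for name, (l, c) in risks.items():
    score = l * c                                   # the matrix "risk number"
    expected_loss = likelihood[l] * consequence[c]  # the actual expected loss
    print(f"Risk {name}: score = {score}, expected loss = ${expected_loss:,.0f}")

# Both risks score 10, yet A's expected loss ($500,000) is more than ten
# times B's ($45,000): the score product cannot tell them apart.
```

Any monotone relabeling of the scales would change the products, and can even reverse the rankings, which is precisely why a product of ordinal numbers is not a cardinal measure of risk.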

So when I got hold of Hubbard's Failure of Risk Management, I could not help finishing it in one straight reading. The author reveals all the tricks that 'risk management professionals' use to sell 'snake oil' (pp. 71-74). For example, when reading "Sell 'Structured' Approaches", I thought: yes, the value analysis workshop was structured to take five days, as if the whole facilitation procedure had been inscribed in stone. In "Sell What Feels Right," Mr. Hubbard recounts the cold joke that 'if you call it a score, it will sound more like golf, and it will be more fun for them.' Undoubtedly, Mr. Hubbard would smile if he read Steve's response and the presenter's reaction to my questions. Throughout the whole day of reading this book, I kept nodding and saying to myself, "that is exactly what I wanted to say but was not bold enough to speak out in many situations."

The book starts by asking three basic questions: 1) do any of the popular risk management methods work? 2) would anyone in the organization even know if they didn't work? 3) if they didn't work, what would be the consequences? The author states that the methods are fundamentally flawed and that risk managers suffer from, or enjoy, a placebo effect. The consequence? A common-mode failure; in fact, "a weak risk management approach is effectively the biggest risk in the organization" (p. 6). The reasons for this failure include

  • the failure to measure and validate methods as a whole or in part,
  • the use of components that are known not to work, and
  • the lack of use of components that are known to work.

Clearly, among the four key elements of risk management (identification, modeling and analysis, mitigation, and review), modeling and analysis is the author's focus of examination.

The author compares the traits of four major professions that use risk management: actuaries, war quants (operations researchers), economists (financial engineers), and management consultants. This comparison is much clearer and more conclusive than a recent publication by Samson et al. (2009), who, after reviewing the many different views, added one more confusion to the community.

Mr. Hubbard identifies seven challenges for risk management:

  1. confusion regarding the concept of risk
  2. completely avoidable human errors in subjective judgments of risk
  3. entirely ineffectual but popular subjective scoring methods
  4. misconceptions that block the use of better, existing methods
  5. recurring errors in even the most sophisticated models
  6. institutional factors, and
  7. unproductive incentive structures

With an engineering background, I was particularly satisfied with the author's analysis of the flaws in the popular scoring methods. I also greatly admired his way of dismantling all sorts of accusations against sophisticated, quantitative methods of risk modeling. The summary of Kahneman and Tversky's work on judgment and biases is very concise. I also agree with the author's stand on the subjective interpretation of probability. Of course, the common errors in quantitative models discussed in Chapter 9 were not new to me. One area that I felt could be strengthened is the probability calibration of inputs for Monte Carlo simulation.
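To give a flavor of what calibrated inputs to a Monte Carlo simulation look like, here is a minimal Python sketch of my own (the numbers and the lognormal choice are illustrative assumptions, not an example from the book). A calibrated expert supplies a 90% confidence interval for the cost of a risk event; the interval is converted into a lognormal distribution and combined with an event probability to simulate the loss distribution:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of Monte Carlo trials

# Hypothetical calibrated estimate: an expert's 90% confidence interval
# for the cost of a risk event, here $0.5M to $4M.
lower, upper = 0.5e6, 4.0e6

# Fit a lognormal whose 5th and 95th percentiles match the interval
# (1.645 is the standard normal z-score at the 95th percentile).
mu = (np.log(lower) + np.log(upper)) / 2
sigma = (np.log(upper) - np.log(lower)) / (2 * 1.645)
cost_if_event = rng.lognormal(mu, sigma, n)

# Hypothetical probability that the event occurs at all.
p_event = 0.25
losses = np.where(rng.random(n) < p_event, cost_if_event, 0.0)

print(f"Expected loss:  ${losses.mean():,.0f}")
print(f"P(loss > $2M):  {np.mean(losses > 2e6):.1%}")
```

Unlike a 1-to-5 score, the output is expressed in dollars and probabilities, so it can be checked against actual outcomes over time, which is exactly the kind of validation the author calls for.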

The thesis of the book is to advocate scientific, quantitative risk modeling in risk management, rather than the qualitative (scoring) approaches so often touted as 'best practices'. I encourage every student in my Risk & Reliability for Engineers class to read this book.
