Sunday, 8 May 2016

Delusion and Deception in Project Selection

Two Basic Methods for Prevention and Avoidance




It is well known that the future is uncertain, where uncertainty means an unmeasurable or truly unknown outcome, often a unique one. This can be clearly seen in large infrastructure projects, which bring the issues around project selection into sharp focus. A remarkable number of these projects are unsuccessful, exceeding their time and cost estimates, or inefficient, with returns and/or benefits well below forecasts.

Major infrastructure projects are typically selected under conditions of uncertainty, not risk: risk is identifiable and measurable, while uncertainty is not. There are three main reasons:

  1. Costs and benefits are many years into the future, and the estimates depend on the assumptions and type of model used;
  2. These projects are often large enough to change their economic environment, hence generate unintended consequences, with the Oresund Bridge between Sweden and Denmark the prime example; and
  3. Stakeholder action creates a dynamic context, with the possibility of escalation of commitment driven by earlier decisions.

In their 2009 paper ‘Delusion and deception in large infrastructure projects’ Flyvbjerg, Garbuio and Lovallo argued that project planners are often far too optimistic in their estimates (delusion) or ‘strategically misrepresent’ their project to approving and funding organisations (deception). Clearly, one path to better project selection that would address both problems is better information about the proposed project.

One source of such information is the performance of previous similar projects. Although it seems obvious, this has only recently become common practice among some experienced private sector clients when considering major projects, as the example of IPA shows.

Independent Project Analysis (IPA) was established by Edward Merrow in 1987, after a stint at RAND where he did the first published study of megaprojects, those costing over US$1 billion. The company provides a project research capability for heavy industry and the process and extraction industries. By 2011 its database held 318 megaprojects, out of about 11,000 projects in total, from industries such as oil and gas, petroleum, minerals and metals, chemicals, power, LNG and pipelines. In his book on megaprojects Merrow found that the best examples of project-definition work reduce both project timelines and costs by roughly 20 percent.

Depending on the project, between 2,000 and 5,000 data points are collected over the initiation, development and delivery stages. From this database companies can compare their project with other, similar projects, across a wide range of performance indicators. The data gives estimates on approval, design and documentation, and delivery times for the type of project, and allows for factors like location, access and complexity in costs.
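As a rough illustration of this kind of benchmarking, the sketch below compares a schedule estimate against peer durations. The numbers are invented and the simple percentile-rank measure is an assumption for illustration only, not IPA’s actual method or data.

```python
# Hypothetical peer data: delivery durations (months) for similar past
# projects, standing in for a benchmarking database (values are invented).
peer_durations = [38, 41, 44, 47, 49, 52, 55, 58, 63, 71]

def percentile_rank(value, peers):
    """Fraction of peer projects that were delivered faster than `value`."""
    return sum(p < value for p in peers) / len(peers)

our_estimate = 45  # months, the duration proposed for the new project
rank = percentile_rank(our_estimate, peer_durations)
print(f"Proposed schedule is faster than {1 - rank:.0%} of comparable projects")
```

The same comparison can be run for cost, approval and documentation times, giving the project team an external view of how realistic their own figures are.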

In his 2011 book Industrial Megaprojects Merrow advocates a process he calls front-end loading, the “period prior to sanction of the project”. There are three stages. In summary, the first evaluates the business case, the second is scope selection and development, and the third is detailed design. His argument is that there need to be gates between these stages that prevent less viable projects from getting to authorisation. If there is a problem in the private sector with project selection, even with the managerial structures, capital budgeting and corporate finance constraints found in profit-driven companies, then the problem in the public sector can be reasonably expected to be much worse.

A significant reason for poor decisions on project selection is unwarranted optimism about outcomes, called the planning fallacy by Kahneman and Tversky: the tendency to underestimate the time needed for a task, even with experience of similar tasks over-running. Thus, we have a general tendency to underestimate the time, costs, and risks of future actions and to overestimate the benefits of those same actions.

In their ‘Delusion and deception’ paper Flyvbjerg, Garbuio and Lovallo proposed a solution to optimism bias they called Reference Class Forecasting (RCF). This works in the same way as the IPA database, but their database was mainly composed of public infrastructure projects, many in the transport sector. RCF involves three steps:

  1. Identification of a relevant reference class of past, similar projects;
  2. Establishing a probability distribution for the reference class;   
  3. Comparing the specific project with the reference class distribution.
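The three steps above can be sketched in code. The overrun ratios below are invented, and the simple empirical-quantile uplift is an assumption in the spirit of RCF, not the calibrated uplift tables Flyvbjerg derived for practice.

```python
# Step 1: the reference class -- cost overrun ratios (actual / estimated)
# observed on past, similar projects (values are illustrative only).
reference_class = [0.95, 1.05, 1.10, 1.20, 1.25, 1.40, 1.45, 1.60, 1.80, 2.10]

# Step 2: use the empirical distribution of overruns to find the uplift
# needed so that the chance of still exceeding the adjusted budget is
# at most `risk` (an acceptable level of residual overrun risk).
def required_uplift(overruns, risk=0.2):
    ordered = sorted(overruns)
    # take the (1 - risk) empirical quantile of historical overruns
    idx = min(len(ordered) - 1, int((1 - risk) * len(ordered)))
    return ordered[idx]

# Step 3: compare the specific project with the reference class by
# applying the uplift to the planner's own (optimistic) estimate.
base_estimate = 500.0  # $m, hypothetical
uplift = required_uplift(reference_class, risk=0.2)
print(f"Uplift factor: {uplift:.2f}, adjusted budget: ${base_estimate * uplift:.0f}m")
```

The point of the exercise is that the adjustment comes from the outside view, the record of comparable projects, rather than from the project team’s own inside view.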

In decision-making under uncertainty, errors of judgment are often systematic and predictable rather than random, manifesting bias rather than confusion. RCF may limit bias simply by imposing a procedure and by gathering relevant data on a panel of comparable projects for use in the comparisons. RCF may also prevent excessively large projects from being preferred over more welfare-efficient ones when their political benefits are large.

To deliver better results in on-time and on-budget delivery, Merrow argues project developers or sponsors should spend 3 to 5 percent of the cost of the project on early-stage engineering and design. This is because the design process will often raise challenges that need to be resolved before construction starts, saving time and money.
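Merrow’s 3-to-5-percent guideline is simple arithmetic; a minimal sketch, with a hypothetical US$2 billion project as the input:

```python
def front_end_budget(total_cost, low=0.03, high=0.05):
    """Range Merrow recommends spending on early engineering and design."""
    return total_cost * low, total_cost * high

lo, hi = front_end_budget(2_000)  # total cost in $m, i.e. a $2bn project
print(f"Front-end loading budget: ${lo:.0f}m to ${hi:.0f}m")
```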

If more realistic, and therefore more accurate, time and cost estimates were given for major infrastructure projects before approval, and during the design and development stages, there would be fewer recriminations about project performance and less incentive to find scapegoats when a project finishes over budget and behind schedule. There would be fewer of the common accusations of poor productivity, management failures or poor planning, lessening the acrimony that often surrounds major projects in their latter stages. It would also encourage more transparency about a project’s performance, in both the delivery and operational stages, particularly from public officials.

Merrow argues the owner’s job is to select the right project and the contractor’s job is to deliver the project as specified, on time and on budget. In his view contractual relationships are more tactics than strategy, and cannot address any fundamental weaknesses in the client’s management of the project; in particular, the client ultimately has to own the design. This crucial point is now widely recognised by the private sector clients and owners of the large engineering projects that Merrow studies.

For example, both Shell and BP established project academies in 2005 because they understood that significant risk transfer from clients to contractors is structurally impossible on the oil and gas projects they undertake. In the public sector, the UK Cabinet Office started a Major Projects Leadership Academy with the aim of reducing reliance on consultants, and in Australia a similar Leadership Academy was announced in 2013, and six MBA-type courses on procurement developed with government departments are now running at Australian universities.

A great deal is already known about the requirements for large infrastructure to be successful, based on the performance of projects over the last two decades and the many studies and reports that have been done on those projects. Better use of data from previous projects in the evaluation and definition stages of new projects would be a transformative innovation in procurement management, and a more empirical approach by clients in collecting and using data is necessary if better decisions are to be made.
 
Flyvbjerg, B., Garbuio, M. and Lovallo, D. 2009. Delusion and deception in large infrastructure projects: Two models for explaining and preventing executive disaster. California Management Review.


Kahneman, D. and Tversky, A. 1979. Intuitive prediction: biases and corrective procedures. TIMS Studies in Management Science.


Merrow, E.W. 2011. Industrial Megaprojects: Concepts, Strategies and Practices for Success. Hoboken, N.J.: Wiley.