Saturday, 20 September 2025

Recent Research on AI Effects on Employment and Work

 AI scenarios range from apocalypse to slow and gradual adoption

 




There is a great deal of discussion about the effects of artificial intelligence (AI) on employment. Across that research there is a wide range of views on the timing and extent of those effects, from a looming jobs apocalypse with high unemployment to slow and gradual adoption with low unemployment. The picture is clouded by tech firms’ self-serving promotion of AI solutions and by the attention given to the possibility of an out-of-control, unaligned AI killing everyone, or turning them into paperclips.

 

A few examples will suffice. According to Goldman Sachs Research’s August 2025 report How Will AI Affect the Global Workforce?, AI is unlikely to lead to a large increase in unemployment ‘because technological change tends to boost demand for workers in new occupations’, creates new jobs, and increases output and demand. They estimate unemployment will increase by half a percentage point during the AI transition period as displaced workers seek new positions and that, if current AI use cases were expanded across the economy, 2.5% of US employment would be at risk of related job loss. So far, they believe the ‘vast majority of companies have not incorporated AI into regular workflows and the low adoption rate is limiting labour market effects.’ 

 

Massachusetts Institute of Technology professor Daron Acemoglu, who has written extensively on technology and work, believes only 5% of all jobs will be taken over, or at least heavily aided, by AI over the next decade. However, the World Economic Forum’s Future of Jobs Report 2025 estimates that, by 2030, new job creation and job displacement will be a ‘combined total of 22% of today’s total (formal) jobs.’ Their jobs outlook is based on the macrotrends of technology, economic uncertainty, demographics, and the energy transition, of which ‘AI and information processing technologies are expected to have the biggest impact – with 86% of respondents expecting these technologies to transform their business by 2030.’

 

On the threat of extinction, in April 2025 the AI Futures Project, a credible non-profit research group, released their AI 2027 scenario, in which AI systems ‘become good enough to dramatically accelerate their research’ and start building their own superintelligent AI systems. Without humans understanding what is happening, the system develops misaligned goals: ‘Previous AIs would lie to humans, but they weren’t systematically plotting to gain power over the humans.’ The superintelligent AI manipulates humans and rapidly industrialises by manufacturing robots: ‘Once a sufficient number of robots have been built, the AI releases a bioweapon, killing all humans. Then, it continues the industrialization, and launches Von Neumann probes to colonize space.’ 

 

In September the US think tank RAND published a research paper on the potential proliferation of robotic embodiments of superintelligent AI, Averting a Robot Catastrophe, arguing for ‘the urgent need to proactively address this issue now rather than waiting until the technologies are fully deployed to ensure responsible governance and risk management.’ Another 2025 RAND paper, The Extinction Risk From AI, concluded ‘Although we could not show in any of our scenarios that AI could definitely create an extinction threat to humanity, we could not rule out the possibility… resources dedicated to extinction risk mitigation are most useful if they also contribute to mitigating global catastrophic risks and improving AI safety in general.’

 

While AI is developing rapidly, and there are examples of AI deception from Anthropic and OpenAI, a cautionary tale is US-based Builder.ai. The company claimed its product, an AI bot called Natasha, could help customers build software six times faster and 70% cheaper than humans. In 2023 it was ranked third, behind OpenAI and Google’s DeepMind, in tech industry magazine Fast Company’s innovative AI companies list, and was valued at $US1.5 billion. Builder.ai collapsed in May: ‘Alongside old-fashioned start-up dishonesty with dramatically overstating its revenue, allegations arose that the work of its Natasha neural network was actually the work of 700 human programmers in India.’ This is reminiscent of Elon Musk’s Optimus robots being remote-controlled in a 2024 demonstration. 

 

Although it is still too early to say what the effect of AI on employment will be, there has been some useful recent research on AI, jobs and work, particularly in the US. This post surveys some of the research released over the last few months. 

 

Australian Research

 

The Productivity Commission’s August 2025 Harnessing Data and Digital Technology report said: ‘The economic potential of AI is clear, and we are still in the early stages of its development and adoption… multifactor productivity gains above 2.3% are likely over the next decade, though there is considerable uncertainty. This would translate into about 4.3% labour productivity growth over the same period.’ The Commission argued that data underpins growth and value in the digital economy, and that a ‘mature data-sharing regime could add up to $10 billion to Australia’s annual economic output. Experience shows that we need a flexible approach to facilitating data access across the economy.’ In another report for the Economic Reform Roundtable, on skills and employment, the Commission recommended improving education and training systems.
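
The arithmetic linking those two numbers can be made explicit with a hedged sketch, assuming the standard long-run growth-accounting result that labour productivity growth equals MFP growth divided by the labour income share (capital deepening adjusts to hold the capital-to-output ratio constant). The 0.53 labour share is an illustrative assumption, not a figure from the report.

```python
# A hedged growth-accounting sketch, not the Productivity Commission's method:
# in the long run, labour productivity growth ~= MFP growth / labour share,
# because capital deepening adjusts to keep the capital-output ratio constant.
mfp_growth = 2.3      # % over the decade, per the Commission's report
labour_share = 0.53   # assumed labour share of income (illustrative)

labour_productivity_growth = mfp_growth / labour_share
print(f"{labour_productivity_growth:.1f}%")  # ~4.3%, matching the report
```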

 

Grattan Institute researchers Trent Wiltshire and Hui-Ling Chan’s September 2025 article AI is Coming: Prepare for the Worst argues that ‘in the event of significant disruption’ the federal government may need to reconsider Australia’s safety net and retraining systems, with better preparation and scenario planning for the possibility that AI will cause mass unemployment. They suggest changes to income support should be considered, such as lifetime learning accounts, unemployment insurance (a time-limited payment linked to a person’s previous income, widely used in Europe), and easier access to superannuation when unemployed. They also point to Denmark’s ‘flexicurity’ system, where it is easy to retrench workers but there is a safety net that includes up to two years of unemployment insurance, plus education, retraining, and support programs. About 25% of Denmark’s private industry workers change jobs each year, and 67% of workers are union members. 

 

A June 2025 PwC AI jobs barometer ‘looked at close to a billion job adverts from 24 countries and 80 sectors to understand how the demand for workers is shifting in relation to AI adoption. The global study found that AI is making workers more valuable, not less. Industries most able to use AI have seen productivity growth nearly quadruple since 2022 and are seeing three times higher growth in revenue generated per employee. Jobs numbers and wages are also growing in virtually every AI-exposed occupation, with AI-skilled workers commanding a 56% wage premium, on average.’

 

In Australia, the PwC survey found the effect of AI was a surge in demand for AI skills in the overall jobs market, with postings nearly doubling from 12,000 in 2020 to 23,000 in 2021. Postings have held at around 23,000 a year since, although this was only 1.8% of total job postings in 2024. As Figure 1 below from the report shows, Finance and Insurance was the leading industry, but there has been rapid growth in Construction industry AI job postings.

 

Figure 1. Job postings

 

Source: PwC

 

 

RBA Survey of Australian Businesses

 

Reserve Bank Governor Michele Bullock gave a speech on 3 September which included results of an RBA survey of businesses about AI, robotics and technology adoption. Although not about employment, the speech had the figure below from the survey, with the striking finding that 80% of firms expect to be using AI in the next three years, up from 25% today. This is probably due to the RBA survey population being skewed toward larger firms. 

 

Figure 2. Australian businesses’ technology adoption

Source: RBA

 

In her speech she said: ‘Firms mainly expect these tools to augment labour, automating repetitive tasks and redesigning the composition of roles. Firms thought they may initially see an increase in their headcount as they design and embed new technologies, though this may be followed by a small decline as they mature in their adoption of new technologies. Lower skilled roles may decline, while demand for higher skilled roles is expected to grow, continuing (and perhaps even fast-tracking) a decades-long trend away from routine manual work. While AI may eventually automate even some higher skilled tasks, firms tell us that it is too early to fully understand what this means for their workforce beyond the next few years. Some roles may change and the demand for different or new skills may in turn increase.’

 

US Research

 

An August 2025 paper by Eckhardt and Goldschlag called AI and Jobs: The Final Word (Until the Next One), found no detectable effect of AI on recent US employment trends using five measures of job exposure to AI. For three of their five measures there was no detectable difference in unemployment between the more exposed and the less exposed workers, and for the other two only a small difference of 0.2 or 0.3 of a percentage point. They say ‘One pattern is clear in the data: highly exposed workers are doing better in the labor market than less exposed workers. Workers more exposed to AI are better paid, more likely to have Bachelor’s or graduate degrees, and less likely to be unemployed than less exposed workers.’

 

Their conclusion was that AI isn’t taking jobs yet, or the effect is very small. Figure 3 has their unemployment rate for workers with varying degrees of predicted AI exposure (1 is the least exposed, 5 is the most exposed), where there is no correlation between AI exposure and unemployment.
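
To make the comparison concrete, here is a minimal sketch of the exercise, assuming toy worker-level data; the column names and figures are illustrative, not from their paper.

```python
import pandas as pd

# Toy worker-level data: an AI exposure score (0-1) and employment status.
workers = pd.DataFrame({
    "exposure":   [0.10, 0.20, 0.35, 0.40, 0.50, 0.55, 0.70, 0.80, 0.90, 0.95],
    "unemployed": [1, 0, 0, 0, 1, 0, 0, 0, 0, 0],
})

# Assign exposure quintiles (1 = least exposed, 5 = most exposed).
workers["quintile"] = pd.qcut(workers["exposure"], 5, labels=[1, 2, 3, 4, 5])

# Unemployment rate by quintile: a flat profile, as in Figure 3, would mean
# no detectable relationship between AI exposure and unemployment.
print(workers.groupby("quintile", observed=True)["unemployed"].mean())
```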

 

Figure 3. Unemployment rate by AI exposure quintile

 

Source: Eckhardt and Goldschlag 2025.

 

Their Appendix used US Census Bureau data, which in August showed 9% of surveyed firms using AI, up from 5% a year and a half earlier, although 27% of firms in the information sector said they were using AI. The Appendix had the figure below, showing Construction with one of the lowest levels of AI usage.

 

Figure 4. Percent of businesses using AI

 

Source: Eckhardt and Goldschlag 2025.

 

 

The next word on AI and jobs came in a paper from the Stanford Digital Economy Lab by Brynjolfsson, Chandar, and Chen, Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence. This paper uses two measures of AI exposure, then compares recent employment trends for more and less exposed workers. Their conclusion is radically different to that of Eckhardt and Goldschlag. The abstract explains:

 

‘We find that since the widespread adoption of generative AI, early-career workers (ages 22-25) in the most AI-exposed occupations have experienced a 13 percent relative decline in employment even after controlling for firm-level shocks. In contrast, employment for workers in less exposed fields and more experienced workers in the same occupations has remained stable or continued to grow. We also find that adjustments occur primarily through employment rather than compensation. Furthermore, employment declines are concentrated in occupations where AI is more likely to automate, rather than augment, human labor. Our results are robust to alternative explanations, such as excluding technology-related firms and excluding occupations amenable to remote work. These six facts provide early, large-scale evidence consistent with the hypothesis that the AI revolution is beginning to have a significant and disproportionate impact on entry-level workers in the American labor market.’

 

Brynjolfsson et al. found ‘substantial declines in employment for early-career workers (ages 22-25) in occupations most exposed to AI’, such as software developers and customer service representatives. In jobs less exposed to AI, employment growth for young workers was comparable to older workers. Declining employment in AI-exposed jobs is driving ‘tepid overall employment growth for 22- to 25-year-olds as employment for older workers continues to grow.’ It should be noted that Eckhardt and Goldschlag’s use of three other metrics gives a broader perspective. Figure 5 shows growth in employment between October 2022 and July 2025 by age and GPT-4-based AI exposure, where quintile 1 is the least exposed and quintile 5 the most exposed [1]. 

 

Figure 5. AI exposure group growth in employment.

 

Source: Brynjolfsson et al. 2025

 

 

Is AI a Complement or Substitute?

 

The different conclusions from this US research throw into sharp relief what is, at this point, the core issue: is AI a substitute for workers, particularly skilled workers, or a complement? In other words, is AI replacing workers in some occupations, or is it being used as a tool to enhance productivity and performance? 

 

If AI is a substitute for workers, wages and employment fall, and because AI substitutes for human cognition, businesses will shed expensive workers whose skill or experience premium is no longer valuable, probably older workers. On the other hand, if AI is a complement, wages and employment increase, and businesses will recruit workers with skill or experience, again probably older workers.

 

What Figure 5 above shows is that young workers saw reductions in AI-exposed jobs, but employment growth for the other age groups was positive. In particular, employment for older workers in AI-exposed jobs increased. Eckhardt and Goldschlag also found workers exposed to AI are doing better in the labour market than less exposed workers. This strongly suggests AI is a complement, not a substitute. AI complements human skills and augments the productivity of workers with the necessary skills and experience, so these people with tacit knowledge not available to an AI are the ones firms are employing. 

 

What About Construction?

 

The US Bureau of Labor Statistics (BLS) 2025 Occupational Outlook Handbook covers 600 occupations [2]. From it, the BLS Employment Projections program develops estimates of the US labour market 10 years ahead, on the assumption that labour productivity and technological progress will be in line with historical experience, which shows ‘technology impacts occupations, but that these changes tend to be gradual, not sudden. Occupations involve complex combinations of tasks, and even when technology advances rapidly, it can take time for employers and workers to figure out how to incorporate new technology.’ Because technological developments over the next 10 years are ‘impossible to predict with precision,’ new projections are released annually. Figure 6 has Construction employment growing at 4.4% to 2034.

 

Figure 6. US labour market 

 

Source: BLS Employment Projections, August 2025. 

 

In the BLS sector-specific projections for the Infrastructure sector, by 2030 ‘new job roles are expected to be created for Big Data Specialists and Organizational Development Specialists... Twenty-seven percent of employees in the sector are anticipated to be able to upskill in their current roles, with an additional 17% projected to be reskilled and redeployed. Almost 70% of respondents expect reskilling and upskilling to improve talent retention and enhance competitiveness and productivity of their company, with 50% planning to increase talent mobility through training programmes.’

 

An article in the February 2025 BLS Monthly Labor Review, Incorporating AI impacts in BLS employment projections: occupational case studies, argued ‘GenAI can support many tasks involved in architecture and engineering occupations, potentially increasing worker productivity.’ The technical expertise these occupations require and existing regulatory requirements create uncertainty about the extent and employment impact of AI adoption, while underlying demand is expected to remain strong, resulting in projected US employment growth of 6.8% for architects and engineers.

 

AI Development and Diffusion

 

An April 2025 paper by Arvind Narayanan and Sayash Kapoor from Princeton University’s Center for Information Technology Policy was called AI as Normal Technology: An alternative to the vision of AI as a potential superintelligence. They view ‘AI as a tool that we can and should remain in control of,’ and argue this does not require drastic policy interventions. They do not think viewing AI as a humanlike intelligence is ‘currently accurate or useful for understanding its societal impacts.’ 

 

Their lengthy and sometimes digressive paper is based on the idea of a normal technology, where sudden economic impacts are implausible because ‘Innovation and diffusion happen in a feedback loop… With past general-purpose technologies such as electricity, computers, and the internet, the respective feedback loops unfolded over several decades, and we should expect the same to happen with AI.’ They dismiss catastrophic AI because it ‘relies on dubious assumptions about how technology is deployed in the real world. Long before a system would be granted access to consequential decisions, it would need to demonstrate reliable performance in less critical contexts.’

 

Narayanan and Kapoor’s ‘AI as normal technology is a worldview that stands in contrast to the worldview of AI as impending superintelligence.’ They don’t believe progress in generative AI is as fast as claimed, nor that AI diffusion will be much different to electricity or computers, because diffusion ‘occurs over decades, not years.’ This is very different to what they call the utopian and dystopian worldviews of AI, both based on the idea of superintelligence but with opposite consequences. Because the idea of imminent take-off superintelligence is so prevalent in the discussion about AI, as either the solution to many problems or as an extinction event, the suggestion that AI might just be the latest in a long series of powerful general-purpose technologies that develops over time in a historically familiar way is both radical and unusual.

 

There is support for this slow adoption and diffusion view from the McKinsey 2025 State of AI report, which is somewhat ironic as McKinsey is one of the biggest boosters of corporate use of AI. Published in March 2025 but based on a mid-2024 survey sample of 1,491, it found 75% of respondents using AI in at least one business function but only 1% ‘described their gen AI rollouts as mature.’ The survey showed a quarter of large organisations and 12% of smaller ones had an AI roadmap, 52% of large organisations but only 23% of small ones had a dedicated team to drive AI use, and only 28% and 23% respectively had effectively embedded gen AI into business processes. In McKinsey’s sample, 92% of companies plan to increase their AI investment over the next three years. However, that sample will not in any way be representative of most businesses. 

 

Figure 7. AI deployment

 

Source: McKinsey 2025 State of AI report; 42% of respondents work for organizations with annual revenue over $500 million.

 

ChatGPT was launched in November 2022 by OpenAI. When GPT-4 was released in March 2023, AI went from being unreliable and error-prone to being able to synthesise, summarise and interpret data. In August 2025 GPT-5 was released, which again improved performance but not by as much as the previous upgrades, so progress in AI models might be slowing down. The latest models still require supervision and checking of results. 

 

Conclusion

 

There are many possible futures that could unfold over the next few decades as technologies like AI, automation and robotics develop. However, the key technology is intelligent machines operating in a connected but parallel digital world with varying degrees of autonomy. AI agents will be trained to use data in specific but limited ways, interacting with each other and working with humans. The tools, techniques and data sets needed for machine learning are becoming more accessible for experiment and model building, and alongside cloud-based large language models like Gemini and ChatGPT, new AI systems like small language models and agentic AI are now appearing. 

 

So far, in many cases these technologies are not a substitute for human labour. Generative design software does not replace architects or engineers, automated plan reading does not replace estimators, and optimisation of logistics or maintenance by AI does not replace mechanics. Nevertheless, there is an immediate and important need for politicians and policy-makers to increase the urgency and attention given to the effects of AI on employment. Governments have to integrate AI literacy into school curriculums, provide learning subsidies for retraining, and ensure access to technology. 

 

The BLS employment projections show employment declines concentrated in occupations where AI is more likely to automate, rather than augment, human labour. The industries most affected are mining, retail, manufacturing and government employment. For construction, between 2024 and 2034 in the US, the projection is for an increase of 4.4% in employment, and for architects and engineers an increase of 6.8%. How representative that is for other countries is impossible to know, but AI use in the US is probably more advanced than in most places.

 

Current employment data from the US shows that employment is steady or increasing for older workers with skills and experience, even in jobs that have high exposure to AI, although for younger workers with less experience there has been an increase in unemployment. At present, AI is affecting entry-level jobs but there are few wider employment effects, and the limited evidence suggests AI complements human skills and augments the productivity of workers with tacit knowledge not available to an AI. 

 

The picture is mixed. Surveys of companies, like the ones from the World Economic Forum, the RBA and McKinsey, report strong interest in AI and a high level of investment planned for the next few years. The share of job postings requiring AI skills is small but increasing. At the same time, employment in AI-exposed jobs in the US is rising, not falling, with little or no difference in current unemployment levels between more exposed and less exposed workers. However, research shows unemployment among 20 to 30 year old tech workers has risen. 

 

There are some other signs of AI effects in the US, with BLS data showing employment growth in marketing, graphic design, office administration, and telephone call centres in 2025 below trend, and reduced demand for workers attributed to AI-related efficiency gains. In Australia there are similar reports, like the use of chatbots by Origin Energy and insurer Suncorp, and banks cutting jobs (announced this week were 3,500 by ANZ and 400 by NAB). 

 

None of this data is conclusive. Survey results are primarily from large firms: micro and small firms are missing, and surveys do not accurately capture most medium-sized ones. Employment and unemployment data is a lagging indicator that is variable and often revised over the following months, does not include many casual workers, and misses all informal workers completely. Many companies will retrain or relocate workers displaced by AI. The online jobs databases researchers use to estimate AI employment effects are a subset of the overall labour market, and they can only be partially representative of current conditions at best.

 

On present trends and performance, the more extreme AI scenarios are not plausible: AI superintelligence delivering annual economic growth of 20%, a breakthrough bonanza in problem solving, research and innovation, a jobs apocalypse, or an extinction event. Whether that means AI is a ‘normal’ general-purpose technology that will take a few decades to become widely used across industries and the economy is not obvious. According to OpenAI, in mid-2025 ChatGPT had about 800 million weekly active users, 122-130 million daily active users, and 10 million paying users, including 92% of US Fortune 500 companies (N.B. these numbers are from a query on the OpenAI Research website). 

 

Another indicator is downloads of AI models. ChatGPT is averaging 45 million a month, and according to Wikipedia, ‘by January 2023, ChatGPT had become the fastest-growing consumer software application in history, gaining over 100 million users in two months. As of May 2025, ChatGPT's website is among the 5 most visited websites.’ That level of uptake is a lot faster than the decades taken for previous technologies like electricity, the internal combustion engine or the internet to become widely used. This reinforces survey findings that many people use AI, including at work, but AI adoption by companies remains low, especially for small and medium size ones, and the great majority of companies have not incorporated AI into their workflows.

 

There are two key points that emerge from what is, at present, an unclear picture of the next decade. The first is that AI automates tasks, not jobs, so jobs with structured workflows doing routine and repetitive tasks will be quickly and heavily affected. Examples are administration and data compilation, document processing, customer support, data management, note taking and drafting reports. Employers will do this because it is cost effective and relatively straightforward to train an AI agent for a specific task if the data is available. 

 

The second is the value of tacit knowledge and experience. One example is trade skills and tasks, where some can be automated but some cannot, because of the physical demands of the work. Construction trades will be among the occupations least affected by AI. Another more pertinent example is in health, where AI-assisted diagnostics require oversight by a knowledgeable human. For skilled workers like architects and engineers, using AI requires a high level of knowledge, gained through learning by doing, in the person responsible for supervising and checking the AI output, and AI could increase demand for these workers. The assumption is that some insight into how the AI works is required.

 

Ethan Mollick’s 2024 book Co-Intelligence outlined how humans can work with an AI chatbot as a co-worker, correcting its errors, checking its work, co-developing ideas, and guiding it in the right direction. This is a widely held view of the way AI will be used. However, in September 2025 he wrote: ‘I have come to believe that co-intelligence is still important but that the nature of AI is starting to point in a different direction. We're moving from partners to audience, from collaboration to conjuring.’ Mollick suggests the newest, most powerful AI models like GPT-5 Pro have ‘impressive output, opaque process’ and ‘for an increasing range of complex tasks, you get an amazing and sophisticated output in response to a vague request, but you have no part in the process… Magic gets done.’ 

 

In the twentieth century, the electrification of workplaces took several decades, well into the 1930s, as organisations restructured around the new technology, relocating and redesigning factories, creating new jobs and developing new products. Now, a hundred years later, AI is having the same effects, but it will not take decades for the restructuring of organisations and the jobs they provide. While the future is uncertain, within a decade AI will probably have become as ubiquitous as electricity and the internet, something we use all the time without thinking about where it comes from or how it works. 

 

                                                            *

 

[1] Eckhardt and Goldschlag, and Brynjolfsson et al., use a metric based on queries to the Occupational Information Network (O*NET), an online database with hundreds of job definitions, using ChatGPT, developed by Felten, E., M. Raj and R. Seamans in Occupational, industry, and geographic exposure to artificial intelligence, Strategic Management Journal. It was also used by Eloundou, T., S. Manning, P. Mishkin, and D. Rock. 2023. GPTs are GPTs: Labor market impact potential of LLMs, arXiv. 

 

[2] The BLS 2025 Occupational Outlook Handbook includes information on about 600 detailed occupations in over 300 occupational profiles, covering about 4 out of 5 jobs in the US economy. Each profile features 2024–34 projections, along with assessments of the job outlook, work activities, wages, education and training requirements.



Subscribe on Substack https://gerarddevalence.substack.com/ 

 

Saturday, 6 September 2025

Projects, Procurement, and Complexity

 Issues and options for Australian construction 



 

There are many issues that affect construction productivity. Some are long-term, such as innovation, R&D, and education and training systems. Others are structural, like the number of micro and small firms, or institutional, like state-based occupational licensing and building codes. However, for the Australian industry by far the most important factor in low productivity growth is the lack of business investment in intellectual and physical capital: the amount of machinery, equipment, buildings, structures, software and R&D, and the skills of the workforce. 

 

The construction industry has been the subject of a number of recent reports from both government and industry, the latest being the Queensland Productivity Commission’s Opportunities to Improve Productivity of the Construction Industry, which followed the NSW Productivity and Equality Commission report Housing Supply Challenges and Policy Options in August 2024 and the Productivity Commission report Housing Construction Productivity: Can We Fix It? in February 2025. This year from industry has come the Committee for Economic Development of Australia’s Size Matters: Why Construction Productivity Is So Weak and the Australian Industry Group’s Australian Home Building in Crisis.

 

These reports have raised many issues across a wide range. Some are well known and there is a broad consensus on both their importance and the direction of reform, such as training and skills, occupational licensing, and workplace health and safety. Others, like collaborative contracting and increasing innovation and R&D, are more aspirational. For better or worse, the decision has been made that updates and revisions to the National Construction Code (NCC) will be delayed and less frequent, and the code will be reviewed to make compliance easier. By including issues around government procurement and contracting, the Queensland Productivity Commission’s Interim Report addressed some important productivity determinants that were not in the other recent reports, which led to this post. 

 

The issues discussed in this post are in the broad categories of projects, procurement, and complexity. The post first looks at project estimates and reference class forecasting, then argues for separating design and construction. On procurement the topics covered are project sizing and access, industry capacity and BIM mandates. The last two topics are project complexity and collaborative contracting, and using target cost contracts for major projects. 

 

Project Estimates and Reference Class Forecasting

 

A significant reason for poor decisions on projects is unwarranted optimism about outcomes and the time needed to complete tasks. Planners often underestimate a project’s time, costs, and risks due to size, gestation and time taken to deliver, and overestimate the benefits, particularly for major projects. In some cases there is strategic misrepresentation of costs and benefits, where project promoters produce biased appraisals at the approvals stage. After a project has started there are the risks of escalated commitment and lock-in, scope changes, and conflicting interests.

 

Project estimates can be improved by using the performance of previous projects to inform those decisions. Clients collecting and using data from previous projects in the evaluation and definition stages of new projects makes for better decisions. Bent Flyvbjerg proposed a system called Reference Class Forecasting that has three steps:

1. Identification of a relevant reference class of past, similar projects;

2. Establishing a probability distribution for the reference class;

3. Comparing the specific project with the reference class distribution [1].

 

Reference Class Forecasting allows project time and cost estimates to be compared and evaluated against previous similar project outcomes and performance. The data on comparable completed projects provides a range of probable outcomes for a proposed project, with realistic and more accurate time and cost estimates for major projects.
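
A minimal sketch of the three steps, assuming the reference class is summarised as cost overrun ratios (actual cost divided by estimate) from comparable completed projects; the figures are illustrative only.

```python
import numpy as np

# Step 1: a reference class of overrun ratios from past, similar projects.
overruns = np.array([1.00, 1.05, 1.10, 1.20, 1.25, 1.40, 1.50, 1.80, 2.00, 2.60])

# Steps 2 and 3: build the distribution and read budgets off its percentiles,
# comparing the planner's own estimate with the reference class outcomes.
base_estimate = 100.0  # $m, the planner's estimate for the new project

for p in (50, 80, 90):
    uplift = np.percentile(overruns, p)
    print(f"P{p} budget: ${base_estimate * uplift:.0f}m")
```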

 

Another example is Independent Project Analysis (IPA), established by Ed Merrow in 1987 for industries like oil and gas, petroleum, minerals and metals, chemicals, power, LNG and pipelines. Depending on the project, between 2,000 and 5,000 data points are collected over the initiation, development and delivery stages. From the IPA database companies can compare their project with other, similar projects across a wide range of performance indicators. Merrow argues defining and planning a major project should cost 5% of the total, and the cost of not spending that money is much more. Merrow’s projects are mostly private sector resource developments like oil and gas projects, and he notes they have different dynamics to public sector projects [2]. 

 

Merrow argues that the owner’s job is to specify the project and the contractor’s job is to deliver the project as specified, on time and on budget. In his view contractual relationships are more tactics than strategy, and cannot address any fundamental weaknesses in the client’s management of the project. While risk can be managed by contracts, it cannot magically be made to disappear with contracts. 

 

Clients are responsible for project shaping and definition, what Merrow calls Front End Loading, which is a necessary prerequisite for creating value. There are three stages of Front End Loading: the first evaluates the business case, the second is scope selection and development, and the third is detailed design. His argument is that there need to be gates between these stages that prevent less viable projects from reaching authorisation.

 

Separating Design and Construction

 

Merrow also argues the best form of project delivery is what he calls ‘mixed’: hiring engineering design contractors on a reimbursable contract and construction contractors on a separate fixed price contract. The evidence from the IPA database is that this is the most effective form of project organization, and is basically traditional construction procurement where consultants are appointed to do the design and a competitive tender is held for one or more contractors to execute the works on site against a complete design.

 

Unbundling design and construction for major projects has a number of advantages. Breaking a project into smaller, sequential contracts spreads the cost out over time, and does not incur interest costs on finance for design work. It makes quality control easier and more effective by focusing it on each stage, an important risk management tool. Completion of design and documentation before tendering significantly reduces contractor risk and therefore total project cost. 

 

Design and construction of major projects should be contracted separately to spread the cost over time and reduce project costs and risks. As far as possible, design and documentation should be complete or nearly complete before tendering. The success or failure of the great majority of projects is determined during definition, planning and development.  

 

Project Sizing and Access 

 

Competition can be limited for major construction projects, for several reasons: procurement costs can be excessive; high technical complexity is sometimes an important factor; and for contractors outside the first tier access to finance for large projects can be difficult. Projects can benefit from economies of scale and scope, but large contracts restrict competition if potential bidders are constrained by technical skills and other resources. 

 

Therefore, dividing a large project into a number of smaller contracts is an important policy decision. Having the design complete before tendering facilitates the division of a large project into sub-projects, for example a road or highway project can be done as stages that link up on completion. This creates opportunities for local contractors, particularly in regional areas. Increased competition for work contains costs as well. 

 

Where possible, a major project should be broken into sub-projects to reduce barriers to entry for tenderers, create opportunities for local contractors and suppliers, and increase competition. This can also reduce project costs by removing a layer of management on projects where a large contractor wins the work then subcontracts it out to smaller local contractors, but charges a project management fee. 

 

Industry Capacity

 

There are significant capacity constraints in construction, as the experience of cost increases and schedule slippage with major projects in Australia shows. Industry capacity is the limit on production, a theoretical maximum of what can be produced in a single period. In some cases this is straightforward, based on the installed capacity of machinery, plant and equipment, adjusted for the utilisation rate and maintenance requirements, producing a set amount day after day, week after week. Construction is not like this: it is geographically dispersed and brings together many suppliers at many sites. Shipbuilding, by contrast, brings together many suppliers at a few sites, and automobile manufacturing has a small number of specialist suppliers, often co-located. 

 

Separating design and construction allows sequencing of major projects. As the design work is completed a project can be added to a pipeline of projects and released for tender when conditions are appropriate, or when other projects are approaching completion. Suppliers and contractors can use the pipeline of projects to build capacity in the knowledge that there will be ongoing opportunities for their staff and equipment, reducing the set-up costs incurred by re-establishing project teams. 

 

Construction is much more labour intensive than industries it is typically compared to such as manufacturing or mining. This makes the number of people employed one of the key constraints on construction industry capacity. As well as a pipeline of work, developing industry capacity is a long-term strategy based on providing training and skills, improving management practices, and support for SMEs. 

 

Construction industry capacity and productivity will be improved by increased investment in the capital stock. Traditional policy instruments to increase investment are tax incentives like instant write-offs, accelerated depreciation, and financial incentives like production subsidies, grants and loan guarantees. Business investment can also be promoted by development of industry technology strategies, revising public procurement methods, and advanced market commitments for products like prefabricated buildings and services like digital twins. Investment in physical and intellectual assets is essential for building industry capacity and upgrading technology. 

 

BIM Mandates

 

BIM mandates are important because the use of BIM unlocks the potential of digital construction and affects all suppliers of materials, products and services. The ISO 19650 standards for BIM and digital twins provide a framework for creating, managing and sharing data on built assets, establishing consensus on what is to be done and how. There is evidence from surveys that BIM increases efficiency, reduces rework, and improves productivity and workload capacity [4]. In Australia, the Queensland Department of State Development and Infrastructure has had a BIM mandate for public projects over $50 million since 2019. 

 

The experience of overseas jurisdictions with BIM mandates is that BIM use increases over time. The UK is a good example: there has been a significant increase in the use of BIM in the UK since 2011, when a BIM mandate for public construction was introduced. In 2018 a BIM Framework based on ISO 19650 provided a roadmap for firms and clients, and the government developed clauses in construction contracts covering contentious issues such as intellectual property and data ownership. The UK is now a leading user of BIM, along with other early movers with BIM mandates like Singapore and Norway. 

 

In the UK BIM maturity levels are defined as: 

· No BIM: Information generated manually by hand;

· Level 0: 2D Computer-Aided Design (CAD) and no or minimal collaboration;

· Level 1: 2D CAD for documentation and 3D CAD for specific elements;

· Level 2: Collaborative 3D CAD models with a Common Data Environment (required for UK public projects);

· Level 3: Shared 3D cloud-based model of the project, with the team working collaboratively in real time.

 

Industry has a collective action problem because the cost of adopting a new technology is significant and skills are typically in short supply. Firms will invest in BIM if they believe they will profit by it, but legitimately fear that future technical progress could make today’s investments unprofitable by making today’s technologies obsolete. Paradoxically, when innovation and technological progress are rapid, uncertainty can hold back investment by firms because there may be a better, cheaper technology available tomorrow. Why invest today if there will be a competing technology at half the price in a few years’ time? 

 

Therefore, BIM mandates from government and private sector clients are needed to promote BIM use. For small and medium size firms the initial software and training costs are a barrier to adopting BIM. There should be grants and subsidies to provide financial support to get SMEs to level 2 BIM, with a limit of 50% of these costs. 

 

Complexity and Collaborative Contracting 

 

As argued above, contractual relationships are more tactics than strategy, and cannot address fundamental weaknesses in the client’s management of the project; risk can be managed by contracts, but it cannot magically be made to disappear. An important point on final costs is that a fixed price contract for a project is a floor, not a ceiling. Contractors will allow for the extra risk a poorly documented tender involves, and have a range of contractual provisions available to make claims and cover cost increases during delivery. 

 

Simple or standardised projects are low risk with minimal technical requirements. These commodity-type projects have well-known structural features and components, their design and location do not present any particular challenges, and the construction methods and project management requirements are not exceptional in any way. Examples are car parks and some industrial and commercial buildings. These projects can be accurately estimated, precisely documented and have little uncertainty about what is to be produced and how it is to be done, and should be awarded through competitive tendering on a fixed-price contract [3].

 

Figure 1. Project characteristics and contracts


 

 

Complicated and complex projects are challenging, each in its own specific way, because of the many characteristics that can cause complexity, such as design, materials, technology, location or site issues, logistics, non-traditional project organisation, or significant coordination and integration issues. Complicated projects require significant development and will benefit from early contractor involvement or have to be well documented before tendering. 

 

Complex projects require more collaborative implementation with early involvement by designers, contractors and suppliers. These have significant uncertainty about their final form, and should be awarded through negotiation with some form of cost-plus or incentive contract. It may also be advantageous to look for innovative ideas or design options, so for these projects an incremental approach allows contractors and suppliers the opportunity for input during the development of the design.

 

Traditional forms of project organisation and procurement are designed for delivering well documented commodity projects and making repetitive decisions in a stable, predictable environment. By contrast, complicated and complex projects are not fully documented and have significant uncertainty about their final form. What will be an appropriate procurement strategy for a simple project will be inappropriate for more complicated or complex projects, as the schematic summary below shows.
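
The mapping between project characteristics and contract form described above can be summarised schematically; this is a summary of the argument in this post, not a prescription from any of the cited reports.

```python
# A schematic summary of the project-type to procurement mapping above.
CONTRACT_BY_PROJECT_TYPE = {
    "simple":      "competitive tender on a fixed-price contract",
    "complicated": "early contractor involvement, or full documentation before tender",
    "complex":     "negotiation on a cost-plus or incentive (e.g. target cost) contract",
}

def procurement_strategy(project_type: str) -> str:
    """Return the procurement approach suggested for a project type."""
    return CONTRACT_BY_PROJECT_TYPE[project_type]

print(procurement_strategy("complex"))
```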

 

Target Cost Contracts

 

A target cost contract (TCC) is an incentive-based procurement strategy that rewards a contractor for savings, using an agreement on cost with an incentive fee. The three components of a TCC are a reimbursable cost for the design with an agreed margin, a lump sum amount as an incentive for the contractor to reduce construction cost below the agreed estimate, and a compensation mechanism for major design changes (not design evolution).

 

Under a TCC, the actual cost of completing the project is compared to an agreed target cost. If the actual cost exceeds the target cost, some of the cost overrun will be borne by the contractor, known as the ‘painshare’, and the rest by the client following an agreed formula. Conversely, if the actual cost is lower than the target cost, then the contractor will share the savings with the client, known as the ‘gainshare’.  This painshare/gainshare mechanism is intended to align the interests of contractors and clients, and is the distinguishing feature of these contracts.
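
A minimal sketch of the painshare/gainshare arithmetic, assuming a simple 50/50 sharing formula; real contracts use negotiated ratios, often with caps and tiered shares.

```python
def tcc_settlement(target_cost: float, actual_cost: float,
                   share_ratio: float = 0.5) -> float:
    """Contractor's share of the saving (+, gainshare) or overrun (-, painshare)."""
    return share_ratio * (target_cost - actual_cost)

# Overrun: the contractor bears half of a $10m excess (painshare).
print(tcc_settlement(target_cost=100.0, actual_cost=110.0))  # -5.0
# Saving: the contractor keeps half of a $10m saving (gainshare).
print(tcc_settlement(target_cost=100.0, actual_cost=90.0))   # 5.0
```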

 

Claims under a TCC can be difficult to manage if there are changes in the target cost. These can be cost reductions due to contractor input (through design revisions for example) and cost increases due to client design changes. The challenge is to preserve the incentives while resolving disagreements about the extent and effect of target cost changes. 

 

While incentives might be an effective way to reduce cost, improve project delivery and increase productivity on major projects, the actual operation of the painshare/gainshare mechanism is not straightforward. The sharing formula can vary from simple to complex systems of benefit and risk sharing, and can involve more than one supplier. 

 

The agreement and the painshare/gainshare mechanism are between the client and the contractor, and typically do not include designers, subcontractors and other suppliers. This is a weakness in these contracts, as the contractor can attempt to shift risks down the supply chain to maximise their profit. 

 

Rather than the client sharing the gain from improved performance, this share could be used to provide an incentive through the supply chain, and thus allow subcontractors and suppliers to benefit as an incentive to increase their productivity. 

 

Target cost contracts can be used to provide incentives to reduce cost, improve project delivery and increase productivity on major projects. However, significant investment in planning, estimating, and preparing detailed designs is required. The potential of BIM and digital twins to improve project design documents is a factor here: with the digitisation of design there are more opportunities for target costing and performance-based contracts. 

 

Conclusion

 

Delivery of construction projects is a vexed topic, particularly for large and/or complex projects. It brings together a range of economic, social and political issues for which there are no definitive answers, and thus poses challenges in decision-making and governance not found in procurement of many other projects and services. These are further compounded by the long time horizon of built assets and associated return on investment or value for money aspects of many large projects.

 

It is well known that the future is uncertain, where uncertainty is an unmeasurable or truly unknown outcome, often unique. Major construction projects are typically selected under conditions of uncertainty, not risk (which is identifiable and measurable) for three main reasons: costs and benefits are many years into the future; the projects are often large enough to change their economic environment, hence generate unintended consequences; and stakeholder action creates a dynamic context with the possibility of escalation of commitment driven by post hoc justification of earlier decisions.

 

A great deal is already known about the requirements for successful projects, based on the performance of projects over the last two decades and the many studies and reports that have been done on those projects. Better use of data from previous projects in the evaluation and definition stages of new projects and a more empirical approach by clients in collecting and using data is necessary if better decisions are to be made. This is what Reference Class Forecasting does. 

 

The procurement strategies and implementation processes used by clients can be improved.  Contracts manage risk, but ultimately clients are responsible for their projects, and specification, design and documentation should be completed, as far as possible, before going to tender or before work begins. Sequencing of major projects’ design allows input from contractors and suppliers and creates a pipeline of work. Major projects should be broken into sub-projects where possible, to reduce barriers to entry for tenderers, create opportunities for local contractors and suppliers, and increase competition. 

 

BIM mandates are important because the use of BIM unlocks the potential of digital construction. The ISO 19650 standards for BIM and digital twins provide a framework for creating, managing and sharing data, and the experience of overseas jurisdictions with BIM mandates is that BIM use increases over time. Industry has a collective action problem because the cost of adopting a new technology is significant and skills are typically in short supply. Therefore, BIM mandates from government and private sector clients are needed to promote BIM use, which will also increase industry capacity. 

 

While there are many straightforward projects being built, using conventional materials and well-known techniques, there are also many larger, more complex projects. Simple and standardised commodity projects are well documented with little uncertainty about what is to be produced and done, and should be awarded through competitive tendering on a fixed-price contract. 

 

By contrast, complicated and complex projects are not fully documented and will have significant uncertainty about their final form. Complicated projects are often better done on a cost-plus basis. Incentives are an effective way to reduce cost and increase productivity, and target cost contracts should be considered for complex projects that require more collaborative implementation and early involvement by designers, contractors and suppliers. 

 

 

 

[1] See Flyvbjerg, B., Bruzelius, N. and Rothengatter, W. 2003. Megaprojects and Risk: An Anatomy of Ambition, Cambridge: Cambridge University Press. A more recent and less academic book is Bent Flyvbjerg and Dan Gardner, 2023. How Big Things Get Done: The Surprising Factors Behind Every Successful Project, From Home Renovations to Space Exploration. New York: Currency. From that book, in Flyvbjerg’s database of 16,000 projects 91.5% go over time and budget. The risk of a project going disastrously wrong (not 10%, but 100% or 400% or more over budget) is surprisingly high.

 

[2] Merrow, E.W. 2011. Industrial Megaprojects: Concepts, Strategies and Practices for Success, Hoboken, NJ: Wiley. Second edn. 2024.

 

[3] Bajari, P. and Tadelis, S. 2006. Incentives and award procedures: Competitive tendering versus negotiations in procurement, in Dimitri, N., Piga, G. and Spagnolo, G. (Eds.) Handbook of Procurement, Cambridge UK: Cambridge University Press, 121-139.

 

[4] https://damassets.autodesk.net/content/dam/autodesk/www/industry/aec/bim/aec-bim-study-smart-market-synopsis-ebook-en.pdf