
Saturday, 20 September 2025

Recent Research on AI Effects on Employment and Work

 AI scenarios range from apocalypse to slow and gradual adoption

There is a lot of discussion about the effects of artificial intelligence (AI) on employment. Across that research there is a wide range of views on the timing and extent of those effects, from a looming jobs apocalypse with high unemployment to slow and gradual adoption with low unemployment. The picture is clouded by tech firms' self-serving promotion of AI solutions and the attention given to the possibility of an out-of-control, unaligned AI killing everyone, or turning them into paperclips.

 

A few examples will suffice. According to an August 2025 Goldman Sachs Research report, How Will AI Affect the Global Workforce?, AI is unlikely to lead to a large increase in unemployment 'because technological change tends to boost demand for workers in new occupations', creates new jobs, and increases output and demand. They estimate unemployment will increase by half a percentage point during the AI transition period as displaced workers seek new positions, and that if current AI use cases were expanded across the economy, 2.5% of US employment would be at risk of related job loss. So far, they believe the 'vast majority of companies have not incorporated AI into regular workflows and the low adoption rate is limiting labour market effects.'

 

Massachusetts Institute of Technology professor Daron Acemoglu, who has written extensively on technology and work, believes only 5% of all jobs will be taken over, or at least heavily aided, by AI over the next decade. However, the World Economic Forum’s Future of Jobs Report 2025 estimates that, by 2030, new job creation and job displacement will be a ‘combined total of 22% of today’s total (formal) jobs.’ Their jobs outlook is based on the macrotrends of technology, economic uncertainty, demographics, and the energy transition, of which ‘AI and information processing technologies are expected to have the biggest impact – with 86% of respondents expecting these technologies to transform their business by 2030.’

 

On the threat of extinction, in April 2025 the AI Futures Project, a credible non-profit research group, released their AI 2027 scenario, in which AI systems 'become good enough to dramatically accelerate their research' and start building their own superintelligent AI systems. Without human understanding of what is happening, the system develops misaligned goals: 'Previous AIs would lie to humans, but they weren't systematically plotting to gain power over the humans.' The superintelligent AI manipulates humans and rapidly industrialises by manufacturing robots: 'Once a sufficient number of robots have been built, the AI releases a bioweapon, killing all humans. Then, it continues the industrialization, and launches Von Neumann probes to colonize space.'

 

In September the US think tank RAND published a research paper on the potential proliferation of robotic embodiments of superintelligent AI, Averting a Robot Catastrophe, arguing for 'the urgent need to proactively address this issue now rather than waiting until the technologies are fully deployed to ensure responsible governance and risk management.' Another 2025 RAND paper, The Extinction Risk From AI, concluded: 'Although we could not show in any of our scenarios that AI could definitely create an extinction threat to humanity, we could not rule out the possibility… resources dedicated to extinction risk mitigation are most useful if they also contribute to mitigating global catastrophic risks and improving AI safety in general.'

 

While AI is developing rapidly, and there are examples of AI deception from Anthropic and OpenAI, a cautionary tale is US-based Builder.ai. The company claimed its product, an AI bot called Natasha, could help customers build software six times faster and 70% cheaper than humans. In 2023 it was ranked third, behind OpenAI and Google's DeepMind, in tech industry magazine Fast Company's innovative AI companies list, and was valued at US$1.5 billion. Builder.ai collapsed in May: 'Alongside old-fashioned start-up dishonesty with dramatically overstating its revenue, allegations arose that the work of its Natasha neural network was actually the work of 700 human programmers in India.' This is reminiscent of Elon Musk's Optimus robots being remote controlled in a 2024 demonstration.

 

Although it is still too early to say what the effect of AI on employment will be, there has been some useful recent research on AI, jobs, and work, particularly in the US. This post surveys some of the research released over the last few months.

 

Australian Research

 

The Productivity Commission's August 2025 Harnessing Data and Digital Technology report said: 'The economic potential of AI is clear, and we are still in the early stages of its development and adoption… multifactor productivity gains above 2.3% are likely over the next decade, though there is considerable uncertainty. This would translate into about 4.3% labour productivity growth over the same period.' The Commission argued that data underpins growth and value in the digital economy, and that a 'mature data-sharing regime could add up to $10 billion to Australia's annual economic output. Experience shows that we need a flexible approach to facilitating data access across the economy.' In another report for the Economic Reform Roundtable, on skills and employment, the Commission recommended improving education and training systems.

 

Grattan Institute researchers Trent Wiltshire and Hui-Ling Chan's September 2025 article AI is Coming: Prepare for the Worst argues that 'in the event of significant disruption, the federal government may need to consider how Australia's safety net and retraining systems' would cope, with better preparation and scenario planning for the possibility that AI causes mass unemployment in Australia. They suggest changes to income support should be considered, such as lifetime learning accounts, unemployment insurance (a time-limited payment linked to a person's previous income, widely used in Europe), and easier access to superannuation when unemployed. They also point to Denmark's 'flexicurity' system, where it is easy to retrench workers but there is a safety net that includes up to two years of unemployment insurance, plus education, retraining, and support programs. About 25% of Denmark's private industry workers change jobs each year, and 67% of workers are union members.

 

A June 2025 PwC AI jobs barometer ‘looked at close to a billion job adverts from 24 countries and 80 sectors to understand how the demand for workers is shifting in relation to AI adoption. The global study found that AI is making workers more valuable, not less. Industries most able to use AI have seen productivity growth nearly quadruple since 2022 and are seeing three times higher growth in revenue generated per employee. Jobs numbers and wages are also growing in virtually every AI-exposed occupation, with AI-skilled workers commanding a 56% wage premium, on average.’

 

The PwC survey found the Australian effect of AI was a surge in demand for AI skills in the overall jobs market, nearly doubling from 12,000 postings in 2020 to 23,000 in 2021. Since then there have been around 23,000 postings a year, although this was only 1.8% of total job postings in 2024. As Figure 1 below from the report shows, Finance and Insurance was the leading industry, but there has been rapid growth in Construction industry AI job postings.

 

Figure 1. AI job postings by industry

 

Source: PwC

 

 

RBA Survey of Australian Businesses

 

Reserve Bank Governor Michele Bullock gave a speech on 3 September that included results of an RBA survey of businesses about AI, robotics, and technology adoption. Although not about employment, the speech included Figure 2 below, with the striking finding that 80% of firms expect to be using AI in the next three years, up from 25% today. This is probably due to the survey population being skewed toward larger firms.

 

Figure 2. Australian businesses’ technology adoption

Source: RBA

 

In her speech she said: 'Firms mainly expect these tools to augment labour, automating repetitive tasks and redesigning the composition of roles. Firms thought they may initially see an increase in their headcount as they design and embed new technologies, though this may be followed by a small decline as they mature in their adoption of new technologies. Lower skilled roles may decline, while demand for higher skilled roles is expected to grow, continuing (and perhaps even fast-tracking) a decades-long trend away from routine manual work. While AI may eventually automate even some higher skilled tasks, firms tell us that it is too early to fully understand what this means for their workforce beyond the next few years. Some roles may change and the demand for different or new skills may in turn increase.'

 

US Research

 

An August 2025 paper by Eckhardt and Goldschlag called AI and Jobs: The Final Word (Until the Next One), found no detectable effect of AI on recent US employment trends using five measures of job exposure to AI. For three of their five measures there was no detectable difference in unemployment between the more exposed and the less exposed workers, and only a small difference of 0.2 or 0.3 of a percentage point for the other two. They say: 'One pattern is clear in the data: highly exposed workers are doing better in the labor market than less exposed workers. Workers more exposed to AI are better paid, more likely to have Bachelor's or graduate degrees, and less likely to be unemployed than less exposed workers.'
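
To make the comparison concrete, here is a minimal sketch of the kind of quintile analysis they describe, with entirely synthetic numbers (my illustration, not the authors' data or code): score each worker's occupation for AI exposure, split workers into exposure quintiles, and compare unemployment rates across them.

```python
import numpy as np
import pandas as pd

# Synthetic illustration of an exposure-quintile comparison, not the
# authors' data or code. Exposure scores and employment status are drawn
# independently, mimicking the 'no detectable effect' finding.
rng = np.random.default_rng(42)
workers = pd.DataFrame({
    "exposure": rng.random(10_000),             # stand-in AI-exposure score
    "unemployed": rng.random(10_000) < 0.04,    # ~4% baseline unemployment
})

# Quintile 1 = least exposed, 5 = most exposed
workers["quintile"] = pd.qcut(workers["exposure"], 5, labels=[1, 2, 3, 4, 5])

# If AI were displacing exposed workers, unemployment should rise with the
# quintile; a flat profile across quintiles is what the paper reports.
print(workers.groupby("quintile", observed=True)["unemployed"].mean().round(3))
```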

 

Their conclusion was that AI isn't taking jobs yet, or the effect is very small. Figure 3 shows their unemployment rates for workers with varying degrees of predicted AI exposure (1 is the least exposed, 5 is the most exposed), with no correlation between AI exposure and unemployment.

 

Figure 3. Unemployment rate by AI exposure quintile

 

Source: Eckhardt and Goldschlag 2025.

 

In their Appendix they used US Census Bureau data, which in August had 9% of surveyed firms using AI, up from 5% a year and a half earlier, although 27% of firms in the information sector said they were using AI. The Appendix included Figure 4 below, which shows Construction having one of the lowest levels of AI usage.

 

Figure 4. Percent of businesses using AI

 

Source: Eckhardt and Goldschlag 2025.

 

 

The next word on AI and jobs came in a paper from the Stanford Digital Economy Lab by Brynjolfsson, Chandar, and Chen, Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence. This paper uses two measures of AI exposure, then compares recent employment trends for more and less exposed workers. Their conclusion is radically different to that of Eckhardt and Goldschlag. The abstract explains:

 

‘We find that since the widespread adoption of generative AI, early-career workers (ages 22-25) in the most AI-exposed occupations have experienced a 13 percent relative decline in employment even after controlling for firm-level shocks. In contrast, employment for workers in less exposed fields and more experienced workers in the same occupations has remained stable or continued to grow. We also find that adjustments occur primarily through employment rather than compensation. Furthermore, employment declines are concentrated in occupations where AI is more likely to automate, rather than augment, human labor. Our results are robust to alternative explanations, such as excluding technology-related firms and excluding occupations amenable to remote work. These six facts provide early, large-scale evidence consistent with the hypothesis that the AI revolution is beginning to have a significant and disproportionate impact on entry-level workers in the American labor market.’

 

Brynjolfsson et al. found 'substantial declines in employment for early-career workers (ages 22-25) in occupations most exposed to AI', such as software developers and customer service representatives. In jobs less exposed to AI, employment growth for young workers was comparable to that for older workers. Declining employment in AI-exposed jobs is driving 'tepid overall employment growth for 22- to 25-year-olds as employment for older workers continues to grow.' It should be noted that Eckhardt and Goldschlag's use of three other metrics gives a broader perspective. Figure 5 shows growth in employment between October 2022 and July 2025 by age and GPT-4-based AI exposure, where quintile 1 is the least exposed and 5 the most exposed [1].

 

Figure 5. Growth in employment by age and AI exposure group

 

Source: Brynjolfsson et al. 2025
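
The comparison behind Figure 5 can be sketched as a difference-in-differences style regression, shown below with synthetic data. This is my simplification: the paper's actual specification uses payroll microdata and controls for firm-level shocks, but the interaction term plays the same role, capturing the relative employment change for young workers in AI-exposed occupations.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic illustration, not the paper's data or specification.
rng = np.random.default_rng(0)
n = 5_000
df = pd.DataFrame({
    "young": rng.integers(0, 2, n),      # 1 = ages 22-25
    "exposed": rng.integers(0, 2, n),    # 1 = highly AI-exposed occupation
})
# Build the outcome with a -13% young x exposed effect, echoing the headline result
df["dlog_emp"] = (0.02
                  - 0.13 * df["young"] * df["exposed"]
                  + rng.normal(0, 0.05, n))

# The 'young:exposed' coefficient is the quantity of interest
model = smf.ols("dlog_emp ~ young * exposed", data=df).fit()
print(model.params.round(3))  # young:exposed should come out near -0.13
```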

 

 

Is AI a Complement or Substitute?

 

The different conclusions from this US research throw into sharp relief what is, at this point, the core issue: is AI a substitute for workers, particularly skilled workers, or a complement? In other words, is AI replacing workers in some occupations, or is it being used as a tool to enhance productivity and performance?

 

If AI is a substitute for workers, wages and employment fall; because AI is substituting for human cognition, businesses will shed expensive workers whose skill or experience premium is no longer valuable, probably older workers. On the other hand, if AI is a complement, wages and employment increase, and businesses will recruit workers with skill or experience, again probably older workers.
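
In textbook terms (my framing, not taken from the research discussed here), the distinction can be stated with a simple production function Y = F(A, L), where A is AI input and L is labour: the sign of the cross-partial derivative separates the two cases.

```latex
% Illustrative textbook condition for complement vs substitute, Y = F(A, L)
\[
\frac{\partial^2 F}{\partial A \, \partial L} > 0
\;\Rightarrow\; \text{complement: more AI raises labour's marginal product}
\]
\[
\frac{\partial^2 F}{\partial A \, \partial L} < 0
\;\Rightarrow\; \text{substitute: more AI lowers labour's marginal product}
\]
```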

 

What Figure 5 above shows is that young workers saw reductions in AI-exposed jobs, but employment growth for the other age groups was positive. In particular, employment for older workers in AI-exposed jobs increased. Eckhardt and Goldschlag also found workers more exposed to AI are doing better in the labour market than less exposed workers. This strongly suggests AI is a complement, not a substitute. AI complements human skills and augments the productivity of workers with the necessary skills and experience, so these people, with tacit knowledge not available to an AI, are the ones firms are employing.

 

What About Construction?

 

The US Bureau of Labor Statistics (BLS) 2025 Occupational Outlook Handbook covers 600 occupations [2]. Based on that, the Employment Projections program develops US labour market estimates for 10 years into the future, on the assumption that labour productivity and technological progress will be in line with historical experience, which shows 'technology impacts occupations, but that these changes tend to be gradual, not sudden. Occupations involve complex combinations of tasks, and even when technology advances rapidly, it can take time for employers and workers to figure out how to incorporate new technology.' Because technological developments over the next 10 years are 'impossible to predict with precision,' new projections are released annually. Figure 6 shows Construction employment growing by 4.4% to 2034.

 

Figure 6. US employment projections

 

Source: BLS Employment Projections, August 2025. 
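
As a back-of-envelope check (my arithmetic, not the BLS's), a cumulative 4.4% rise over the decade to 2034 implies a very modest annual growth rate:

```latex
% Converting cumulative 10-year growth to an annual rate
\[
g = 1.044^{1/10} - 1 \approx 0.0043 \approx 0.43\% \text{ per year}
\]
```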

 

In the sector-specific projections for the Infrastructure sector, by 2030 'new job roles are expected to be created for Big Data Specialists and Organizational Development Specialists... Twenty-seven percent of employees in the sector are anticipated to be able to upskill in their current roles, with an additional 17% projected to be reskilled and redeployed. Almost 70% of respondents expect reskilling and upskilling to improve talent retention and enhance competitiveness and productivity of their company, with 50% planning to increase talent mobility through training programmes.'

 

An article in the February 2025 BLS Monthly Labor Review, Incorporating AI Impacts in BLS Employment Projections: Occupational Case Studies, argued 'GenAI can support many tasks involved in architecture and engineering occupations, potentially increasing worker productivity.' The technical expertise these occupations require and existing regulatory requirements create uncertainty about the extent and employment impact of AI adoption, and underlying demand is expected to remain strong, resulting in projected US employment growth of 6.8% for architects and engineers.

 

AI Development and Diffusion

 

An April 2025 paper by Arvind Narayanan and Sayash Kapoor from Princeton University's Center for Information Technology Policy was called AI as Normal Technology: An Alternative to the Vision of AI as a Potential Superintelligence. They view 'AI as a tool that we can and should remain in control of,' and argue this does not require drastic policy interventions. They do not think viewing AI as a humanlike intelligence is 'currently accurate or useful for understanding its societal impacts.'

 

Their lengthy and sometimes digressive paper is based on the idea of a normal technology, where sudden economic impacts are implausible because ‘Innovation and diffusion happen in a feedback loop… With past general-purpose technologies such as electricity, computers, and the internet, the respective feedback loops unfolded over several decades, and we should expect the same to happen with AI.’ They dismiss catastrophic AI because it ‘relies on dubious assumptions about how technology is deployed in the real world. Long before a system would be granted access to consequential decisions, it would need to demonstrate reliable performance in less critical contexts.’

 

Narayanan and Kapoor's 'AI as normal technology is a worldview that stands in contrast to the worldview of AI as impending superintelligence.' They don't believe progress in generative AI is as fast as claimed, nor that AI diffusion will be much different to electricity or computers, because diffusion 'occurs over decades, not years.' This is very different to what they call the utopian and dystopian worldviews of AI, both based on the idea of superintelligence but with opposite consequences. Because the idea of imminent take-off superintelligence is so prevalent in the discussion about AI, as either the solution to many problems or as an extinction event, the suggestion that AI might just be the latest in a long series of powerful general purpose technologies, developing over time in a historically familiar way, is both radical and unusual.

 

There is support for this slow adoption and diffusion view from the McKinsey 2025 State of AI report, which is somewhat ironic as McKinsey is one of the biggest boosters of corporate use of AI. Published in March 2025 but based on a mid-2024 survey sample of 1,491 respondents, it found 75% of respondents using AI in at least one business function but only 1% 'described their gen AI rollouts as mature.' The survey showed a quarter of large organisations and 12% of smaller ones had an AI roadmap, 52% of large organisations but only 23% of small ones had a dedicated team to drive AI use, and only 28% and 23% respectively had effectively embedded gen AI into business processes. In McKinsey's sample, 92% of companies plan to increase their AI investment over the next three years. However, that sample will not in any way be representative of most businesses.

 

Figure 7. AI deployment

 

Source: McKinsey 2025 State of AI report; 42% of respondents work for organizations with annual revenue over $500 million.

 

ChatGPT was launched in November 2022 by OpenAI. When GPT-4 was released in March 2023, AI went from being unreliable and error prone to being able to synthesise, summarise and interpret data. In August 2025 GPT-5 was released, which again improved performance but not by as much as the previous upgrades, so progress in AI models might be slowing down. The latest models still require supervision and checking of results.

 

Conclusion

 

There are very many possible futures that could unfold over the next few decades as technologies like AI, automation and robotics develop. However, the key technology is intelligent machines operating in a connected but parallel digital world with varying degrees of autonomy. AI agents will be trained to use data in specific but limited ways, interacting with each other and working with humans. The tools, techniques and data sets needed for machine learning are becoming more accessible for experiment and model building, and alongside cloud-based large language models like Gemini and ChatGPT, new AI systems like small language models and agentic AI are now appearing.

 

So far, in many cases these technologies are not a substitute for human labour. Generative design software does not replace architects or engineers, automated plan reading does not replace estimators, and optimisation of logistics or maintenance by AI does not replace mechanics. Nevertheless, there is an immediate and important need for politicians and policy-makers to increase the urgency and attention given to the effects of AI on employment. Governments have to integrate AI literacy into school curriculums, provide learning subsidies for retraining, and ensure access to technology.

 

The BLS employment projections show employment declines concentrated in occupations where AI is more likely to automate rather than augment human labour. The industries most affected are mining, retail, manufacturing and government. For construction, between 2024 and 2034 in the US, the projection is for an increase of 4.4% in employment, and for architects and engineers an increase of 6.8%. How representative that is for other countries is impossible to know, but AI use in the US is probably more advanced than in most places.

 

Current employment data from the US shows that employment is steady or increasing for older workers with skills and experience, even in jobs that have high exposure to AI, although for younger workers with less experience there has been an increase in unemployment. At present, AI is affecting entry-level jobs but there are few wider employment effects, and the limited evidence suggests AI complements human skills and augments the productivity of workers with tacit knowledge not available to an AI. 

 

The picture is mixed. Surveys of companies, like the ones from the World Economic Forum, the RBA and McKinsey, report strong interest in AI and a high level of investment planned for the next few years. The share of job postings requiring AI skills is small but increasing. At the same time, employment in AI-exposed jobs in the US is rising, not falling, with little or no difference in current unemployment levels between more exposed and less exposed workers. However, research shows unemployment among 20 to 30 year old tech workers has risen.

 

There are some other signs of AI effects in the US, with BLS data showing employment growth in marketing, graphic design, office administration, and telephone call centres in 2025 below trend, with reduced demand for workers attributed to AI-related efficiency gains. In Australia there are similar reports, like the use of chatbots by Origin Energy, insurer Suncorp, and banks cutting jobs (announced this week were 3,500 by ANZ and 400 by NAB). 

 

None of this data is conclusive. Survey results are primarily from large firms; micro and small firms are missing, and surveys do not accurately capture most medium-sized ones. Employment and unemployment data is a lagging indicator that is variable and often revised over the following months, does not include many casual workers, and misses all informal workers completely. Many companies will retrain or relocate workers displaced by AI. The online jobs databases researchers use to estimate AI employment effects are a subset of the overall labour market, and they can only be partially representative of current conditions at best.

 

On present trends and performance, the more extreme AI scenarios are not plausible: AI superintelligence delivering annual economic growth of 20%, a breakthrough bonanza in problem solving, research and innovation, a jobs apocalypse, or an extinction event. Whether that means AI is a 'normal' general purpose technology that will take a few decades to become widely used across industries and the economy is not obvious. According to OpenAI, in mid-2025 ChatGPT had about 800 million weekly active users, 122-130 million daily active users, and 10 million paying users, including 92% of US Fortune 500 companies (N.B. these numbers are from a query on the OpenAI Research website).

 

Another indicator is downloads of AI models. ChatGPT is averaging 45 million a month and, according to Wikipedia, by 'January 2023, ChatGPT had become the fastest-growing consumer software application in history, gaining over 100 million users in two months. As of May 2025, ChatGPT's website is among the 5 most visited websites.' That level of uptake is a lot faster than the decades taken for previous technologies like electricity, the internal combustion engine or the internet to become widely used. This reinforces survey findings that many people use AI, including at work, but AI adoption by companies remains low, especially for small and medium size ones, and the great majority of companies have not incorporated AI into their workflows.

 

There are two key points that emerge from what is, at present, an unclear picture of the next decade. The first is that AI automates tasks, not jobs, so jobs with structured workflows doing routine and repetitive tasks will be quickly and heavily affected. Examples are administration and data compilation, document processing, customer support, data management, note taking and drafting reports. Employers will do this because it is cost effective and relatively straightforward to train an AI agent for a specific task if the data is available.
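
One way to picture this, a toy example of mine loosely in the spirit of the task-based exposure measures described in note [1], is to treat an occupation as a bundle of tasks and score each task for automatability: the job survives, but its task mix shifts.

```python
# Toy task-based view of AI exposure: occupations are bundles of tasks, and
# AI automates tasks rather than whole jobs. Shares and scores are invented.
tasks = {
    # task: (share of work time, automatable by current AI, 0..1)
    "data entry and compilation": (0.30, 0.9),
    "drafting routine reports":   (0.20, 0.8),
    "client meetings":            (0.25, 0.1),
    "site inspections":           (0.15, 0.0),
    "supervision and checking":   (0.10, 0.2),
}

# Occupation-level exposure = time-weighted share of automatable task content
exposure = sum(share * auto for share, auto in tasks.values())
print(f"Share of the job's task content open to automation: {exposure:.0%}")
# About 48% here: roughly half the task content shifts to AI, the job remains
```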

 

The second is the value of tacit knowledge and experience. One example is trade skills and tasks, where some can be automated but others will not be, because of the physical demands of the work. Construction trades will be among the occupations least affected by AI. Another more pertinent example is in health, where AI-assisted diagnostics require oversight by a knowledgeable human. For skilled workers like architects and engineers, using AI requires a high level of knowledge, gained through learning by doing, in the person responsible for supervising and checking the AI output, and AI could increase demand for these workers. The assumption is that some insight into how the AI works is required.

 

Ethan Mollick's 2024 book Co-Intelligence outlined how humans can work with an AI chatbot as a co-worker, correcting its errors, checking its work, co-developing ideas, and guiding it in the right direction. This is a widely held view of the way AI will be used. However, in September 2025 he wrote: 'I have come to believe that co-intelligence is still important but that the nature of AI is starting to point in a different direction. We're moving from partners to audience, from collaboration to conjuring.' Mollick suggests the newest, most powerful AI models like GPT-5 Pro have 'impressive output, opaque process' and 'for an increasing range of complex tasks, you get an amazing and sophisticated output in response to a vague request, but you have no part in the process… Magic gets done.'

 

In the twentieth century, the electrification of workplaces took several decades, well into the 1930s, as organisations restructured around the new technology, relocating and redesigning factories, creating new jobs and developing new products. Now, a hundred years later, AI is having the same effects, but it will not take decades for the restructuring of organisations and the jobs they provide. While the future is uncertain, within a decade AI will probably have become as ubiquitous as electricity and the internet, something we use all the time without thinking about where it comes from or how it works. 

 

                                                            *

 

[1] Eckhardt and Goldschlag, and Brynjolfsson et al., use a metric based on queries to the Occupational Information Network (O*NET), an online database with hundreds of job definitions, using ChatGPT. It was developed by Felten, E., M. Raj and R. Seamans in Occupational, industry, and geographic exposure to artificial intelligence, Strategic Management Journal, and was also used by Eloundou, T., S. Manning, P. Mishkin, and D. Rock. 2023. GPTs are GPTs: Labor market impact potential of LLMs, arXiv.

 

[2] The BLS 2025 Occupational Outlook Handbook includes information on about 600 detailed occupations in over 300 occupational profiles, covering about 4 out of 5 jobs in the US economy. Each profile features 2024–34 projections, along with assessments of the job outlook, work activities, wages, education and training requirements.



Subscribe on Substack https://gerarddevalence.substack.com/ 

 

Saturday, 21 June 2025

Review of Adam Becker’s More Everything Forever

 AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity. 

 



 

There is a notion that science fiction is a form of future history, that science-based hard SF predicts future technology. This is almost entirely wrong. Isaac Asimov's robots had positronic brains and obeyed the three laws of robotics, Arthur C. Clarke's aliens hollowed out asteroids and humanity became a deathless spacefaring species of pure mind, and Larry Niven's Ringworld encircled a star. Space empires feature in many books, as does colonisation of the solar system, generation ships travelling to distant stars, and people crossing space in cryogenic stasis or as uploaded minds. After decades of reading hard SF and believing that humanity's destiny was in space, I know that none of this has happened, and lately I have accepted with great sadness that none of it is likely to happen any time soon.

 

In his book on ideas about the future of humanity, Adam Becker describes his experience of this: ‘When I was a kid, I thought Star Trek was a documentary about the future … that this was the future that smart adults had worked out as the best one, that this was what we were going to do … We’d go to space, we’d seek out new life and new civilizations, and we’d do a lot of science. I was six, and that sounded pretty good to me.’  He then says ‘The future, I knew, ultimately lay in space, and going there would solve many - maybe even all - of the problems here on earth. I believed that for a long, long time.’

 

His book is also about a group of influential people who have taken SF as an attempt to predict the future, the tech billionaires who 'explicitly use SF as a blueprint.' Elon Musk (Tesla) wants to go to Mars, Jeff Bezos (Amazon) wants a trillion people in space, Sam Altman (OpenAI, the developer of ChatGPT) thinks AI will literally produce everything, and Marc Andreessen (a leading Silicon Valley investor) wants a techno-capitalist machine to conquer the cosmos with AI. The list goes on.

 

Becker argues the 'credence that tech billionaires give to these specific SF futures validates their pursuit of more … in the name of saving humanity from a threat that doesn't exist.' That threat is a machine superintelligence that can rapidly improve itself, leading to an out-of-control Artificial General Intelligence (AGI) whose creation could, would or will be an extinction event, because an 'unaligned AGI' does not share human goals or motives.

 

Becker’s book dissects the ideas and ideology that many tech billionaires believe. He follows the money through the institutes, organisations and foundations that they fund, finding the connections and overlaps between networks of funders, researchers, philosophers, philanthropists, authors, advocates and activists. Over six lengthy chapters he discusses the history and development of eight separate but similar belief systems, that collectively I’m calling the ‘transhumanist bundle.’ Transhumanism is ‘the belief that we can and should use advanced technology to transform ourselves, transcending humanity’,  and it is the foundational set of ideas shared by the tech billionaires. 

 

Transhumanism is using technology to create a new ‘posthuman’ species with characteristics such as an indefinitely long lifespan, augmented cognitive capability, enhanced senses, and superior rationality. It was originally a mid-20th century idea associated with Pierre Teilhard de Chardin’s Omega Point, an intelligence explosion that would allow humanity to break free from time and space and ‘merge with the divine’ (he was a Catholic priest), then popularised by his friend Julian Huxley with his ideas on transcendence and using selective breeding to create the best version of our species possible. Nick Bostrom founded the World Transhumanist Association in 1998, rebranded as Humanity+ in 2008 with its mission ‘the ethical use of technology and evidence-based science to expand human capabilities.’ His Future of Humanity Institute was founded in 2005 and closed in 2024. 

 

The transhumanist bundle of ideologies has become enormously influential, especially in Silicon Valley and the tech industry, and has been a motivating force behind a lot of the research and development of AGI. Followers of this movement typically believe that AGI will become capable of self-improvement and therefore create the singularity. As Becker notes, there is an element of groupthink at work here. Besides Musk, Bezos and Andreessen, billionaires associated with these ideologies include Peter Thiel, Jaan Tallinn, Sam Altman, Dustin Moskovitz, and Vitalik Buterin, whose donations finance institutes, promote researchers, and support the movement.

 

Becker starts with effective altruism, known for the fall of Sam Bankman-Fried, who funded effective altruism conferences, institutes and organisations before his conviction for fraud. Based on the ideas of utilitarian philosophers Peter Singer, William MacAskill and Toby Ord, the premise was that people should donate as much of their income as possible to causes that provide maximum benefit to mankind, the 'earn to give' idea. Initially the focus was on global poverty, but it later morphed into a focus on AI safety, based on the assumption that the threat of extinction from unaligned AGI is the greatest threat to humanity.

 

MacAskill gave effective altruism an ethical perspective based on the very long-term future and a view that what is morally right is also good. Because the future could potentially contain billions or trillions of people, failing to bring these future people into existence would be morally wrong. Longtermism is therefore closely associated with effective altruism, MacAskill (‘positively influencing the longterm future is the key moral priority of our time’), and Ord  (‘longtermism is animated by a moral re-orientation toward the vast future that existential risks threaten to foreclose’). 

 

Longtermism is based on the reasoning that, if the aim is to positively affect the greatest number of people possible and if the future could contain trillions of future digital and spacefaring people, then we should focus our efforts today on enabling that far future, instead of focusing on current people and contemporary problems, except for preventing catastrophes like unaligned AGI or pandemics. The utilitarian calculation is that the low probability of this future is outweighed by the enormous number of future people. On longtermism Becker says ‘The likelihood of these futures is small, not just because they are scientifically implausible but also because they’re rather specific, depending on so many small things falling into place, things we can’t know about.’

 

Then there is Singularitarianism, a related idea that there will be a technological 'singularity' with the creation of AGI. This would be an 'intelligence explosion', a point in time when technological progress becomes recursive and so rapid it alters humanity. It is associated with Ray Kurzweil and his 2005 book The Singularity is Near, in which humans merge with intelligent machines and expand into space to flood the universe with consciousness, which he predicted would happen by 2045. His 2024 sequel is called The Singularity is Nearer. A different version from Nick Bostrom takes creating Superintelligence (the title of his 2014 book) as the transformative moment that enables us to become posthuman and colonise space.

 

Eliezer Yudkowsky predicts the singularity will happen in ‘more like five years than fifty years’ from now. He believes an unaligned AGI is an existential threat and all AI research should be stopped until there is a way to ensure a future AGI will not kill us all. His Machine Intelligence Research Institute (founded 2005) website opens with: ‘The AI industry is racing toward a precipice. The default consequence of the creation of artificial superintelligence (ASI) is human extinction. Our survival depends on delaying the creation of ASI, as soon as we can, for as long as necessary’.

 

Rationalism arose around Less Wrong, a website founded in 2009 by Yudkowsky 'dedicated to improving human reasoning and decision-making' and motivated by his fear of the threat of an unaligned AGI that exterminates humanity in pursuit of some obscure AI goal, like using all available matter (including humans) to create more computing capacity. This is Bostrom's paperclip problem, where a powerful AI kills everyone and converts the planet, galaxy and eventually the universe into paperclips because that was the goal it was given, or more generally, a 'misaligned AGI is an existential catastrophe.'

 

Extropianism was another variant of transhumanism, with the foundation of the Extropy Institute in 1992 by Max More, who defined extropy as 'the extent of a system's intelligence, information, order, vitality and capacity for improvement'. The Institute's magazine covered AI, nanotechnology, life extension and cryonics, neuroscience and intelligence-increasing technology, and space colonisation. The Institute closed in 2006, but it successfully spread transhumanism through its conferences and email list.

 

Cosmism combines these ideas with sentient AI and mind-uploading technology, leaving biology behind by merging humans and technology to create virtual worlds and develop spacetime engineering and science. It originated with Nikolai Fedorov, a Russian Christian 'late nineteenth century philosopher and librarian' who believed technology would allow the dead to be resurrected, and the cosmos to be filled by 'everyone who ever lived.' There is a strong eschatological element to the transhumanist bundle, with the centrality of belief in transcendence and immortality.

 

Effective accelerationism is the most recent addition to this movement. Venture capitalist Marc Andreessen published his 'Techno-Optimist Manifesto' in 2023 and argued 'advancing technology is one of the most virtuous things that we can do', technology is 'liberatory', opponents of AI development are the enemy, and the future will be about 'overcoming nature.' Andreessen writes that a 'common critique of technology is that it removes choice from our lives as machines make decisions for us. This is undoubtedly true, yet more than offset by the freedom to create our lives that flows from the material abundance created by our use of machines.'

 

There is significant overlap between these different ideologies, and the argument that an AGI will safeguard and expand humanity in the future allows believers in the transhumanist bundle to make creating AGI the most important task in the present. This utopian element of the transhumanist bundle believes a powerful enough AGI will solve problems like global warming, energy shortages and inequality. In fact, the race to develop AGI is inflicting real harm on racial and gender minorities (through profiling based on white males), the disabled (who are not included in training data), and developing countries affected by climate change and the energy consumption of AI. 

 

There are so many problems with the transhumanist bundle. Becker argues they are reductive, ‘in that they make all problems about technology’, for tech billionaires they are profitable, and they offer transcendence, ‘ignoring all limitations … conventional morality … and death itself.’ He calls this ‘collection of related concepts and philosophies … the ideology of technological salvation.’ The transhumanist bundle has presented progress toward AGI as inevitable and grounded in scientific and engineering principles. However, while science is used as a justification for these beliefs, the reality is that they are scientifically implausible.

 

First, AGI has not been achieved, and may not ever be achievable. Superintelligent machines do not exist, but the corporations developing AI have convinced policy-makers and politicians that preventing a hypothetical AI apocalypse should be taken seriously. Second, the threat of unaligned AGI is used to divert attention from the actual harms of bias and discrimination that are being done. Third, mind uploading will not be possible any time soon, and may never be possible given how little understood human intelligence, consciousness and brains are.

 

Fourth, space colonisation is difficult. It may have to be done by robots, because space is increasingly understood to be an inhospitable environment for people; given half a chance it will kill or harm anyone. Mars dust is toxic, Moon regolith is sharp splinters, Venus is hot and the moons of Jupiter cold. There is no air or water. Gravity seems to be necessary for health and growth. The technology to launch and build large space stations or hollow out and terraform asteroids is non-existent.

 

Fifth, sustained exponential economic or technological growth is impossible, but it is built into effective accelerationism, longtermism and the singularity. Kurzweil's Law of Accelerating Returns, where technological advances feed on themselves to increase the rate of further advance, is supported neither by the history of technology, which shows diminishing returns as technologies mature, nor by the laws of physics, which impose limits on size, speed and power.

 

In his conclusion Becker asks the question: 'if not an immortal future in space, then what?' He answers: 'I don't know. The futures of technological salvation are sterile impossibilities and they would be brutally destructive if they come to pass.' He quotes George Orwell: 'Whoever tries to imagine perfection simply reveals his own emptiness', and argues the problems facing humanity are social and political problems that AI is unlikely to help with. 'Technology can't heal the world. We have to do that ourselves.' He suggests technology can be directed and we have to make choices about what we want technology to do as part of the solution to our problems.

 

His 'specific policy proposal' is to tax billionaires, because there is 'no real need for anyone to have more money than half a billion dollars', with personal wealth above that returned to society and invested in health, education and 'everything else it takes to make a modern thriving economy.' This would address inequality and provide political stability: 'Without billionaires, fringe philosophies like rationalism and effective accelerationism would stay on the fringe, rather than being pulled into the mainstream through the reality-warping power of concentrated wealth.'

 

This is a really interesting book that draws together a lot of scattered threads that are not commonly or obviously connected. Becker has deeply researched these ideas and people, who he quotes extensively in their own words (there are 80 pages of references in the Notes). His analysis is sharp and the critique is insightful. The delusional futurism of the tech billionaires is exposed as self-serving and dangerous. 

 

A more structured format with shorter, more focused chapters would make all the detail easier to follow. The chapters are long, between 40 and 60 pages, and each one covers a number of related topics; for example, Kurzweil's singularity and Eric Drexler's nanotechnology share a chapter when each could have had its own. This doesn't affect readability, since Becker is an experienced science writer and writes well, but it makes it hard to keep track of what was discussed where. The absence of an index doesn't help either.

 

Science fiction is stories about possible futures that may or may not happen, and that may not be physically or practically achievable within any reasonable timespan. The further into the future a story is set, the less likely it is to be realised. Becker argues that basing decisions today on such future stories ignores the problems that challenge us in the present, and that substituting the hypothetical danger of AGI for the real issues of climate change, geopolitical instability, inequality and economic uncertainty is foolish. Therefore the tech billionaires, and the ideology of the transhumanist bundle they follow, are a real threat to the future of humanity.

 

 

Adam Becker, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley's Crusade to Control the Fate of Humanity. Basic Books, 2025.