There is a notion that science fiction is a form of future history, that science-based hard SF predicts future technology. This is almost entirely wrong. Isaac Asimov’s robots had positronic brains and obeyed the three laws of robotics, Arthur C. Clarke’s aliens hollowed out asteroids and humanity became a deathless spacefaring species of pure mind, and Larry Niven’s Ringworld encircled a star. Space empires feature in many books, as do colonisation of the solar system, generation ships travelling to distant stars, and people crossing space in cryogenic stasis or as uploaded minds. After decades of reading hard SF and believing that humanity’s destiny was in space, I know that none of this has happened, and lately I have accepted with great sadness that none of it is likely to happen any time soon.
In his book on ideas about the future of humanity, Adam Becker describes his experience of this: ‘When I was a kid, I thought Star Trek was a documentary about the future … that this was the future that smart adults had worked out as the best one, that this was what we were going to do … We’d go to space, we’d seek out new life and new civilizations, and we’d do a lot of science. I was six, and that sounded pretty good to me.’ He then says ‘The future, I knew, ultimately lay in space, and going there would solve many - maybe even all - of the problems here on earth. I believed that for a long, long time.’
His book is also about a group of influential people who have taken SF as a prediction of the future, the tech billionaires who ‘explicitly use SF as a blueprint.’ Elon Musk (Tesla) wants to go to Mars, Jeff Bezos (Amazon) wants a trillion people in space, Sam Altman (OpenAI, the developer of ChatGPT) thinks AI will literally produce everything, and Marc Andreessen (a leading Silicon Valley investor) wants a techno-capitalist machine to conquer the cosmos with AI. The list goes on.
Becker argues the ‘credence that tech billionaires give to these specific SF futures validates their pursuit of more … in the name of saving humanity from a threat that doesn’t exist.’ That threat is a machine superintelligence that can rapidly improve itself, leading to an out-of-control Artificial General Intelligence (AGI) whose creation could, would or will be an extinction event, because an ‘unaligned AGI’ does not share human goals or motives.
Becker’s book dissects the ideas and ideology that many tech billionaires believe. He follows the money through the institutes, organisations and foundations that they fund, finding the connections and overlaps between networks of funders, researchers, philosophers, philanthropists, authors, advocates and activists. Over six lengthy chapters he discusses the history and development of eight separate but similar belief systems, which I am collectively calling the ‘transhumanist bundle.’ Transhumanism is ‘the belief that we can and should use advanced technology to transform ourselves, transcending humanity’, and it is the foundational set of ideas shared by the tech billionaires.
Transhumanism means using technology to create a new ‘posthuman’ species with characteristics such as an indefinitely long lifespan, augmented cognitive capability, enhanced senses, and superior rationality. It was originally a mid-20th century idea associated with Pierre Teilhard de Chardin’s Omega Point, an intelligence explosion that would allow humanity to break free from time and space and ‘merge with the divine’ (he was a Catholic priest). It was then popularised by his friend Julian Huxley, with his ideas on transcendence and on using selective breeding to create the best version of our species possible. Nick Bostrom founded the World Transhumanist Association in 1998, rebranded as Humanity+ in 2008 with its mission ‘the ethical use of technology and evidence-based science to expand human capabilities.’ His Future of Humanity Institute was founded in 2005 and closed in 2024.
The transhumanist bundle of ideologies has become enormously influential, especially in Silicon Valley and the tech industry, and has been a motivating force behind much of the research and development of AGI. Followers of this movement typically believe that AGI will become capable of self-improvement and therefore create the singularity. As Becker notes, there is an element of groupthink at work here. Besides Musk, Bezos and Andreessen, billionaires associated with these ideologies include Peter Thiel, Jaan Tallinn, Sam Altman, Dustin Moskovitz, and Vitalik Buterin, whose donations finance institutes, promote researchers, and support the movement.
Becker starts with effective altruism, best known for the fall of Sam Bankman-Fried, who funded effective altruism conferences, institutes and organisations before his conviction for fraud. Based on the ideas of the utilitarian philosophers Peter Singer, William MacAskill and Toby Ord, the premise was that people should donate as much of their income as possible to causes that provide maximum benefit to mankind, the ‘earn to give’ idea. Initially the focus was on global poverty, but it later morphed into a focus on AI safety, based on the assumption that extinction from unaligned AGI is the greatest threat to humanity.
MacAskill gave effective altruism an ethical perspective based on the very long-term future and a view that what is morally right is also good. Because the future could potentially contain billions or trillions of people, failing to bring these future people into existence would be morally wrong. Longtermism is therefore closely associated with effective altruism, MacAskill (‘positively influencing the longterm future is the key moral priority of our time’), and Ord (‘longtermism is animated by a moral re-orientation toward the vast future that existential risks threaten to foreclose’).
Longtermism is based on the reasoning that, if the aim is to positively affect the greatest number of people possible and if the future could contain trillions of future digital and spacefaring people, then we should focus our efforts today on enabling that far future, instead of focusing on current people and contemporary problems, except for preventing catastrophes like unaligned AGI or pandemics. The utilitarian calculation is that the low probability of this future is outweighed by the enormous number of future people. On longtermism Becker says ‘The likelihood of these futures is small, not just because they are scientifically implausible but also because they’re rather specific, depending on so many small things falling into place, things we can’t know about.’
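To make the expected-value arithmetic behind this concrete, here is a minimal sketch in Python with purely hypothetical numbers; neither the figures nor the variable names come from Becker or the longtermist authors he discusses.

```python
# A minimal sketch of the naive expected-value reasoning behind longtermism.
# All numbers are hypothetical illustrations, not figures from Becker's book.

people_alive_today = 8e9              # roughly the current world population
hypothetical_future_people = 1e16     # an assumed count of far-future digital/spacefaring lives
assumed_probability = 1e-3            # an assumed (tiny) chance that this future is realised

expected_future_beneficiaries = hypothetical_future_people * assumed_probability

# Even at a 0.1% chance, the expected number of future beneficiaries (1e13)
# dwarfs everyone alive today (8e9), which is how the calculation ends up
# prioritising speculative far futures over present problems.
print(expected_future_beneficiaries > people_alive_today)  # True
```

This is exactly the kind of calculation Becker questions: the conclusion depends entirely on the made-up probability and population figures plugged into it.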
Then there is Singularitarianism, a related idea that there will be a technological ‘singularity’ with the creation of AGI. This would be an ‘intelligence explosion’, a point in time when technological progress becomes recursive and so rapid that it alters humanity. It is associated with Ray Kurzweil and his 2005 book The Singularity is Near, which predicted that by 2045 humans will merge with intelligent machines and expand into space to flood the universe with consciousness. His 2024 follow-up is called The Singularity is Nearer. A different version comes from Nick Bostrom, who takes the creation of superintelligence (the title of his 2014 book) as the transformative moment that enables us to become posthuman and colonise space.
Eliezer Yudkowsky predicts the singularity will happen ‘more like five years than fifty years’ from now. He believes an unaligned AGI is an existential threat and that all AI research should be stopped until there is a way to ensure a future AGI will not kill us all. His Machine Intelligence Research Institute (founded 2005) website opens with: ‘The AI industry is racing toward a precipice. The default consequence of the creation of artificial superintelligence (ASI) is human extinction. Our survival depends on delaying the creation of ASI, as soon as we can, for as long as necessary’.
Rationalism arose around Less Wrong, a website founded by Yudkowsky in 2009 and ‘dedicated to improving human reasoning and decision-making’, motivated by his fear of an unaligned AGI that exterminates humanity in pursuit of some obscure goal, such as using all available matter (including humans) to create more computing capacity. This is Bostrom’s paperclip problem, in which a powerful AI kills everyone and converts the planet, the galaxy and eventually the universe into paperclips because that was the goal it was given, or, more generally, the claim that a ‘misaligned AGI is an existential catastrophe.’
Extropianism was another variant of transhumanism, beginning with the foundation of the Extropy Institute in 1992 by Max More, who defined extropy as ‘the extent of a system’s intelligence, information, order, vitality and capacity for improvement’. The Institute’s magazine covered AI, nanotechnology, life extension and cryonics, neuroscience and intelligence-increasing technology, and space colonisation. The Institute closed in 2006, but it successfully spread transhumanism through its conferences and email list.
Cosmism combines these ideas with sentient AI and mind-uploading technology, leaving biology behind by merging humans and technology to create virtual worlds and develop spacetime engineering and science. It originated with Nikolai Fedorov, a Russian Christian ‘late nineteenth century philosopher and librarian’ who believed technology would allow the dead to be resurrected and the cosmos to be filled by ‘everyone who ever lived.’ There is a strong eschatological element to the transhumanist bundle, with its central belief in transcendence and immortality.
Effective accelerationism is the most recent addition to this movement. Venture capitalist Marc Andreessen published his ‘Techno-Optimist Manifesto’ in 2023, arguing that ‘advancing technology is one of the most virtuous things that we can do’, that technology is ‘liberatory’, that opponents of AI development are the enemy, and that the future will be about ‘overcoming nature.’ Andreessen writes that a ‘common critique of technology is that it removes choice from our lives as machines make decisions for us. This is undoubtedly true, yet more than offset by the freedom to create our lives that flows from the material abundance created by our use of machines.’
There is significant overlap between these different ideologies, and the argument that an AGI will safeguard and expand humanity in the future allows believers in the transhumanist bundle to make creating AGI the most important task in the present. This utopian element of the transhumanist bundle holds that a powerful enough AGI will solve problems like global warming, energy shortages and inequality. In fact, the race to develop AGI is inflicting real harm on racial and gender minorities (through profiling by systems built around white male defaults), the disabled (who are not included in training data), and developing countries affected by climate change and by the energy consumption of AI.
There are so many problems with the transhumanist bundle. Becker argues they are reductive, ‘in that they make all problems about technology’, for tech billionaires they are profitable, and they offer transcendence, ‘ignoring all limitations … conventional morality … and death itself.’ He calls this ‘collection of related concepts and philosophies … the ideology of technological salvation.’ The transhumanist bundle has presented progress toward AGI as inevitable and grounded in scientific and engineering principles. However, while science is used as a justification for these beliefs, the reality is that they are scientifically implausible.
First, AGI has not been achieved, and may never be achievable. Superintelligent machines do not exist, but the corporations developing AI have convinced policy-makers and politicians that preventing a hypothetical AI apocalypse should be taken seriously. Second, the threat of unaligned AGI is used to divert attention from the actual harms of bias and discrimination being done right now. Third, mind uploading will not be possible any time soon, and may never be possible given how little is understood about human intelligence, consciousness and brains.
Fourth, space colonisation is difficult. It may have to be done by robots, because space is increasingly understood to be an inhospitable environment for people: given half a chance it will kill or harm anyone. Mars dust is toxic, lunar regolith is made of sharp splinters, Venus is hot and the moons of Jupiter are cold. There is no air or water. Gravity seems to be necessary for health and growth. The technology to launch and build large space stations, or to hollow out and terraform asteroids, does not exist.
Fifth, sustained exponential economic or technological growth is impossible, yet it is built into effective accelerationism, longtermism and the singularity. Kurzweil’s Law of Accelerating Returns, where technological advances feed on themselves to increase the rate of further advance, is supported neither by the history of technology, which shows diminishing returns as technologies mature, nor by the laws of physics, which impose limits on size, speed and power.
In his conclusion Becker asks: ‘if not an immortal future in space, then what?’ He answers, ‘I don't know. The futures of technological salvation are sterile impossibilities and they would be brutally destructive if they come to pass.’ He quotes George Orwell: ‘Whoever tries to imagine perfection simply reveals his own emptiness’, and argues that the problems facing humanity are social and political problems that AI is unlikely to help with. ‘Technology can't heal the world. We have to do that ourselves.’ He suggests technology can be directed, and that we have to make choices about what we want technology to do as part of the solution to our problems.
His ’specific policy proposal’ is to tax billionaires because there is ‘no real need for anyone to have more money than half a billion dollars’, with personal wealth above that returned to society and invested in health, education and ‘everything else it takes to make a modern thriving economy.’ This would address inequality and provide political stability, and ‘Without billionaires, fringe philosophies like rationalism and effective accelerationism would stay on the fringe, rather than being pulled into the mainstream through the reality-warping power of concentrated wealth.’
This is a really interesting book that draws together a lot of scattered threads that are not commonly or obviously connected. Becker has deeply researched these ideas and people, whom he quotes extensively in their own words (there are 80 pages of references in the Notes). His analysis is sharp and his critique insightful. The delusional futurism of the tech billionaires is exposed as self-serving and dangerous.
A more structured format with shorter, more focused chapters would have made all the detail easier to follow. The chapters are long, between 40 and 60 pages, and each covers a number of related topics; for example, Kurzweil’s singularity and Eric Drexler’s nanotechnology share a chapter when each could have had its own. This doesn’t affect readability, since Becker is an experienced science writer and writes well, but it does make it hard to keep track of what was discussed where. The absence of an index doesn’t help either.
Science fiction tells stories about possible futures that may or may not happen, and that may not be physically or practically achievable within any reasonable timespan. The further into the future a story is set, the less likely it is to be realised. Becker argues that basing decisions today on such future stories ignores the problems that challenge us in the present, and that substituting the hypothetical danger of AGI for the real issues of climate change, geopolitical instability, inequality and economic uncertainty is foolish. The tech billionaires, and the ideology of the transhumanist bundle they follow, are therefore a real threat to the future of humanity.
Adam Becker, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity. Basic Books, 2025.