Working on geoengineering and AI briefings

Last Christmas break, I wrote a detailed briefing on the existential risks to humanity from nuclear weapons.

This year I am starting two more: one on the risks from artificial intelligence, and one on the promises and perils of geoengineering, which I increasingly feel is emerging as our default response to climate change.

I have had a few geoengineering books in my book stacks for years, generally buried under the whaling books in the ‘too depressing to read’ zone. AI I have been learning a lot more about recently, including through Nick Bostrom and Toby Ord’s books and Robert Miles’ incredibly helpful YouTube series (based on Amodei et al’s instructive paper).


On the potential of superfast minds

The simplest example of speed superintelligence would be a whole brain emulation running on fast hardware. An emulation operating at a speed of ten thousand times that of a biological brain would be able to read a book in a few seconds and write a PhD thesis in an afternoon. With a speedup factor of a million, an emulation could accomplish an entire millennium of intellectual work in one working day.

To such a fast mind, events in the external world appear to unfold in slow motion. Suppose your mind ran at 10,000X. If your fleshy friend should happen to drop his teacup, you could watch the porcelain slowly descend toward the carpet over the course of several hours, like a comet silently gliding through space toward an assignation with a far-off planet; and, as the anticipation of the coming crash tardily propagates through the folds of your friend’s grey matter and from thence out to his peripheral nervous system, you could observe his body gradually assuming the aspect of a frozen oops—enough time for you not only to order a replacement cup but also to read a couple of scientific papers and take a nap.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. p. 64
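Bostrom's slow-motion teacup is easy to sanity-check with basic kinematics. A back-of-envelope sketch, where the 10,000× speedup comes from the quote but the drop height is my assumption:

```python
import math

g = 9.81          # gravitational acceleration, m/s^2
height = 1.5      # assumed drop height from a standing person's hand, metres
speedup = 10_000  # Bostrom's example speed factor

# Real-world fall time for the cup: h = (1/2) g t^2  =>  t = sqrt(2h/g)
t_real = math.sqrt(2 * height / g)   # about 0.55 s
t_subjective = t_real * speedup      # about 5,500 s
print(f"{t_subjective / 3600:.1f} subjective hours")  # ~1.5 hours
```

A half-second fall stretches to roughly an hour and a half at 10,000×, which puts Bostrom's "several hours" (watching the whole arc, not just the fall) in the right order of magnitude.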

General artificial intelligences will be aliens

[A]rtificial intelligence need not much resemble a human mind. AIs could be—indeed, it is likely that most will be—extremely alien. We should expect that they will have very different cognitive architectures than biological intelligences, and in their early stages of development they have very different profiles of cognitive strengths and weaknesses (though, as we shall later argue, they could eventually overcome any initial weakness). Furthermore, the goal systems of AIs could diverge radically from those of human beings. There is no reason to expect a generic AI to be motivated by love or hate or pride or other such common human sentiments: these complex adaptations would require deliberate expensive effort to recreate in AIs. This is at once a big problem and a big opportunity.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. p. 35


While global conditions and humanity’s prospects for the future are disastrous, my own life has become a lot more stable and emotionally tolerable over the course of this past year of employment. The PhD did immense psychological damage to me. After a lifetime in a competitive education system in which I had done exceptionally well, the PhD tended to reinforce the conclusion that everything I did was bad and wrong, and that I had no control over what would happen to my life. I had serious fears about ever finding stable employment after that long and demoralizing time away from the job market (though still always working, to limit the financial damage from those extra years in school). Being out and employed — and even seeing shadows of other possibilities in the future — gives me a sense materially, psychologically, and physiologically of being able to rebuild and endure.

As noted in my pre-US-election post, having a stable home and income makes the disasters around the world seem less like personal catastrophes, though the general population are behaving foolishly when they assume that the 2020–60 period will bear any resemblance to the ‘normality’ of, say, the 1980–2020 period. Of course, there has been no such thing as intergenerational stability or normality since the Industrial Revolution; after centuries where many lives remained broadly similar, the world is now transforming every generation or faster. In the 20th century, much of that change was about technological deployment. In the years ahead, ecological disruption will be a bigger part of the story — along with the technological, sociological, and political convulsions which will accompany the collapse of systems that have supported our civilization for eons.

My own answer to living through a time of catastrophe — in many ways, literally an apocalypse and the end of humanity, as we are all thrown into a post-human future where technology and biology fuse together — is to apply myself in doing my best in everything I undertake, whether that’s photographing a conference, making sandwiches for dinner, or advocating for climate stability and reduced nuclear weapon risks.

None of us can control the world. A huge dark comet could wipe us out tomorrow. A supervolcano or a coronal mass ejection from the sun could abruptly knock us into a nuclear-winter-like world or a world where all our technology gets broken simultaneously, stopping the farm-to-citizens conveyor belt that keeps us alive. There are frighteningly grounded descriptions of how a nuclear war could throw us all into the dark simultaneously, perhaps unable to resume long-distance contact with others for months or years.

It really could happen all of a sudden, with no opportunities for takesies-backsies or improving our resilience after the fact. We live in a world on a precipice, so all we can do is share our gratitude, appreciation, and esteem with those who have enriched our lives while it is possible to do so, while retaining our determination to keep fighting for a better world, despite our species’ manifest inabilities and pathologies.

Worms or moles

It is not hyperbole to make the statement [that] if humans ever reside on the Moon, they will have to live like ants, earthworms or moles. The same is true for all round celestial bodies without a significant atmosphere or magnetic field—Mars included. —Dr. James Logan, Former NASA Chief of Flight Medicine and Chief of Medical Operations at Johnson Space Center.

Weinersmith, Kelly and Zach. A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through? Penguin Random House, 2023. p. 192 ([that] in Weinersmith and Weinersmith)

Combinatorial math and the impossibility of rationality

A perfectly rational entity maximizes the expected satisfaction of its preferences over all possible future lives it could choose to lead. I cannot begin to write down a number that describes the complexity of this decision problem, but I find the following thought experiment helpful. First, note that the number of motor control choices that a human makes in a lifetime is about twenty trillion… Next, let’s see how far brute force will get us with the aid of Seth Lloyd’s ultimate-physics laptop, which is one billion trillion trillion times faster than the world’s fastest computer. We’ll give it the task of enumerating all possible sequences of English words (perhaps as a warmup for Jorge Luis Borges’s Library of Babel), and we’ll let it run for a year. How long are the sequences that it can enumerate in that time? A thousand pages of text? A million pages? No. Eleven words. This tells you something about the difficulty of designing the best possible life of twenty trillion actions. In short, we are much further from being rational than a slug is from overtaking the starship Enterprise traveling at warp nine. We have absolutely no idea what a rationally chosen life would be like.

Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. Viking, 2019. p. 232 (italics in original)

Related: How many unique English tweets are possible? How long would it take for the population of the world to read them all out loud?
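Russell's startling "eleven words" figure can be reproduced with one line of arithmetic. A rough sketch, assuming a working English vocabulary of about 10^5 words and Seth Lloyd's ultimate-laptop bound of roughly 10^51 operations per second (both figures are my assumed inputs for the estimate, not taken from Russell's text):

```python
import math

vocab = 1e5             # assumed working English vocabulary size
ops_per_second = 1e51   # Seth Lloyd's ultimate-laptop bound (approximate)
seconds_per_year = 3.15e7

sequences_enumerable = ops_per_second * seconds_per_year  # ~3e58
# Longest n such that vocab**n <= sequences_enumerable:
n = math.floor(math.log(sequences_enumerable) / math.log(vocab))
print(n)  # 11
```

Even granting the laptop one enumeration per operation, eleven-word sequences exhaust a year's budget, matching Russell's point about the hopelessness of brute-force rationality.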

Game theory and the limits of reason

I myself suffer from a morbid sense of despair, and even now, decades after I worked with von Neumann, I still find myself questioning our central tenet: Is there really a rational course of action in every situation? Johnny proved it mathematically beyond a doubt, but only for two players with diametrically opposing goals. So there may be a vital flaw in our reasoning that any keen observer will immediately become aware of; namely, that the minimax theorem that underlies our entire framework presupposes perfectly rational and logical agents, agents who are interested only in winning, agents who possess a perfect understanding of the rules and a total recall of all their past moves, agents who also have a flawless awareness of the possible ramifications of their own actions, and of their opponents’ actions, at every single step of the game. The only person I ever met who was exactly like that was Johnny von Neumann. Normal people are not like that at all. Yes, they lie, they cheat, deceive, connive, and conspire, but they also cooperate, they can sacrifice themselves for others, or simply make decisions on a whim. Men and women follow their guts. They heed hunches and make careless mistakes. Life is so much more than a game. Its full wealth and complexity cannot be captured by equations, no matter how beautiful or perfectly balanced. And human beings are not the perfect poker players that we envisioned. They can be highly irrational, driven and swayed by their emotions, subject to all kinds of contradictions. And while this sparks off all the ungovernable chaos that we see all around us, it is also a mercy, a strange angel that protects us from the mad dreams of reason.

Labatut, Benjamin. The MANIAC. Penguin Random House, 2023. pp. 144–145 (italics in original)
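The minimax theorem Labatut's narrator describes is concrete enough to compute. A toy sketch for a two-player zero-sum game (the payoff matrix is invented for illustration): the row player's guaranteed floor (maximin) and ceiling (minimax) only coincide in pure strategies when the game has a saddle point; von Neumann's theorem says they always coincide once mixed strategies are allowed.

```python
# Payoffs to the row player in a hypothetical two-player zero-sum game.
payoff = [
    [3, -1],
    [0,  2],
]

# Row player picks the row whose worst case is best (maximin).
maximin = max(min(row) for row in payoff)                       # 0
# Column player picks the column whose best case for the row player is worst.
minimax = min(max(row[j] for row in payoff) for j in range(2))  # 2
print(maximin, minimax)  # 0 2 -> no pure-strategy saddle point here
```

Because maximin ≠ minimax, this particular game has no pure-strategy solution; the players must randomize, which is exactly the situation von Neumann's theorem resolves, and exactly the idealized rationality the narrator doubts real people can supply.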

Reading Kahneman’s Thinking, Fast and Slow recently, I was struck at several points by what seemed like an unjustified assumption that people are competent at mental arithmetic: that you can give a person a list of probabilities and payouts and then find it genuinely surprising when they can’t or don’t pick the best one. For people constantly immersed in calculation this may be puzzling, but I have personal experience of highly intelligent and knowledgeable people struggling to calculate (or being unwilling even to try calculating) what a certain percentage of a number is, as for a tip. Studies of the general public’s numerical literacy reveal a worrisome inability to properly gauge millions against billions.

When mathematicians, logicians, and game theorists forget that much of the population cannot or will not calculate, they miss the obvious cause of deviations from their predictions and theories.
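The calculations at issue are trivial in symbols yet genuinely hard for many people to do in their heads. Two hedged illustrations (the bill amount is invented): a restaurant tip, and the millions-versus-billions gap made tangible by counting one number per second.

```python
# An 18% tip on a hypothetical restaurant bill.
bill = 43.50
tip = bill * 0.18
print(f"tip: ${tip:.2f}")  # tip: $7.83

# Gauging millions against billions: counting one number per second.
seconds_per_day = 86_400
days_to_count_a_million = 1_000_000 / seconds_per_day                 # ~11.6 days
years_to_count_a_billion = 1_000_000_000 / seconds_per_day / 365.25   # ~31.7 years
print(f"{days_to_count_a_million:.1f} days vs {years_to_count_a_billion:.1f} years")
```

The second calculation is the one the studies flag: a million seconds is under two weeks, while a billion seconds is most of a working lifetime, a gap of three orders of magnitude that intuition flattens.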

Travis Rector on fossil fuel abolition

About 90% of climate change is from the extraction and use of fossil fuels. We need to stop. As Chapters 6 and 7 point out, this won’t be easy—especially when fighting against industries that stand to lose trillions of dollars from the energy transition. But the rapid growth of wind and solar shows us that it’s already happening. Our role is to help it happen even faster.

Rector, Travis A. “Preface.” In: Rector, Travis A. Climate Change for Astronomers: Causes, consequences, and communication. IOP Publishing, 2024. p. xxi

Also:

We are at a crossroads in the history of our 4.5-billion-year-old planet. These days in which we are alive are precious beyond measure, especially from the perspective of Earthlings who come after us. Every day the fossil fuel industry continues to exist makes our planet hotter, taking us more deeply into irreversible catastrophe. The only way out is to end the fossil fuel industry; the faster we do, the more we will save… It is incredibly important to fight the fossil fuel industry, which has captured world leaders and international climate negotiations.

Kalmus, Peter. “Foreword.” In: Ibid. p. xxii