As someone who has spent their adult life fighting for a safe and stable environment, this is a sad and frightening day. America’s disastrous choice shows how limited our ability to learn and to make self-protective choices is. As the fundamental biophysical basis of our society is undermined, it seems we only become more short-termist, self-centred, and easily fooled.
Author: Milan
Ord on the precipice that faces us
If all goes well, human history is just beginning. Humanity is about two hundred thousand years old. But the Earth will remain habitable for hundreds of millions more—enough time for millions of future generations; enough to end disease, poverty and injustice forever; enough to create heights of flourishing unimaginable today. And if we could learn to reach out further into the cosmos, we could find more time yet: trillions of years, to explore billions of worlds. Such a lifespan places present-day humanity in its earliest infancy. A vast and extraordinary adulthood awaits.
…
This book argues that safeguarding humanity’s future is the defining challenge of our time. For we stand at a crucial moment in the history of our species. Fueled by technological progress, our power has grown so great that for the first time in humanity’s long history, we have the capacity to destroy ourselves—severing our entire future and everything we could become.
Yet humanity’s wisdom has grown only falteringly, if at all, and lags dangerously behind. Humanity lacks the maturity, coordination and foresight necessary to avoid making mistakes from which we could never recover. As the gap between our power and wisdom grows, our future is subject to an ever-increasing level of risk. The situation is unsustainable. So over the next few centuries, humanity will be tested: it will either act decisively to protect itself and its longterm potential, or, in all likelihood, this will be lost forever.
To survive these challenges and secure our future, we must act now: managing the risks of today, averting those of tomorrow, and becoming the kind of society that will never pose such risks to itself again.
Ord, Toby. The Precipice: Existential Risk and the Future of Humanity. Hachette Books, 2020. pp. 3–4
Roberts and Savage on American progressivism after the Trump re-election
David Roberts’ Volts podcast recently had a segment with sex columnist Dan Savage about the US election and what has gone wrong in progressive politics: Dan Savage on blue America in the age of Trump
It makes me wonder: can there ever be an effective political constituency against the commodification and financialization of housing (where houses are more important as stores of wealth than as shelters)? Could there be people who want house prices to fall a lot and stay low? Or will every political party inevitably end up caring more about homeowners who want to keep prices high forever, while they live alone or in couples inside big houses in neighbourhoods where nobody with a growing family can afford to live?
As young people continue to internalize that their elders would rather see all future generations die in torment than change their lifestyles, will we see generation-based political movements of young people calling to reduce funding for end-of-life healthcare and to cut tax breaks and subsidies for homeowners? Is such a movement doomed from the start, since the rich and well-connected will always be able to capture whoever gets elected?
There are a lot of other interesting points in the discussion, including how living densely together seems to make people liberal and how the purity politics of the progressive left inhibit movement-building (“conservatives chase converts, liberals hunt heretics”).
Breaking research
Working on geoengineering and AI briefings
Last Christmas break, I wrote a detailed briefing on the existential risks to humanity from nuclear weapons.
This year I am starting two more: one on the risks from artificial intelligence, and one on the promises and perils of geoengineering, which I increasingly feel is emerging as our default response to climate change.
I have had a few geoengineering books in my book stacks for years, generally buried under the whaling books in the ‘too depressing to read’ zone. I have been learning a lot more about AI recently, including through Nick Bostrom’s and Toby Ord’s books and Robert Miles’ incredibly helpful YouTube series (based on Amodei et al.’s instructive paper).
Related re: geoengineering:
- We are sliding toward geoengineering
- Planting trees won’t solve climate change
- Open thread: shadow solutions to climate change
- Geoengineering via rock weathering
- CBC documentary on geoengineering
- Paths to geoengineering
- Who would control geoengineering?
- Ocean iron fertilization for geoengineering
- Ken Caldeira on geoengineering as contingency
- Geoengineering with lasers
- Dyson’s carbon eating trees
- Will technology save us?
- Geoengineering: wise to have a fallback option
Related re: AI:
- General artificial intelligences will be aliens
- Combinatorial math and the impossibility of rationality
- Discrimination by artificial intelligence
- Designing stoppable AIs
- Robots in agriculture
- AI + social networks + unscrupulous actors
- Automation and labour
- Ethics and autonomous robots in war
- The plausibility of driverless cars
- Increasingly clever machines
- Automation and the jobs of the future
- Googling the Cyborg
Aspirations
Little good ever comes from discussing climate change or nuclear weapons socially
Our social world is ruled by the affect heuristic: what feels good seems true, and what feels bad we distance ourselves from and reject. We judge what is true or false based on whether it makes us feel good or bad.
I think I’m going to stop talking to people socially about nuclear weapons and climate change.
Almost always, what the other person really wants is reassurance that their future will be OK and that the choices they are making are OK.
The conversation tends to become a cross-examination in which they look for a way to dismiss me in order to protect their hopefulness and their view of themselves as a good person. It’s a bit like how people feel compelled to tell me how especially important or moral (or how little enjoyed) their air travel plans are, as though I am a religious authority who can forgive them. “Confess and be forgiven” is a cheerful motto for those who refuse to change their behaviour.
These conversations tend to be miserable for both sides: for them because they are presented with evidence that they really should be fearful when they fervently want the opposite, and for me because seeing how utterly unwilling people are even to face the problem, much less take any commensurate action, only deepens my alienation. Being convincing and giving good evidence makes things worse for both of us: they become anxious instead of reassured, and I am reminded how little relationship there is between evidence and human decision-making.
It is also a fundamental error to think that if a person believes a problem is serious and that you are working on it, they will support you. You might think the chain of logic would be “this person seems to be working on a problem which I consider real and important, so I will support them at least conversationally if not materially”, when it is much more often “this person is talking about something that makes me feel bad, so I will find a way to believe that they are wrong or that what they are saying is irrelevant”. The desire to feel good about ourselves and the world quickly and reliably trumps whatever desire we may have to believe true things or to act in a manner consistent with our beliefs.
It seems smarter going forward just to say that I won’t discuss these subjects and that whatever work I am doing on them is secret.
It’s crucial when setting such boundaries to refuse to debate or justify them. Let people through that crack, and it’s sure to become another affect-driven argument about how they prefer to imagine their future as stable, safe, and prosperous and their own conduct as wise and moral — with me cast as the meanie squashing their joys.
On the potential of superfast minds
The simplest example of speed superintelligence would be a whole brain emulation running on fast hardware. An emulation operating at a speed of ten thousand times that of a biological brain would be able to read a book in a few seconds and write a PhD thesis in an afternoon. With a speedup factor of a million, an emulation could accomplish an entire millennium of intellectual work in one working day.
To such a fast mind, events in the external world appear to unfold in slow motion. Suppose your mind ran at 10,000×. If your fleshy friend should happen to drop his teacup, you could watch the porcelain slowly descend toward the carpet over the course of several hours, like a comet silently gliding through space toward an assignation with a far-off planet; and, as the anticipation of the coming crash tardily propagates through the folds of your friend’s grey matter and from thence out to his peripheral nervous system, you could observe his body gradually assuming the aspect of a frozen oops—enough time for you not only to order a replacement cup but also to read a couple of scientific papers and take a nap.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. p. 64
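As a rough check on the arithmetic in this passage, here is a minimal Python sketch of the subjective-time calculation. The 8-hour working day and the roughly 1.5 m teacup drop height are my own assumptions, not Bostrom’s:

```python
import math

def subjective_seconds(real_seconds: float, speedup: float) -> float:
    """Subjective time experienced by a sped-up mind during a real-world interval."""
    return real_seconds * speedup

HOUR = 3600.0
YEAR = 365.25 * 24 * HOUR

# "A millennium of intellectual work in one working day" at a 1,000,000x speedup,
# assuming an 8-hour working day.
work_years = subjective_seconds(8 * HOUR, 1_000_000) / YEAR
print(f"8-hour day at 1,000,000x: about {work_years:,.0f} subjective years")

# The falling teacup at a 10,000x speedup, assuming a ~1.5 m drop under gravity.
g, height_m = 9.81, 1.5
fall_s = math.sqrt(2 * height_m / g)  # about 0.55 s of real time
fall_hours = subjective_seconds(fall_s, 10_000) / HOUR
print(f"Teacup fall at 10,000x: about {fall_hours:.1f} subjective hours")
```

The million-fold working day comes out to roughly 900 subjective years, close to Bostrom’s millennium, while the half-second teacup fall stretches to about an hour and a half of subjective time, the same slow-motion territory the passage describes.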
General artificial intelligences will be aliens
[A]rtificial intelligence need not much resemble a human mind. AIs could be—indeed, it is likely that most will be—extremely alien. We should expect that they will have very different cognitive architectures than biological intelligences, and in their early stages of development they will have very different profiles of cognitive strengths and weaknesses (though, as we shall later argue, they could eventually overcome any initial weakness). Furthermore, the goal systems of AIs could diverge radically from those of human beings. There is no reason to expect a generic AI to be motivated by love or hate or pride or other such common human sentiments: these complex adaptations would require deliberate expensive effort to recreate in AIs. This is at once a big problem and a big opportunity.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. p. 35
Caring and the need to preserve the status quo
It strikes me that recognizing that a great deal of work is not strictly productive but caring, and that there is always a caring aspect even to the most apparently impersonal work, does suggest one reason why it’s so difficult to create a different society with a different set of rules. Even if we don’t like what the world looks like, the fact remains that the conscious aim of most of our actions, productive or otherwise, is to do well by others; often, very specific others. Our actions are caught up in relations of caring. But most caring relations require we leave the world more or less as we found it. In the same way that teenage idealists regularly abandon their dreams of creating a better world and come to accept the compromises of adult life at precisely the moment they marry and have children, caring for others, especially over the long term, requires maintaining a world that’s relatively predictable as the grounds on which caring can take place. One cannot save to ensure a college education for one’s children unless one is sure in twenty years there will still be colleges—or for that matter, money. And that, in turn, means that love for others—people, animals, landscapes—regularly requires the maintenance of institutional structures one might otherwise despise.
Graeber, David. Bullshit Jobs: A Theory. Simon & Schuster, 2018. p. 219
Related: