A technical but fascinating discussion about the emergence of gravitational wave astrophysics and what it has been telling us about neutron stars, black holes, and the laws of nature:
Fiction, versus reality’s lack of resolution
In all the time that I have been concerned, and later terrified, about climate change and the future of life on Earth, I still had the narrative conventions of fiction shaping my expectations: the emergence of a big problem imperils and inspires a group of people to find solutions, and eventually the people threatened by the problem accept, if not embrace, those solutions. A tolerable norm is disrupted and then restored, because people have the ability to perceive and reason, and the willingness and virtue to act appropriately when they see what is wrong.
Now, I feel acutely confronted by what a bad model for human reactions this is. It seems to me now that we almost never want to understand problems or their real causes; we almost always prefer an easy answer and somebody to blame. The narrative arc of ‘problem emerges, people understand problem, people solve problem’ has a real-world equivalent more like ‘problems emerge but people usually miss or misunderstand them, and where they do perceive problems to exist they interpret them using stories where the most important purpose is to justify and protect the powerful’.
If the history happening around us were a movie, it might be one that I’d want to walk out of, between the unsatisfying plot and the unsympathetic actors. Somehow the future has come to feel more like a sentence than a promise: something which will need to be endured, watching everything good that humankind has achieved getting eroded and destroyed, and in which having the ability to understand and name what is happening just leads to those around you punishing and rejecting you by reflex.
The uncertainty principle and limits of knowledge
[Heisenberg and Bohr] left the park and plunged into the city streets while they discussed the consequences of Heisenberg’s discovery, which Bohr saw as the cornerstone upon which a truly new physics could be founded. In philosophical terms, he told him as he took his arm, this was the end of determinism. Heisenberg’s uncertainty principle shredded the hopes of all those who had put faith in the clockwork universe Newtonian physics had promised. According to the determinists, if one could reveal the laws that governed matter, one could reach back to the most archaic past and predict the most distant future. If everything that occurred was the direct consequence of a prior state, then merely by looking at the present and running the equations it would be possible to achieve a godlike knowledge of the universe. These hopes were shattered in light of Heisenberg’s discovery: what was beyond our grasp was neither the future nor the past, but the present itself. Not even the state of one miserable particle could be perfectly apprehended. However much we scrutinized the fundamentals, there would always be something vague, undetermined, uncertain, as if reality allowed us to perceive the world with crystalline clarity with one eye at a time, but never with both.
Labatut, Benjamín. When We Cease to Understand the World. New York Review of Books, 2020. p. 161–2
Carney on the carbon bubble and stranded assets
By some measures, based on science, the scale of the energy revolution required is staggering.
If we had started in 2000, we could have hit the 1.5°C objective by halving emissions every thirty years. Now, we must halve emissions every ten years. If we wait another four years, the challenge will be to halve emissions every year. If we wait another eight years, our 1.5°C carbon budget will be exhausted.
The entrepreneur and engineer Saul Griffith argues that the carbon-emitting properties of our committed physical capital mean that we are locked in to use up the residual carbon budget, even if no one buys another car with an internal combustion engine, installs a new gas-fired hot-water heater or, at a larger scale, constructs a new coal power plant. That’s because, just as we expect a new car to run for a decade or more, we expect our machines to be used until they are fully depreciated. If the committed emissions of all the machines over their useful lives will largely exhaust the 1.5°C carbon budget, going forward we will need almost all new machines, like cars, to be zero carbon. Currently, electric car sales, despite being one of the hottest segments of the market, are as a percentage in single digits. This implies that, if we are to meet society’s objective, there will be scrappage and stranded assets.
…
To meet the 1.5°C target, more than 80 per cent of current fossil fuel reserves (including three-quarters of coal, half of gas, one-third of oil) would need to stay in the ground, stranding these assets. The equivalent for less than 2°C is about 60 per cent of fossil fuel assets staying in the ground (where they would no longer be assets).
When I mentioned the prospect of stranded assets in a speech in 2015, it was met with howls of outrage from the industry. That was in part because many had refused to perform the basic reconciliation between the objectives society had agreed in Paris (keeping temperature increases below 2°C), the carbon budgets science estimated were necessary to achieve them and the consequences this had for fossil fuel extraction. They couldn’t, or wouldn’t, undertake the basic calculations that a teenager, Greta Thunberg, would easily master and powerfully project. Now recognition is growing, even in the oil and gas industry, that some fossil fuel assets will be stranded — although, as we shall see later in the chapter, pricing in financial markets remains wholly inconsistent with the transition.
Carney, Mark. Value(s): Building a Better World for All. Penguin Random House Canada, 2021. p. 273–4, 278
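The arithmetic behind those shrinking halving times can be sketched with a toy model: if annual emissions decay exponentially, halving every T years, total future emissions come to E₀ · T / ln 2. A minimal sketch of that calculation, using an illustrative starting rate of roughly 40 GtCO₂ per year (my assumption for illustration, not Carney’s figure):

```python
import math

def cumulative_emissions(e0, halving_years):
    """Total future emissions (Gt) if annual emissions start at e0 Gt/yr
    and decay exponentially, halving every `halving_years` years.
    Integral of e0 * 2^(-t/T) from t=0 to infinity = e0 * T / ln(2)."""
    return e0 * halving_years / math.log(2)

e0 = 40  # GtCO2 per year, roughly current global emissions (illustrative)
for T in (30, 10, 1):
    total = cumulative_emissions(e0, T)
    print(f"halve every {T:2d} yr -> total future emissions {total:7.1f} GtCO2")
```

Because the total scales linearly with the halving time, starting from the same emissions rate with a thirty-year halving period burns thirty times the carbon of an annual halving, which is why each decade of delay forces a drastically steeper cut to stay inside the same fixed budget.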
Working on geoengineering and AI briefings
Last Christmas break, I wrote a detailed briefing on the existential risks to humanity from nuclear weapons.
This year I am starting two more: one on the risks from artificial intelligence, and one on the promises and perils of geoengineering, which I increasingly feel is emerging as our default response to climate change.
I have had a few geoengineering books in my book stacks for years, generally buried under the whaling books in the ‘too depressing to read’ zone. AI I have been learning a lot more about recently, including through Nick Bostrom and Toby Ord’s books and Robert Miles’s incredibly helpful YouTube series (based on Amodei et al.’s instructive paper).
Related re: geoengineering:
- We are sliding toward geoengineering
- Planting trees won’t solve climate change
- Open thread: shadow solutions to climate change
- Geoengineering via rock weathering
- CBC documentary on geoengineering
- Paths to geoengineering
- Who would control geoengineering?
- Ocean iron fertilization for geoengineering
- Ken Caldeira on geoengineering as contingency
- Geoengineering with lasers
- Dyson’s carbon eating trees
- Will technology save us?
- Geoengineering: wise to have a fallback option
Related re: AI:
- General artificial intelligences will be aliens
- Combinatorial math and the impossibility of rationality
- Discrimination by artificial intelligence
- Designing stoppable AIs
- Robots in agriculture
- AI + social networks + unscrupulous actors
- Automation and labour
- Ethics and autonomous robots in war
- The plausibility of driverless cars
- Increasingly clever machines
- Automation and the jobs of the future
- Googling the Cyborg
On the potential of superfast minds
The simplest example of speed superintelligence would be a whole brain emulation running on fast hardware. An emulation operating at a speed of ten thousand times that of a biological brain would be able to read a book in a few seconds and write a PhD thesis in an afternoon. With a speedup factor of a million, an emulation could accomplish an entire millennium of intellectual work in one working day.
To such a fast mind, events in the external world appear to unfold in slow motion. Suppose your mind ran at 10,000X. If your fleshy friend should happen to drop his teacup, you could watch the porcelain slowly descend toward the carpet over the course of several hours, like a comet silently gliding through space toward an assignation with a far-off planet; and, as the anticipation of the coming crash tardily propagates through the folds of your friend’s grey matter and from thence out to his peripheral nervous system, you could observe his body gradually assuming the aspect of a frozen oops—enough time for you not only to order a replacement cup but also to read a couple of scientific papers and take a nap.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. p. 64
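Bostrom’s millennium-in-a-day figure is easy to check: at a million-fold speedup, 1,000 subjective years compress into under nine wall-clock hours. A quick sketch (the only inputs are the standard year length and the speedups from the passage above):

```python
# How long does a given span of subjective work take in wall-clock time
# for a mind running at some speedup factor?
HOURS_PER_YEAR = 365.25 * 24  # ~8,766 hours in a year

def wall_clock_hours(subjective_years, speedup):
    """Wall-clock hours for `subjective_years` of work at `speedup`x."""
    return subjective_years * HOURS_PER_YEAR / speedup

print(wall_clock_hours(1000, 1_000_000))  # ~8.77 hours: one working day
```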
General artificial intelligences will be aliens
[A]rtificial intelligence need not much resemble a human mind. AIs could be—indeed, it is likely that most will be—extremely alien. We should expect that they will have very different cognitive architectures than biological intelligences, and in their early stages of development they have very different profiles of cognitive strengths and weaknesses (though, as we shall later argue, they could eventually overcome any initial weakness). Furthermore, the goal systems of AIs could diverge radically from those of human beings. There is no reason to expect a generic AI to be motivated by love or hate or pride or other such common human sentiments: these complex adaptations would require deliberate expensive effort to recreate in AIs. This is at once a big problem and a big opportunity.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. p. 35
41
While global conditions and humanity’s prospects for the future are disastrous, my own life has become a lot more stable and emotionally tolerable over the course of this past year of employment. The PhD did immense psychological damage to me. After a lifetime in a competitive education system in which I had done exceptionally well, the PhD tended to reinforce the conclusion that everything I did was bad and wrong, and that I had no control over what would happen to my life. I had serious fears about ever finding stable employment after that long and demoralizing time away from the job market (though still always working, to limit the financial damage from those extra years in school). Being out and employed — and even seeing shadows of other possibilities in the future — gives me a sense materially, psychologically, and physiologically of being able to rebuild and endure.
As noted in my pre-US-election post, having a stable home and income makes the disasters around the world seem less like personal catastrophes, though the general population are behaving foolishly when they assume that the 2020–60 period will bear any resemblance to the ‘normality’ of, say, the 1980–2020 period. Of course, there has been no such thing as intergenerational stability or normality since the Industrial Revolution; after centuries where many lives remained broadly similar, the world is now transforming every generation or faster. In the 20th century, much of that change was about technological deployment. In the years ahead, ecological disruption will be a bigger part of the story — along with the technological, sociological, and political convulsions which will accompany the collapse of systems that have supported our civilization for eons.
My own answer to living through a time of catastrophe — in many ways, literally an apocalypse and the end of humanity, as we are all thrown into a post-human future where technology and biology fuse together — is to apply myself in doing my best in everything I undertake, whether that’s photographing a conference, making sandwiches for dinner, or advocating for climate stability and reduced nuclear weapon risks.
None of us can control the world. A huge dark comet could wipe us out tomorrow. A supervolcano or a coronal mass ejection from the sun could abruptly knock us into a nuclear-winter-like world or a world where all our technology gets broken simultaneously, stopping the farm-to-citizens conveyor belt that keeps us alive. There are frighteningly grounded descriptions of how a nuclear war could throw us all into the dark simultaneously, perhaps unable to resume long-distance contact with others for months or years.
It really could happen all of a sudden, with no opportunities for takesies-backsies or improving our resilience after the fact. We live in a world on a precipice, so all we can do is share our gratitude, appreciation, and esteem with those who have enriched our lives while it is possible to do so, while retaining our determination to keep fighting for a better world, despite our species’ manifest inabilities and pathologies.
Worms or moles
It is not hyperbole to make the statement [that] if humans ever reside on the Moon, they will have to live like ants, earthworms or moles. The same is true for all round celestial bodies without a significant atmosphere or magnetic field—Mars included. —Dr. James Logan, Former NASA Chief of Flight Medicine and Chief of Medical Operations at Johnson Space Center.
Weinersmith, Kelly and Zach Weinersmith. A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through? Penguin Random House, 2023. p. 192 ([that] in Weinersmith and Weinersmith)