Little good ever comes from discussing climate change or nuclear weapons socially

Our social world is ruled by the affect heuristic: what feels good seems true, and what feels bad we distance ourselves from and reject. We judge what’s true or false based on whether it makes us feel good or bad.

I think I’m going to stop talking to people socially about nuclear weapons and climate change.

Almost always, what the other person really wants is reassurance that their future will be OK and that the choices they are making are OK.

The conversation tends to become a cross-examination where they look for a way to dismiss me in order to protect their hopefulness and view of themselves as a good person. It’s a bit like how people feel compelled to tell me how particularly important or moral (or not enjoyed) their air travel plans are, as though I am a religious authority who can forgive them. “Confess and be forgiven” is a cheerful motto for those who refuse to change their behaviour.

These conversations tend to be miserable for both sides: for them because they are presented with evidence for why they really should be fearful, when they fervently want the opposite, and for me because it just leads to more alienation to see how utterly unwilling people are to even face the problem, much less take any commensurate action. If I am convincing and give good evidence, it makes things worse for both: for them because they are getting anxious instead of reassured and for me because it reinforces how little relationship there is between evidence and human decision-making.

It is also a fundamental error to think that if a person believes that a problem is serious and that you are working on it, they will support you. You might think the chain of logic would be “the person seems to be working on a problem which I consider real and important, so I will support them at least conversationally if not materially” when it is much more often “this person is talking about something that makes me feel bad, so I will find a way to believe that they are wrong or what they are saying is irrelevant”. The desire to feel good about ourselves and the world quickly and reliably trumps whatever desire we may have to believe true things or act in a manner consistent with our beliefs.

It seems smarter, going forward, just to say that I won’t discuss these subjects and that whatever work I am doing on them is secret.

It’s crucial when setting such boundaries to refuse to debate or justify them. Let people through that crack, and it’s sure to become another affect-driven argument about how they prefer to imagine their future as stable, safe, and prosperous and their own conduct as wise and moral — with me cast as the meanie squashing their joys.

Related:

On the potential of superfast minds

The simplest example of speed superintelligence would be a whole brain emulation running on fast hardware. An emulation operating at a speed of ten thousand times that of a biological brain would be able to read a book in a few seconds and write a PhD thesis in an afternoon. With a speedup factor of a million, an emulation could accomplish an entire millennium of intellectual work in one working day.

To such a fast mind, events in the external world appear to unfold in slow motion. Suppose your mind ran at 10,000X. If your fleshy friend should happen to drop his teacup, you could watch the porcelain slowly descend toward the carpet over the course of several hours, like a comet silently gliding through space toward an assignation with a far-off planet; and, as the anticipation of the coming crash tardily propagates through the folds of your friend’s grey matter and from thence out to his peripheral nervous system, you could observe his body gradually assuming the aspect of a frozen oops—enough time for you not only to order a replacement cup but also to read a couple of scientific papers and take a nap.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. p. 64
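Bostrom’s speedup arithmetic is easy to verify. A minimal sketch, assuming an eight-hour working day and a one-metre teacup drop (both assumptions are mine, not specified in the text):

```python
import math

# At a 1,000,000x speedup, one 8-hour working day of real time
# corresponds to this many years of subjective time:
subjective_years = 1_000_000 * 8 / (24 * 365)
print(f"{subjective_years:.0f} subjective years")  # ~913, roughly a millennium

# At a 10,000x speedup, a teacup free-falling 1 m (~0.45 s of real time)
# is experienced as roughly this many hours:
fall_seconds = math.sqrt(2 * 1.0 / 9.81)
print(f"{fall_seconds * 10_000 / 3600:.1f} subjective hours")  # ~1.3
```

The teacup figure is closer to an hour than to Bostrom’s “several hours,” but the order of magnitude holds: for a 10,000x mind, sub-second events stretch into hours.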

General artificial intelligences will be aliens

[A]rtificial intelligence need not much resemble a human mind. AIs could be—indeed, it is likely that most will be—extremely alien. We should expect that they will have very different cognitive architectures than biological intelligences, and in their early stages of development they have very different profiles of cognitive strengths and weaknesses (though, as we shall later argue, they could eventually overcome any initial weakness). Furthermore, the goal systems of AIs could diverge radically from those of human beings. There is no reason to expect a generic AI to be motivated by love or hate or pride or other such common human sentiments: these complex adaptations would require deliberate expensive effort to recreate in AIs. This is at once a big problem and a big opportunity.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. p. 35

Caring and the need to preserve the status quo

It strikes me that recognizing that a great deal of work is not strictly productive but caring, and that there is always a caring aspect even to the most apparently impersonal work, does suggest one reason why it’s so difficult to create a different society with a different set of rules. Even if we don’t like what the world looks like, the fact remains that the conscious aim of most of our actions, productive or otherwise, is to do well by others; often, very specific others. Our actions are caught up in relations of caring. But most caring relations require we leave the world more or less as we found it. In the same way that teenage idealists regularly abandon their dreams of creating a better world and come to accept the compromises of adult life at precisely the moment they marry and have children, caring for others, especially over the long term, requires maintaining a world that’s relatively predictable as the grounds on which caring can take place. One cannot save to ensure a college education for one’s children unless one is sure in twenty years there will still be colleges—or for that matter, money. And that, in turn, means that love for others—people, animals, landscapes—regularly requires the maintenance of institutional structures one might otherwise despise.

Graeber, David. Bullshit Jobs: A Theory. New York: Simon & Schuster, 2018. p. 219

Related:

The obligation for cheerful dishonesty in much of working life

Here, though, I want to focus on what students forced into these make-work jobs actually learn from them—lessons that they do not learn from more traditional student occupations and pursuits such as studying for tests, planning parties, and so on. Even judging by Brendan’s and Patrick’s accounts (and I could easily reference many others), I think we can conclude that from these jobs, students learn at least five things:

  1. how to operate under others’ direct supervision;
  2. how to pretend to work even when nothing needs to be done;
  3. that one is not paid money to do things, however useful or important, that one actually enjoys;
  4. that one is paid money to do things that are in no way useful or important and that one does not enjoy; and
  5. that at least in jobs requiring interaction with the public, even when one is being paid to carry out tasks that one does not enjoy, one also has to pretend to be enjoying it.

Yet at the same time, it is precisely the make-believe aspect of their work that student workers like Patrick and Brendan find the most infuriating—indeed, that just about anyone who’s ever had a wage-labor job that was closely supervised finds the most maddening aspect of the job. Working serves a purpose, or is meant to do so. Being forced to pretend to work just for the sake of working is an indignity, since the demand is perceived—rightly—as the pure exercise of power for its own sake. If make-believe play is the purest expression of human freedom, make-believe work imposed by others is the purest expression of lack of freedom. It’s not entirely surprising, then, that the first historical evidence we have for the notion that certain categories of people really ought to be working at all times, even if there’s nothing to do, and that work needs to be made to fill their time, even if there’s nothing that really needs doing, refers to people who are not free: prisoners and slaves, two categories that historically have largely overlapped.

Of course, we learned our lesson: if you’re on the clock, do not be too efficient. You will not be rewarded, not even by a gruff nod of acknowledgement (which is all we were really expecting). Instead, you’ll be punished with meaningless busywork. And being forced to pretend to work, we discovered, was the most absolute indignity—because it was impossible to pretend it was anything but what it was: pure degradation, a sheer exercise of the boss’s power for its own sake. It didn’t matter that we were only pretending to scrub the baseboard. Every moment spent pretending to scour the baseboard felt like some schoolyard bully gloating at us over our shoulders—except, of course, this time, the bully had the full force of law and custom on his side.

Graeber, David. Bullshit Jobs: A Theory. New York: Simon & Schuster, 2018. pp. 86, 92, 99

NotebookLM on CFFD scholarship

I would have expected that by now someone would have written a comparative analysis on pieces of scholarly writing on the Canadian campus fossil fuel divestment movement: for instance, engaging with both Joe Curnow’s 2017 dissertation and mine from 2022.

So, I gave both public texts to NotebookLM to have it generate an audio overview. It wrongly assumes that Joe Curnow is a man throughout, and mangles the pronunciation of “Ilnyckyj” in a few different ways — but at least it acts like it has read the texts and cares about their content.

It is certainly muddled in places (though perhaps in ways I have also seen in scholarly literature). For example, it treats the “enemy naming” strategy as something that arose through the functioning of CFFD campaigns, whereas it was really part of 350.org’s “campaign in a box” from the beginning.

This hints to me at how large language models are going to be transformative for writers. Finding an audience is hard, and finding an engaged audience willing to share their thoughts back is nigh-impossible, especially if you are dealing with scholarly texts hundreds of pages long. NotebookLM will happily read your whole blog and then have a conversation about your psychology and interpersonal style, or read an unfinished manuscript and provide detailed advice on how to move forward. The AI isn’t doing the writing, but providing a sort of sounding board which has never existed before: almost infinitely patient, and not inclined to make its comments all about its social relationship with the author.

I wonder what effect this sort of criticism will have on writing. Will it encourage people to hew more closely to the mainstream view, because the critique comes from a general-purpose LLM? Or will it help people dig ever deeper into a perspective that almost nobody shares, because the feedback comes from systems which are always artificially chirpy and positive, and because getting feedback this way removes real people from the process?

And, of course, what happens when the flawed output of these sorts of tools becomes public material that other tools are trained on?

41

While global conditions and humanity’s prospects for the future are disastrous, my own life has become a lot more stable and emotionally tolerable over the course of this past year of employment. The PhD did immense psychological damage to me. After a lifetime in a competitive education system in which I had done exceptionally well, the PhD tended to reinforce the conclusion that everything I did was bad and wrong, and that I had no control over what would happen to my life. I had serious fears about ever finding stable employment after that long and demoralizing time away from the job market (though still always working, to limit the financial damage from those extra years in school). Being out and employed — and even seeing shadows of other possibilities in the future — gives me a material, psychological, and physiological sense of being able to rebuild and endure.

As noted in my pre-US-election post, having a stable home and income makes the disasters around the world seem less like personal catastrophes, though the general population are behaving foolishly when they assume that the 2020–60 period will bear any resemblance to the ‘normality’ of, say, the 1980–2020 period. Of course, there has been no such thing as intergenerational stability or normality since the Industrial Revolution; after centuries where many lives remained broadly similar, the world is now transforming every generation or faster. In the 20th century, much of that change was about technological deployment. In the years ahead, ecological disruption will be a bigger part of the story — along with the technological, sociological, and political convulsions which will accompany the collapse of systems that have supported our civilization for eons.

My own answer to living through a time of catastrophe — in many ways, literally an apocalypse and the end of humanity, as we are all thrown into a post-human future where technology and biology fuse together — is to apply myself in doing my best in everything I undertake, whether that’s photographing a conference, making sandwiches for dinner, or advocating for climate stability and reduced nuclear weapon risks.

None of us can control the world. A huge dark comet could wipe us out tomorrow. A supervolcano or a coronal mass ejection from the sun could abruptly knock us into a nuclear-winter-like world or a world where all our technology gets broken simultaneously, stopping the farm-to-citizens conveyor belt that keeps us alive. There are frighteningly grounded descriptions of how a nuclear war could throw us all into the dark simultaneously, perhaps unable to resume long-distance contact with others for months or years.

It really could happen all of a sudden, with no opportunities for takesies-backsies or improving our resilience after the fact. We live in a world on a precipice, so all we can do is share our gratitude, appreciation, and esteem with those who have enriched our lives while it is possible to do so, while retaining our determination to keep fighting for a better world, despite our species’ manifest inabilities and pathologies.

Worms or moles

It is not hyperbole to make the statement [that] if humans ever reside on the Moon, they will have to live like ants, earthworms or moles. The same is true for all round celestial bodies without a significant atmosphere or magnetic field—Mars included. —Dr. James Logan, Former NASA Chief of Flight Medicine and Chief of Medical Operations at Johnson Space Center.

Weinersmith, Kelly, and Zach Weinersmith. A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through? Penguin Random House, 2023. p. 192 ([that] in Weinersmith and Weinersmith)