On the potential of superfast minds

The simplest example of speed superintelligence would be a whole brain emulation running on fast hardware. An emulation operating at a speed of ten thousand times that of a biological brain would be able to read a book in a few seconds and write a PhD thesis in an afternoon. With a speedup factor of a million, an emulation could accomplish an entire millennium of intellectual work in one working day.

To such a fast mind, events in the external world appear to unfold in slow motion. Suppose your mind ran at 10,000×. If your fleshy friend should happen to drop his teacup, you could watch the porcelain slowly descend toward the carpet over the course of several hours, like a comet silently gliding through space toward an assignation with a far-off planet; and, as the anticipation of the coming crash tardily propagates through the folds of your friend’s grey matter and from thence out to his peripheral nervous system, you could observe his body gradually assuming the aspect of a frozen oops—enough time for you not only to order a replacement cup but also to read a couple of scientific papers and take a nap.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. p. 64
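
The arithmetic behind these figures is easy to check. A minimal sketch in Python, assuming an 8-hour working day and a roughly one-meter teacup drop (both numbers are my assumptions, not Bostrom's):

```python
# Back-of-the-envelope check of the speedup arithmetic in the passage above.
# Assumptions (mine, not Bostrom's): an 8-hour working day; a ~1 m teacup drop.

HOURS_PER_YEAR = 24 * 365

def subjective_years(speedup, wall_clock_hours):
    """Years of subjective thinking time packed into the given wall-clock hours."""
    return speedup * wall_clock_hours / HOURS_PER_YEAR

print(subjective_years(1_000_000, 8))   # ~913 years: roughly a millennium per working day

# The teacup at a 10,000x speedup: a ~1 m drop takes sqrt(2h/g) ~= 0.45 s of real time.
real_fall_s = (2 * 1.0 / 9.8) ** 0.5
print(10_000 * real_fall_s / 3600)      # ~1.3 subjective hours of porcelain-watching
```

By this estimate a one-meter drop stretches to a bit over a subjective hour at 10,000×; Bostrom's "several hours" needs a taller shelf or a faster mind, but the order of magnitude holds.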

General artificial intelligences will be aliens

[A]rtificial intelligence need not much resemble a human mind. AIs could be—indeed, it is likely that most will be—extremely alien. We should expect that they will have very different cognitive architectures than biological intelligences, and in their early stages of development they will have very different profiles of cognitive strengths and weaknesses (though, as we shall later argue, they could eventually overcome any initial weakness). Furthermore, the goal systems of AIs could diverge radically from those of human beings. There is no reason to expect a generic AI to be motivated by love or hate or pride or other such common human sentiments: these complex adaptations would require deliberate expensive effort to recreate in AIs. This is at once a big problem and a big opportunity.

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014. p. 35

Caring and the need to preserve the status quo

It strikes me that recognizing that a great deal of work is not strictly productive but caring, and that there is always a caring aspect even to the most apparently impersonal work, does suggest one reason why it’s so difficult to create a different society with a different set of rules. Even if we don’t like what the world looks like, the fact remains that the conscious aim of most of our actions, productive or otherwise, is to do well by others; often, very specific others. Our actions are caught up in relations of caring. But most caring relations require we leave the world more or less as we found it. In the same way that teenage idealists regularly abandon their dreams of creating a better world and come to accept the compromises of adult life at precisely the moment they marry and have children, caring for others, especially over the long term, requires maintaining a world that’s relatively predictable as the grounds on which caring can take place. One cannot save to ensure a college education for one’s children unless one is sure in twenty years there will still be colleges—or for that matter, money. And that, in turn, means that love for others—people, animals, landscapes—regularly requires the maintenance of institutional structures one might otherwise despise.

Graeber, David. Bullshit Jobs: A Theory. Simon & Schuster, 2018. p. 219

Related:

The obligation for cheerful dishonesty in much of working life

Here, though, I want to focus on what students forced into these make-work jobs actually learn from them—lessons that they do not learn from more traditional student occupations and pursuits such as studying for tests, planning parties, and so on. Even judging by Brendan’s and Patrick’s accounts (and I could easily reference many others), I think we can conclude that from these jobs, students learn at least five things:

  1. how to operate under others’ direct supervision;
  2. how to pretend to work even when nothing needs to be done;
  3. that one is not paid money to do things, however useful or important, that one actually enjoys;
  4. that one is paid money to do things that are in no way useful or important and that one does not enjoy; and
  5. that at least in jobs requiring interaction with the public, even when one is being paid to carry out tasks that one does not enjoy, one also has to pretend to be enjoying it.

Yet at the same time, it is precisely the make-believe aspect of their work that student workers like Patrick and Brendan find the most infuriating—indeed, that just about anyone who’s ever had a wage-labor job that was closely supervised finds the most maddening aspect of the job. Working serves a purpose, or is meant to do so. Being forced to pretend to work just for the sake of working is an indignity, since the demand is perceived—rightly—as the pure exercise of power for its own sake. If make-believe play is the purest expression of human freedom, make-believe work imposed by others is the purest expression of lack of freedom. It’s not entirely surprising, then, that the first historical evidence we have for the notion that certain categories of people really ought to be working at all times, even if there’s nothing to do, and that work needs to be made to fill their time, even if there’s nothing that really needs doing, refers to people who are not free: prisoners and slaves, two categories that historically have largely overlapped.

Of course, we learned our lesson: if you’re on the clock, do not be too efficient. You will not be rewarded, not even by a gruff nod of acknowledgement (which is all we were really expecting). Instead, you’ll be punished with meaningless busywork. And being forced to pretend to work, we discovered, was the most absolute indignity—because it was impossible to pretend it was anything but what it was: pure degradation, a sheer exercise of the boss’s power for its own sake. It didn’t matter that we were only pretending to scrub the baseboard. Every moment spent pretending to scour the baseboard felt like some schoolyard bully gloating at us over our shoulders—except, of course, this time, the bully had the full force of law and custom on his side.

Graeber, David. Bullshit Jobs: A Theory. Simon & Schuster, 2018. p. 86, 92, 99

Self-deception prevents learning

A deliberate deception (misleading one’s colleagues, or a patient, or a boss) has at least one clear benefit. The person doing the deceiving will, by definition, recognize the deceit and will inwardly acknowledge the failure. Perhaps he will amend the way he does his job to avoid such a failure in the future. Self-justification is more insidious. Lying to oneself destroys the very possibility of learning. How can one learn from failure if one has convinced oneself—through endlessly subtle means of self-justification, narrative manipulation, and the wider psychological arsenal of dissonance-reduction—that a failure didn’t actually occur?

Syed, Matthew. Black Box Thinking: Why Most People Never Learn from Their Mistakes—But Some Do. Portfolio, 2015.

Conformity versus competence

[I]n most hierarchies, super-competence is more objectionable than incompetence.

Ordinary incompetence, as we have seen, is no cause for dismissal: it is simply a bar to promotion. Super-competence often leads to dismissal, because it disrupts the hierarchy, and thereby violates the first commandment of hierarchal life: the hierarchy must be preserved.

Employees in the two extreme classes—the super-competent and the super-incompetent—are alike subject to dismissal. They are usually fired soon after being hired, for the same reason: that they tend to disrupt the hierarchy.

Peter, Laurence J. and Hull, Raymond. The Peter Principle. Buccaneer Books, 1969. p. 45-6

Related: Whose agenda are you devoted to?

Combinatorial math and the impossibility of rationality

A perfectly rational entity maximizes the expected satisfaction of its preferences over all possible future lives it could choose to lead. I cannot begin to write down a number that describes the complexity of this decision problem, but I find the following thought experiment helpful. First, note that the number of motor control choices that a human makes in a lifetime is about twenty trillion… Next, let’s see how far brute force will get us with the aid of Seth Lloyd’s ultimate-physics laptop, which is one billion trillion trillion times faster than the world’s fastest computer. We’ll give it the task of enumerating all possible sequences of English words (perhaps as a warmup for Jorge Luis Borges’s Library of Babel), and we’ll let it run for a year. How long are the sequences that it can enumerate in that time? A thousand pages of text? A million pages? No. Eleven words. This tells you something about the difficulty of designing the best possible life of twenty trillion actions. In short, we are much further from being rational than a slug is from overtaking the starship Enterprise traveling at warp nine. We have absolutely no idea what a rationally chosen life would be like.

Russell, Stuart. Human Compatible: Artificial Intelligence and the Problem of Control. Viking, 2019. p. 232 (italics in original)

Related: How many unique English tweets are possible? How long would it take for the population of the world to read them all out loud?
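
Russell's eleven-word figure is reproducible in a few lines of Python. The inputs are my assumptions, not his: a fastest-supercomputer speed of about 1e17 operations per second (roughly right for 2019), an English vocabulary of about 100,000 words, and a generous cost of one operation per enumerated sequence:

```python
import math

# Reproduce the "eleven words" estimate under the stated assumptions.
OPS_PER_SECOND = 1e17 * 1e33         # assumed fastest computer x Lloyd's 1e33 factor
SECONDS_PER_YEAR = 3600 * 24 * 365
VOCAB = 1e5                          # assumed English vocabulary size

budget = OPS_PER_SECOND * SECONDS_PER_YEAR   # sequences enumerable in a year (~3e57)

# Longest n such that VOCAB**n <= budget, i.e. n <= log(budget) / log(VOCAB).
n = math.floor(math.log(budget) / math.log(VOCAB))
print(n)  # 11
```

The count of all sequences up to length n is dominated by the longest length, so VOCAB**n <= budget is the binding constraint, and even a tenfold error in the vocabulary size moves the answer by only a few words.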

Game theory and the limits of reason

I myself suffer from a morbid sense of despair, and even now, decades after I worked with von Neumann, I still find myself questioning our central tenet: Is there really a rational course of action in every situation? Johnny proved it mathematically beyond a doubt, but only for two players with diametrically opposing goals. So there may be a vital flaw in our reasoning that any keen observer will immediately become aware of; namely, that the minimax theorem that underlies our entire framework presupposes perfectly rational and logical agents, agents who are interested only in winning, agents who possess a perfect understanding of the rules and a total recall of all their past moves, agents who also have a flawless awareness of the possible ramifications of their own actions, and of their opponents’ actions, at every single step of the game. The only person I ever met who was exactly like that was Johnny von Neumann. Normal people are not like that at all. Yes, they lie, they cheat, deceive, connive, and conspire, but they also cooperate, they can sacrifice themselves for others, or simply make decisions on a whim. Men and women follow their guts. They heed hunches and make careless mistakes. Life is so much more than a game. Its full wealth and complexity cannot be captured by equations, no matter how beautiful or perfectly balanced. And human beings are not the perfect poker players that we envisioned. They can be highly irrational, driven and swayed by their emotions, subject to all kinds of contradictions. And while this sparks off all the ungovernable chaos that we see all around us, it is also a mercy, a strange angel that protects us from the mad dreams of reason.

Labatut, Benjamin. The MANIAC. Penguin Random House, 2023. p. 144-5. (italics in original)

Reading Kahneman’s Thinking, Fast and Slow recently, I was struck at several points by what seemed like an unjustified assumption that people are competent at mental arithmetic: specifically, that you can hand a person a list of probabilities and payouts and then find it legitimately surprising that they can’t or don’t pick the best one. For people constantly immersed in calculation this may be puzzling, but I have personal experience of highly intelligent and knowledgeable people struggling to calculate (or being unwilling even to try) what a certain percentage of a number is, such as for a tip. Studies on the numerical literacy of the general public reveal a worrisome inability to properly gauge millions against billions.

When mathematicians, logicians, and game theorists forget that much of the population cannot or will not calculate, they miss the obvious cause of deviations from their predictions and theories.
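
One way to make that gap concrete (the check amount and tip rate below are arbitrary examples of mine):

```python
# A million seconds versus a billion seconds:
print(1e6 / (3600 * 24))           # ~11.6 days
print(1e9 / (3600 * 24 * 365))     # ~31.7 years

# The kind of tip arithmetic that stalls people: 18% of a $47.50 check.
print(0.18 * 47.50)                # 8.55
```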

Cleese for the record

But in other areas I was becoming less diffident—or, in St. Peter’s parlance, less “wet.” Indeed, on one occasion, I actually got into a fight with a boy who was teasing me. There I was, lying on the floor, grappling with him, like a proper schoolboy; I even banged his head on the floor, at which point I thought, “Oh my God! If I start losing, he’ll do this to me,” and then, of course, started losing. Fortunately my form master, Mr. Howdle, arrived and broke the fight up. Funnily enough, it was about then that the bullying stopped. This first fight also proved to be my last. I had thought so, anyway, until I read in the Sunday Times recently that I had a fight with Terry Gilliam in the ’80s. I think this is unlikely: owing to the relatively rare occurrence of fisticuffs in the Cleese life it must be statistically probable that I would remember such uncommon events; they would tend to stand out sharply from the rather less pugilistic tone of the rest of my life. And I definitely don’t recall having a fight with Terry Gilliam. May I also point out that if I had, I would almost certainly have killed him. I think the only possible explanation for the Sunday Times article—if it was true—was that Terry attacked me, but that I failed to notice he was doing so. Terry is very short, due to his bandy legs, so when he scuttles around, he stays so close to the floor that it can be difficult to see what he is up to down there.

Cleese, John. So, Anyway… Penguin Random House, 2014. p. 43 (italics in original)