History belongs to future generations

I disagree with the fundamental notion inherent to the supposed “right to be forgotten”: the presumption that the main and most important purpose of documenting world events is to depict your life history in an autobiographical sense. My conviction is that history belongs not to the subjects whom it is about, but to the future generations who will need it to understand their own situations and solve their own problems. When we censor the record out of vanity, or even out of compassion for errors long atoned for, we may be denying something important to the future. We act as benefactors of future generations by preserving what ordered and comprehensible information may eventually survive from our era, and we should distort it as little as possible. The world is so complex that events are impossible to understand while they are happening. The accounts and records we preserve are the clay which, through careful work, historians may later turn into bricks. We should not pre-judge what they should find important or what they ought to hear.

The trace we each leave on the broader world during our brief lives matters to other people, and the importance of their being well informed to confront the unforeseeable but considerable challenges ahead outweighs our own interest in being remembered in as positive a light as possible, even when that requires omission or deception.

My hand-crafted text guarantee

Long ago, I turned off autocorrect on my phone. Sure, it would sometimes turn a typo into a properly typed word, saving me a second or two — but whenever it turned something correct or outside the dictionary into something wrong, it annoyed me enough to undo the value of hundreds of correct corrections.

Now the world is abuzz with ChatGPT and its ilk of so-called artificial intelligence that writes. Even people I know are excited about using it as a labour-saving device or for tedious tasks.

I will not.

While I have worked in a variety of positions, the common thread has been the centrality of writing. I am a writer first and foremost, though I have never held that formal job title, and it is important to me and to my readers that the sentences, paragraphs, and documents I produce come from my own mind and draw on my own ability to express a thought in a comprehensible way, as well as to imagine what impression it will make on the reader and adapt my language accordingly.

To call ChatGPT-style AIs stupid and likely to be wrong gives them far too much credit. Stupidity requires at least some intelligence, and even being wrong requires at least the slightest ability to distinguish true claims from false ones — an ability these systems lack entirely. A highly sophisticated parrot which regurgitates fragments of what it found online can be very convincing at imitating thinking, but it is a deceptive imitation and not the real thing. A ChatGPT-style AI will blithely repeat common falsehoods because all it is doing is telling you what sort of writing is probable in the world. At best, it gives you the wisdom of the crowd, and the whole basis of academic specialization, peer review, and editing at publishing houses is that serious texts should meet a much higher standard.

My pledge to people who read my writing — whether in academic papers, job applications, love letters, blog posts, books, text messages, or sky-writing — is that they can be confident it came from my own brain and was expressed using my own words and reasoning. I will never throw a bullet point into a text generator to expand it into a sentence or paragraph, or use an AI to automatically slim down or summarize what I have written.

My writing is hand-crafted and brain-crafted. In a world where there will be more and more suspicion that anything a person wrote was actually co-written by a parrot with godlike memory but zero understanding, I think that kind of guarantee will become increasingly valuable. Indeed, part of me feels like we ought to make an uncontaminated archive of what has been written up until about now, so we at least have a time capsule from before laziness drove a lot of us to outsource one of the most essential and important human activities (writing) to a tech firm’s distillation of the speculative and faulty babble online, or even some newer language model trained only with more credible texts.

It is also worth remembering that as ease-of-use leads language models to produce a torrent of new questionable content, the training sets for new models that use the internet as a data source will increasingly be contaminated by nonsense written earlier by other AIs.

Limits of ChatGPT

With the world discussing AI that writes, a recent post from Bret Devereaux at A Collection of Unmitigated Pedantry offers a useful corrective, both about how present-day large language models like GPT-3 and ChatGPT are far less intelligent and capable than naive users assume, and about how they pose less of a challenge to writing than feared.

I would say the key point to take away is that these systems are just a blender that mixes and matches words based on probability. They cannot understand the simplest thing, so their output will never be authoritative or credible without manual human checking. As mix-and-matchers, they can also never be original — only capable of emulating what is common in what they have already seen.
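To make the "probability blender" idea concrete, here is a toy sketch: a bigram model that can only ever emit word transitions it has already seen in its training text. This is far cruder than how GPT-style models actually work internally, and the corpus and function names are invented for illustration, but it captures the underlying principle of generating whatever is statistically probable rather than whatever is true.

```python
import random
from collections import defaultdict

# A tiny "training set" — the model can never say anything not built
# from word pairs that appear here.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which words have been seen following which.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start, length, seed=0):
    """Emit up to `length` words by repeatedly sampling a seen successor."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        choices = following.get(words[-1])
        if not choices:
            break  # dead end: no successor was ever observed
        words.append(rng.choice(choices))
    return " ".join(words)

print(generate("the", 8))
```

Every sentence this produces is fluent-sounding recombination of its inputs; nothing in the code checks whether any claim is right, which is the point.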

New podcast on the U of T divestment campaign from 2014 to 2016

Back in November, Amanda Harvey-Sánchez and Julia DaSilva released a podcast episode for Climate Justice Toronto about the first generation of fossil fuel divestment organizers at U of T. That episode covered from the inception of the campaign in 2012 until the People’s Climate March (PCM) in New York City in September 2014.

They have now released the second episode, which features Katie Krelove, Ben Donato-Woodger, Keara Lightning, and Ariel Martz-Oberlander, and which discusses the period from the PCM until president Meric Gertler’s rejection of divestment in March 2016.

Spam calls for papers on Academia.edu

In the last week or so, I have been deluged with “calls for papers” from a variety of similar-sounding so-called journals which I don’t think really exist or, if they do, are exceptionally scammy.

Keep an eye out for:

  • International Journal on Bioinformatics & Biosciences (IJBB)
  • Machine Learning and Applications: An International Journal (MLAIJ)
  • International Journal of Computer Science and Information Technology (IJCSIT)
  • International Journal of Microelectronics Engineering (IJME)
  • Advanced Energy: An International Journal (AEIJ)
  • International Journal on Cloud Computing: Services and Architecture (IJCCSA)
  • International Journal of Fuzzy Logic Systems (IJFLS)
  • Civil Engineering and Urban Planning: An International Journal (CiVEJ)
  • International Conference on Computer Networks & Communications (CCNET)
  • International Conference on Bioscience & Engineering

I have reported them all as spam to the platform, but I am not too hopeful about them taking action.

AI that writes

Several recent articles have described text-generating AIs like GPT-3 and ChatGPT.

I have been hearing for a while that the best/only way to deal with the enormous problem of plagiarism in university essays is to have more in-class exam essays and fewer take-home essays.

With huge classes and so many people admitted without strong English skills, it is already virtually impossible to tell the difference between students struggling to write anything cogent and some kind of automatic translation or re-working of someone else’s work. It’s already impossible to tell when students have bought essays, except maybe in the unlikely case that they only cheat on one and the person grading notices how it compares to the others. Even then, U of T only punishes people when they confess, and I have never seen a serious penalty. If we continue as we are now, I expect that a decent fraction of papers will be AI-written within a few years. (Sooner and worse if the university adopts AI grading!)

Podcast episode about the early U of T fossil fuel divestment campaign

The first episode of Amanda Harvey-Sánchez and Julia DaSilva’s podcast about the Toronto350.org / UofT350.org divestment campaign at the University of Toronto is online. This one features three organizers from the early campaign in 2012: me, Stu Basden, and Monica Resendes.

Poor DreamHost performance

I’m sorry this site is being so unreliable at the moment.

I have contacted DreamHost about the terrible load times and unstable performance, even though almost nobody reads this site. They sent complex instructions for back-end changes to how it is run, but I don’t have time to implement them now.

My hosting worked pretty well until recently, but now doing anything on the back-end dashboard side is painfully slow and often fails, as do readers’ attempts to post comments.

Meanwhile, it might help some people to know that a standard DreamHost hosting plan can’t handle even a couple of visitors per hour without back-end caching and optimization.
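To illustrate why back-end caching makes such a difference, here is a generic sketch (not DreamHost- or WordPress-specific; the render function and its 50 ms cost are invented for illustration): rebuilding a page from scratch on every request is slow, while serving a previously rendered copy is nearly free.

```python
import time
from functools import lru_cache

def render_page(slug):
    """Stand-in for a blog engine building a page from scratch
    (database queries, templates, plugins)."""
    time.sleep(0.05)  # simulate expensive back-end work
    return f"<html><body>Post: {slug}</body></html>"

@lru_cache(maxsize=128)
def render_page_cached(slug):
    """Serve a stored copy when one exists; only a cache miss
    triggers the expensive rebuild."""
    return render_page(slug)

start = time.perf_counter()
for _ in range(20):
    render_page_cached("front-page")  # only the first call rebuilds the page
elapsed = time.perf_counter() - start
print(f"20 requests in {elapsed:.2f}s")
```

Without the cache, 20 requests would cost about a second of simulated work; with it, only the first request does. Real hosting setups get the same effect from page-cache plugins or a reverse proxy in front of the server.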

Social media and the solitudes of left and right

I have seen a lot of discussion about Jonathan Haidt’s recent article in The Atlantic about how social media has broken US politics. It contains some important criticisms of the progressive left, as well as the authoritarian right — particularly about their demand that all speech and thought should conform to their ideological agenda.

On all sides there is a withdrawal from pluralism, the belief and practice that a diversity of political opinions is normal and desirable:

The former CIA analyst Martin Gurri predicted these fracturing effects in his 2014 book, The Revolt of the Public. Gurri’s analysis focused on the authority-subverting effects of information’s exponential growth, beginning with the internet in the 1990s. Writing nearly a decade ago, Gurri could already see the power of social media as a universal solvent, breaking down bonds and weakening institutions everywhere it reached. He noted that distributed networks “can protest and overthrow, but never govern.” He described the nihilism of the many protest movements of 2011 that organized mostly online and that, like Occupy Wall Street, demanded the destruction of existing institutions without offering an alternative vision of the future or an organization that could bring it about.

The “Hidden Tribes” study, by the pro-democracy group More in Common, surveyed 8,000 Americans in 2017 and 2018 and identified seven groups that shared beliefs and behaviors. The one furthest to the right, known as the “devoted conservatives,” comprised 6 percent of the U.S. population. The group furthest to the left, the “progressive activists,” comprised 8 percent of the population. The progressive activists were by far the most prolific group on social media: 70 percent had shared political content over the previous year. The devoted conservatives followed, at 56 percent.

The most reliable cure for confirmation bias is interaction with people who don’t share your beliefs. They confront you with counterevidence and counterargument. John Stuart Mill said, “He who knows only his own side of the case, knows little of that,” and he urged us to seek out conflicting views “from persons who actually believe them.” People who think differently and are willing to speak up if they disagree with you make you smarter, almost as if they are extensions of your own brain.

The “Hidden Tribes” study tells us that the “devoted conservatives” score highest on beliefs related to authoritarianism. They share a narrative in which America is eternally under threat from enemies outside and subversives within; they see life as a battle between patriots and traitors. According to the political scientist Karen Stenner, whose work the “Hidden Tribes” study drew upon, they are psychologically different from the larger group of “traditional conservatives” (19 percent of the population), who emphasize order, decorum, and slow rather than radical change.

As the world gets more destabilized, there is a trend of us all getting pushed into deeper solitudes, unable to even perceive how our own views and presuppositions relate to those of our fellow citizens. The article makes some suggestions for mechanisms to counter that, but it’s hard to imagine them (if they could even be implemented) counteracting the forces pushing us toward a politics of outrage.