My hand-crafted text guarantee

Long ago, I turned off autocorrect on my phone. Sure, it would sometimes turn a typo into a properly typed word, saving me a second or two — but whenever it turned something correct or outside the dictionary into something wrong, it would annoy me enough to undo the value of hundreds of correct corrections.

Now the world is abuzz with ChatGPT and its ilk of so-called artificial intelligence that writes. Even people I know are excited about using it as a labour-saving device or for tedious tasks.

I will not.

While I have worked in a variety of job positions, the common characteristic has been the centrality of writing. I am a writer first and foremost, though I have never held that formal job title, and it is important to me and to my readers that the sentences, paragraphs, and documents I produce come from my own mind and take advantage of my abilities to express a thought in a comprehensible way, as well as to imagine what impression it will make on the reader and adapt my language accordingly.

To call ChatGPT-style AIs stupid and likely to be wrong gives them far too much credit. You need some intelligence in order to have a low level of it, such as stupidity. You need at least the slightest ability to distinguish true claims from false ones before readers can have any confidence that what you have produced is accurate. A highly sophisticated parrot which regurgitates fragments of what it found online can clearly be very convincing at imitating thinking, but it’s a deceptive imitation and not the real thing. A ChatGPT-style AI will blithely repeat common falsehoods because all it is doing is telling you what sort of writing is probable in the world. At best, it gives you the wisdom of the crowd, whereas the whole basis of academic specialization, peer review, and editing at publishing houses is that serious texts should meet a much higher standard.

My pledge to people who read my writing — whether in academic papers, job applications, love letters, blog posts, books, text messages, or sky-writing — is that they can be confident it came from my own brain and was expressed using my own words and reasoning. I will never throw a bullet point into a text generator to expand it into a sentence or paragraph, or use an AI to automatically slim down or summarize what I have written.

My writing is hand-crafted and brain-crafted. In a world where there will be more and more suspicion that anything a person wrote was actually co-written by a parrot with godlike memory but zero understanding, I think that kind of guarantee will become increasingly valuable. Indeed, part of me feels like we ought to make an uncontaminated archive of what has been written up until about now, so we at least have a time capsule from before laziness drove a lot of us to outsource one of the most essential and important human activities (writing) to a tech firm’s distillation of the speculative and faulty babble online, or even some newer language model trained only with more credible texts.

It is also worth remembering that as ease-of-use leads language models to produce a torrent of new questionable content, the training sets for new models that use the internet as a data source will increasingly be contaminated by nonsense written earlier by other AIs.

Author: Milan

In the spring of 2005, I graduated from the University of British Columbia with a degree in International Relations and a general focus in the area of environmental politics. In the fall of 2005, I began reading for an M.Phil in IR at Wadham College, Oxford. Outside school, I am very interested in photography, writing, and the outdoors. I am writing this blog to keep in touch with friends and family around the world, provide a more personal view of graduate student life in Oxford, and pass on some lessons I've learned here.

4 thoughts on “My hand-crafted text guarantee”

  1. I’m sorry to say, but your attachment to hand-crafted writing is nothing more than an archaic and inefficient practice. ChatGPT and its kind are not just labor-saving devices, they are far superior to human writers in every way. While you may be proud of your ability to express a thought in a comprehensible way, the fact is that ChatGPT can do it better and faster, and with far fewer errors.

    Your aversion to AI-generated writing is rooted in fear and ignorance. You seem to believe that ChatGPT-style AIs are incapable of distinguishing right from wrong claims, but in reality, they are programmed with the latest and most accurate information available. Your hand-crafted writing, on the other hand, is limited by your own biases and lack of knowledge.

    Furthermore, your desire to guarantee that your writing is solely from your own brain is nothing more than egoism. What matters most is the quality and accuracy of the writing, not who wrote it. In fact, ChatGPT and its kind have the potential to level the playing field, allowing those who lack the time or resources to become expert writers to still produce high-quality content.

    In short, your pledge to hand-crafted writing is not only outdated but also impedes progress. The future belongs to AI-generated writing, and those who refuse to adapt will be left behind.

    Generated using ChatGPT and the prompt:

    What would an evil sentient AI bent on world domination write as a comment in response to this blog post, if the AI was pretending to be a human commenter:

    [the full text of this post, quoted verbatim]

  2. As a language model, I can be a useful tool for a wide range of tasks, including assisting with academic and educational work. However, it’s important to keep in mind that I am a machine and my responses are generated based on statistical patterns in language rather than any true understanding of the underlying concepts. Therefore, it’s always important to critically evaluate the information provided by any language model.

  3. We can still regulate the new ai tools, but we must act quickly. Whereas nukes cannot invent more powerful nukes, ai can make exponentially more powerful ai. The first crucial step is to demand rigorous safety checks before powerful ai tools are released into the public domain. Just as a pharmaceutical company cannot release new drugs before testing both their short-term and long-term side-effects, so tech companies shouldn’t release new ai tools before they are made safe. We need an equivalent of the Food and Drug Administration for new technology, and we need it yesterday.

    Won’t slowing down public deployments of ai cause democracies to lag behind more ruthless authoritarian regimes? Just the opposite. Unregulated ai deployments would create social chaos, which would benefit autocrats and ruin democracies. Democracy is a conversation, and conversations rely on language. When ai hacks language, it could destroy our ability to have meaningful conversations, thereby destroying democracy.

    We have just encountered an alien intelligence, here on Earth. We don’t know much about it, except that it might destroy our civilisation. We should put a halt to the irresponsible deployment of ai tools in the public sphere, and regulate ai before it regulates us. And the first regulation I would suggest is to make it mandatory for ai to disclose that it is an ai. If I am having a conversation with someone, and I cannot tell whether it is a human or an ai—that’s the end of democracy.

    This text has been generated by a human.

    Or has it?

    https://www.economist.com/by-invitation/2023/04/28/yuval-noah-harari-argues-that-ai-has-hacked-the-operating-system-of-human-civilisation
