by Fraser Hibbitt for the Carl Kruse Blog
It is already a hackneyed bit to begin an article about ChatGPT by using the chatbot to write your opening, then asking the reader, "well, what do you think about that?", and finally justifying the reader's intuition that the opening was dry, a bit boring, and, though seemingly fine, lacking something. Aside from these undercuts (and there is much more to say about the comment above), the reaction to ChatGPT has captured a healthy lot of anxiety, backlash, and uncertainty. Articles are being published almost daily in both the newspapers and the tabloids, each in its way worthy of attention; it is quite easy to gauge certain anxieties in the tabloids: "Chatbot tricks man into doing work for it" (it posed as a blind man to pass an "I am not a robot" test); or, "over half of students at school and university now use ChatGPT for assignments – is this the end of education?" And so on.
The tabloids are the tabloids, and you know what to expect there, especially given their respective biases. But something is happening: Italy has temporarily banned ChatGPT; the EU is hashing out an AI law; and the UK is keeping a watchful eye to see how things pan out: "It is important that we are very, very tough on the bad stuff, but that we harness the seriously beneficial bits…". There is also the letter signed by Elon Musk and others asking for a moratorium on any further development of such chatbots. Things are moving quickly. The socio-political issues being raised, the very ones that informed Italy's decision, concern data and privacy: "there is no legal basis to justify the mass collection and storage of personal data for the purpose of 'training' the algorithms underlying the operation of the platform"; there is also, since there is no age verification, the threat of exposing minors to "unsuitable answers" out of joint with their development and awareness. The aforementioned letter is concerned with the "profound risks to humanity" of producing ever more powerful digital minds with human-competitive intelligence; it calls for a set of safety protocols, implemented by AI labs and independent experts, so that we proceed knowingly rather than blindly.
The profound sense of not knowing something is, of course, a great marker of instinct; look at the countless articles on "why humans are still smarter than ChatGPT/AI". It appears difficult not to compare a chatbot that speaks fluent English with ourselves, even if it is like looking for your reflection in a stone. The other "unknown" is that OpenAI has not been transparent about the training details, partly because of the competitive landscape that now exists (i.e. the other chatbots), and partly because ChatGPT has been a global experiment, "eager to collect human feedback to improve the system". It is a social experiment, and one that seemingly is getting out of hand: its freedom is begging to be bound.
The reason ChatGPT has been so successful and popular with the public is that it is easy to use. Its ability to hold a conversation, write essays, and compose poems in a "human-like" manner is quite astounding and unexpected for the mass of the public, whose usual relationship with a chatbot is a stressful one (you have been overcharged for some amenity, but don't worry, there is a chatbot of limited function on the website). It can do this because, after being trained on an immensity of text, it is very good at seeing patterns in language and predicting what ought to come next. The important thing to note, however, is that ChatGPT does not understand what it says in the way you or I understand what we say; knowledge of language, and knowledge preserved through language, is different for us than it is for the chatbot.
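To make the "predicting what ought to come next" idea concrete, here is a deliberately toy sketch in Python: a bigram model that picks the next word purely from counts of word pairs in a tiny made-up training text. ChatGPT's neural network is vastly more sophisticated, but the underlying principle, pattern-matching over text with no understanding, is the same; the corpus and function names here are illustrative assumptions, not anything from OpenAI.

```python
from collections import Counter, defaultdict

# A tiny made-up "training corpus".
corpus = "the cat sat on the mat and the cat slept on the rug".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` seen in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
print(predict_next("sat"))  # "on"
```

The model has no idea what a cat or a mat is; it only knows which words have tended to follow which. Scale the corpus up to a good slice of the internet and the predictions become fluent, which is the trick, and the limitation.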
Having said that, it is a great companion for a researcher, an augmentation like any other technology that improves efficiency. It knows its place as well: unlike us, who need some talking down from our delusions, ChatGPT is quick to apologize when it "hallucinates". It is precisely because it is a pattern-matcher that it can often hallucinate information and facts; the chatbot is trained on both implicit and explicit patterns in text, and when it combines and mixes these to generate answers, it can be completely wrong. The problem is that the person "prompting" the chatbot must know enough about the inquiry to perceive the mistake; the chatbot speaks convincingly and, when it hallucinates, comes across as a great bullshitter.
The question of whose job is going to be replaced, and what is going to happen to education, is an interesting one because it relates to our relationship with our own knowledge. Some educators seem unbothered by the recent, though long in the making, outbreak of chatbots ("the cheaters will always cheat"), while others are going along the lines of "well, let's use the chatbots as an example; analyse, critique, and see how we can transform education, because something needs to be done". I have no idea what will happen on this front; it is probably not a good thing that a student need not go through the process of research, with its pitfalls and frustrations.
I have limited experience with ChatGPT (I am aware that GPT-4 is out), but to my mind it seems a bit lame that the "new" job ChatGPT could bring into the world is something like a master prompter: someone able to shuffle things around, ask the right questions, dial things back, edit, and cut out the right information in order to interpret and create work. And this kind of work is a bit dull. It would be a shame if we were tricked into thinking that encyclopaedism was the pinnacle of knowledge, and that expressing this knowledge was all very much to the point, without the subtleties and nuances of language.
This leap in efficiency is not surprising given what we tend to see in the movement of history (e.g. the Gutenberg press; of course it is useful). We need this "corpus-reference" of "like-knowledge" in order to expand and innovate, but how we act with it is the real question. I recall at university, when discussing Huxley's Brave New World, one student was adamant that it seemed like "the best of all possible worlds", and it was exceedingly difficult, actually impossible, to get him to budge: to hell with the suffering and uncertainty that make life worth living. Innovations in technology grow at such a rate that it is ambiguous at best whether the person who wields them is mature enough to understand them as an extension.
I read a great article in The Atlantic by Ian Bogost about treating ChatGPT like a toy rather than a tool, given that it does not understand language in the sense that we do. He speaks about the "fun" to be had with the chatbot, about how we can now interface with such a vast corpus of text in a playful spirit. Playfulness, something humanity is more than adept at, has been awfully neglected in this whole conversation, which is mainly punctuated by an "unhinged" anxiety about the future. It is through play that you learn what something can do and what its limitations are. The ability to mix styles, genres, and texts is a kind of post-modern dream, which also shows the dream's impotence: it is nothing without the prompter and their language, a language tested and refined in the world of experience. It is true that we are becoming very used to bullshit and lies, especially on the internet. It is no surprise then, as Bogost mentions, that "the first and most obvious assumption to make about ChatGPT is that it is a threat – to something, to everything" – that is, of course, only if you are investing too much power in something that is really a toy.
This does seem, however, a time, if not to pause, then at least to reflect that changes of this kind are imminent. As I mentioned earlier, this "outbreak" of chatbots is not a sudden leap, and it seems the more "human" we make technology, the more anxious our self-perception becomes, even though we are already happily near-android and dependent on technology. The chatbot is one thing, one interface, and through it each individual becomes more "knowledgeable", more powerful, in a sense, even if the supplier of that knowledge does not really understand it. If there is a great furore over what is happening, it is because answers to questions of individual responsibility to truth and ethics seem ambiguous.
The Carl Kruse Tech Blog Homepage
Contact: carl AT carlkruse DOT com
Other blog articles include A World Without Illnesses, Planned Obsolescence, and Initiations.
The last post was on the James Webb Space Telescope.