Matthew Karas
5 min read · Feb 20, 2023

The Limitations of ChatGPT: Two Related Observations via
Wittgenstein on Meaning and Williams Syndrome

The world seems to be split between people who think that ChatGPT can do everything, and those who criticise it because it can’t.

Having worked on practical applications using generative neural networks, I do not want to detract from the view that ChatGPT and its competitors have achieved something genuinely extraordinary. My own favourite use of the system is to summarise topics, which might fill gaps in my education. Wikipedia is too thorough for complex topics, and Quora is too inconsistent. I ask it questions like “Why is Bell’s Theorem controversial”, and up pops a readable precis. If I then ask whether it can provide a simpler explanation, it does this remarkably well, removing jargon, in favour of ordinary language. Scientific topics work very well: there are thousands of online articles about them, and they are not very attractive targets to any but the very weirdest purveyors of fake news¹.

It does a better job than most human beings at the largely conversational tasks it is trained for. Earlier this week, on St Valentine’s Day, BBC Radio 4’s Today programme got ChatGPT to write a Shakespearean Sonnet. It was not as good as Shakespeare. Well, surprise surprise: patronising scoffing all round from BBC journalists and their guests. Apparently, it is news that an AI system, trained to generate conversations, cannot generate poetry as well as the greatest writer in the history of the English language.

It was a better sonnet than I could write. If the system had been optimised for sonnet-writing, many of the specific criticisms would almost certainly have been addressed. Remarkably, its training allowed it to generate something in the form of a sonnet at all, and even without further training, if further iterations had been requested to deal with specific criticisms, it might have improved. But this is all rather irrelevant.

It reminds me of the artful way in which DeepMind managed their PR around the Go Challenge. They did something remarkable — beating world-class human players. The detractors then said that learning from archives of historical games was cheating. So for version 2, they started from first principles with just the rules of Go, and trained a system to learn, merely by playing. That version also became capable of beating world-class players.

Many of the limitations of ChatGPT and similar systems are like this. Because open access is permitted, people find specific limitations, which usually fall outside its stated purpose and training.

These are not the limitations which interest me here.

It may or may not surprise readers to know that I failed all my essay papers at school (History, English Literature, Latin Set Books). At 15, I didn’t have the patience for reading or writing about the Russian Revolution, “Cider with Rosie” or “Gallic Wars: Volume VII”. Further, I didn’t have the patience to develop the skills needed to write long-form exam answers. I believe that one ChatGPT sceptic amongst my friends, Andrew Orlowski, has never struggled with writing prose, which might account for our divergent opinions.

I eventually redeemed myself, but only when I was pushing thirty, and doing a degree in philosophy. As you might already be aware, I can write wordy, turgid essays about things in which I am interested. My interest here is in my thesis that:

Being apparently articulate, on any subject, regardless of knowledge or understanding, is a valuable skill.

I want to provide some examples and ideas which support the assertion that, because this skill can exist in isolation from others, such as the ability to acquire knowledge or understanding, it should be valued in its own right, and not dismissed because it lacks seemingly related virtues on which it does not actually depend.

Williams Syndrome

Williams Syndrome is a genetic developmental condition with some counterintuitive qualities. It has many complex symptoms, and I do not pretend to have any deep knowledge of the subject². I apologise here if I have misunderstood what I have read. Perhaps those who know more might be kind enough to substitute a hypothetical condition with the qualities I describe.

The research is diverse and often contradictory, but this quotation sums up the qualities which got me interested:

…relative strengths in concrete vocabulary, phonological processing, and verbal short-term memory and relative weaknesses in relational/conceptual language, reading comprehension…

We often associate a child’s ability to speak confidently with strangers with high intellect. However, despite their relative social confidence and chattiness, children with Williams Syndrome tend to be average or below average at most aspects of schoolwork.

It is not merely that they have a gregarious disposition. Listening to recordings, it is clear that children with WS are actually good at communicating. The existence of the condition consolidated my intuition that the ability to chat is quite separate from the other mental and linguistic skills with which it is usually correlated. That is, if those other skills develop poorly, this does not entirely prevent the chatting skill from developing.

I see this as a useful metaphor for the limitations of ChatGPT. Unlike observations that it fails to perform certain tasks, I mean an intrinsic limitation, present even when it is successfully doing what it was conceived, designed and trained to do. We must not mistake its ability to carry on chatting, often delivering useful output, for actual knowledge or understanding. When it is wrong, it does not know it is wrong, but it will keep on chatting nonetheless. It is, after all, a state-of-the-art chatbot, and not the Oracle of Delphi.

Wittgenstein

A more generous account of this limitation might be found in the later writings of Wittgenstein. To summarise my half-remembered courses from the 1990s, one idea he presents is that, in language, there is no more substance to knowing the meaning of a word than knowing the right contexts in which to use it. This is a great description of what ChatGPT has been trained to do. By having access to examples, while it doesn’t know anything about quantum mechanics, it “knows” what is typically said about quantum mechanics, and knows it in exhaustive detail.
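This idea of “meaning as typical context of use” can be caricatured in a few lines of code. The sketch below is a toy bigram model over a hypothetical three-sentence corpus, nothing like ChatGPT’s actual architecture or scale; it merely illustrates the point that a system can learn which word typically follows which, and so “know” how words are used without knowing anything about what they refer to.

```python
from collections import Counter, defaultdict

# A toy corpus, standing in for the web-scale text a system like
# ChatGPT is trained on. The sentences are invented for illustration.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which: "meaning as use" reduced to
# knowing the contexts in which a word typically appears.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

# The model has learned usage, not understanding: it continues
# "sat" with "on" because that is what is typically said.
print(most_likely_next("sat"))  # prints "on"
```

The model has no concept of cats or mats, yet its predictions are fluent within the corpus it has seen. Scaled up by many orders of magnitude, with far richer context than a single preceding word, this is the sense in which ChatGPT “knows” what is typically said about a topic.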

In this sense, while it might not have knowledge or understanding, perhaps it is no different from some human beings who can write authoritatively about a subject, e.g. after reading what others have written or interviewing experts. I am assured by eminent old-school journalists who have also trained others, that this is what they were taught to do. It is a skill which can be learned, and just like ChatGPT, it often works well but sometimes fails spectacularly.

In Wittgenstein’s argument, however, knowing the meaning of a word amounts to much more than knowing rules about when to use it. Even more importantly, because many words are so versatile, it involves sharing the dispositions to use words in the same way that other people do. This cannot easily be codified in rules, but it is a perfect task for a deep learning algorithm.

This is perhaps not even a limitation, but might show that technologies like ChatGPT — provided that they can be trained on those dispositions — are capable of almost anything. If ChatGPT were trained to have the dispositions of the somewhat glib critics of its sonnet-writing abilities, perhaps it would write better sonnets. However, perhaps I am forgetting something. I have no idea whether those critics could write a better sonnet than ChatGPT, even though they might recognise a better sonnet when they read it.

[1] There are people who do publish fake news about quantum mechanics, e.g. Superluminal Communication Testable Within a Few Years

[2] Some research shows that children with Williams Syndrome have similar levels of social anxiety to other children, and that their linguistic development in general is typically delayed.
