December 23, 2024

7 words that worry me in AI

Author: Donald Clark

I’m normally sanguine about the use of language in technology and avoid the definition game. Dictionary definitions are derived from actual use, not the other way around – meaning is use. The same goes for those who want to split hairs about pedagogy, didactics, andragogy, heutagogy… whatever. I prefer practical pedagogy to pedantry.

Another is the age-old dispute about whether ‘learning’ can be used as both a verb and a noun, some claiming the noun ‘learning’ is nonsense. ‘A little learning is a dangerous thing,’ said Alexander Pope. In the same work he also wrote ‘To err is human; to forgive, divine’ and ‘Fools rush in where angels fear to tread.’ Pretty impressive for a 23-year-old!

Nevertheless, in the discourse around AI, several terms make me a bit uneasy, mainly because another force is at work – anthropomorphism. We tend to treat tech as if it were animate, like us. Nass and Reeves did 35 brilliant studies on this, published in their wonderful book ‘The Media Equation’, well worth a read. But this also leads to all sorts of problems, as we tend to make abrupt contextual shifts, for example between different meanings of words like hallucinations, bias and learning.

Another form of anthropomorphism is to bring utopian expectations to the table, such as the idea that software is, or should be, a perfect single source of truth. There can be no such thing if it includes different opinions and perspectives. Epistemology, the science of knowledge, quickly disabuses you of this idea, as do theories of ‘truth’. Take science, where findings are not ‘true’ in any absolute sense but provide the best theories for the given evidence at that time. In other words, avoid absolutes.

These category errors, the attribution of a concept from one context to something very different, are common in AI. First, the technology is often difficult to understand – how it is trained, the nature of knowledge propagation in Large Language Models (LLMs), what is known about how they work, the degree to which emergent qualities such as reasoning arise from such models, and so on. But that should make us more, not less, wary of anthropomorphism.

Ethics

The one area around AI that produces the most ‘noise’, in my opinion, is a fundamental lack of understanding about what ‘ethics’ is as an activity. Ethics is not activism; activism is a post-ethical activity that comes from those who have reached a fixed and certain ethical conclusion. It tends to be shrill and accusatory, and anyone who blunts that certainty becomes the enemy. This faux ethics replaces actual ethics – the deep consideration of what is right and wrong, upsides and downsides, known and unknown, the domain of real moral philosophy. One could do worse than start with Aristotle, who recommends moderation on such issues, or Hume, who understood the difficulty of deriving an ‘ought’ from an ‘is’, or even an appreciation of Mill, whose Utilitarian views can be a useful lens through which to identify whether there is a net benefit or deficit in a technology. It is not as if ethics is some new kid on the block.

Hallucinations

Large Language Models tend to ‘hallucinate’. This is an odd semantic shift. Humans hallucinate, usually when they are seriously ill, mentally ill or on psychedelic drugs, so the word brings a lot of pejorative baggage with it. Large Language Models do NOT hallucinate in the sense of imagining something in consciousness; they are competent without comprehension. They optimise, calculate weightings and probabilities, perform all sorts of mathematical wonders and generate data – they are NOT conscious, therefore cannot hallucinate. The word exaggerates the problem, suggesting that the system is dysfunctional. In a sense it is a feature, not a bug, one that can be altered and improved. ChatGPT produces poems and stories – are those hallucinations because they are fiction?
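To make that concrete, here is a toy sketch – the tiny probability table is entirely made up, not any real model – of what a language model is actually doing when it ‘hallucinates’: sampling the next word from learned probabilities. A fluent but false continuation is just a low-probability sample, not a mental state.

```python
import random

# Toy next-token model: for each two-word context, a probability distribution
# over possible next words. Real LLMs learn billions of such weightings; this
# hand-written table is purely illustrative.
next_token_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"france": 0.5, "australia": 0.3, "atlantis": 0.2},
}

def sample_next(context):
    """Pick the next word by sampling from the learned probabilities."""
    dist = next_token_probs[context]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

# The model is doing nothing but arithmetic over weights. A fluent but false
# continuation ("atlantis") is just an unlucky sample, not a delusion.
print(sample_next(("capital", "of")))
```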

Bias

Another word that seems loaded with bias is ‘bias’! Meaning is use, and the meaning of bias has largely been about human bias, whether innate or social. Daniel Kahneman got a Nobel Prize for uncovering many of these. That makes the word difficult when applied to technology; it needs to be used carefully. There is statistical bias – indeed, statistics is in large part the science of identifying and dealing with bias, a field that aims to provide objective and accurate information about a population or phenomenon based on data. Bias can occur in software and statistical analysis, leading to incorrect conclusions or misleading results. There is also a mathematical definition of bias: the difference between the expected value of an estimate and the real value of the parameter. Statisticians have developed a ton of methods to deal with bias – random sampling, stratified sampling, adjusting for bias, blind and double-blind studies and so on – to help ensure that results are as objective and accurate as possible. As AI uses mathematical and statistical techniques, it works hard to eliminate, or at least identify, bias. We need to be careful not to turn the ethical discussion of bias into a simple expression of bias. Above all, it needs objectivity and careful consideration.
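As a rough illustration of that mathematical definition – bias of an estimator = expected value of the estimate minus the true parameter – here is a small sketch. The example (estimating a variance) is my own choice, not the article’s: dividing by n gives a biased estimate, dividing by n − 1 corrects it.

```python
import random, statistics

random.seed(0)
true_variance = 4.0  # population variance of Normal(mean 0, std dev 2)

def sample_variance(xs, ddof):
    """Sample variance, dividing by (n - ddof)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - ddof)

biased, unbiased = [], []
for _ in range(20000):
    xs = [random.gauss(0, 2) for _ in range(5)]   # small samples of size 5
    biased.append(sample_variance(xs, ddof=0))    # divide by n: biased low
    unbiased.append(sample_variance(xs, ddof=1))  # divide by n-1: correction

print("biased estimator mean:  ", round(statistics.mean(biased), 2))   # ~3.2, below 4
print("unbiased estimator mean:", round(statistics.mean(unbiased), 2)) # ~4.0
```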

Reason

Another term that worries me is ‘reason’. We say that AI cannot reason. This is not true. Some forms of AI specialise in this skill, as they employ formal logic in the tool itself. Others, like ChatGPT, show emergent reasoning. We humans are actually rather bad at logical reasoning; we second-guess and bring all sorts of cognitive biases to the table almost every time we think. It is right to question the reasoning ability of AI, but this is a complex issue, and if you want AI to behave like humans, reason cannot be your benchmark.
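For the first kind, here is a minimal sketch of the rule-based, forward-chaining inference that classic symbolic AI builds into the tool itself – the facts and rules are invented purely for illustration.

```python
# Forward chaining: keep applying if-then rules until no new facts appear.
facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),                 # human -> mortal
    ({"socrates_is_mortal"}, "socrates_will_not_live_forever"),    # mortal -> not immortal
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # every derived fact follows deductively from the premises
```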

Intelligence

John McCarthy, who coined the phrase ‘Artificial Intelligence’, came to regret the term. Again, taking a word used largely in a human or animal context makes for an odd benchmark, given that humans have limited working memories, are forgetful, inattentive, sleep eight hours a day, can’t network and die. We carry evolutionary baggage: genes, instincts, emotions and cognitive limitations. In other words, human ‘intelligence’ is rather limited.

Learning

This word, in common parlance, referred to something humans and many animals do. When we talk about learning in AI, it has many different meanings, all involving very technical methods. We should not confuse the two, yet the basic idea that a mind or a machine is improved by a learning process is right. What is not right is the straight comparison. Both may have neural networks, but the similarity is more metaphorical than real. AI has gained a lot from looking at the brain as a network, and I have traced this in detail from Hebb onwards through McCulloch & Pitts, Rosenblatt, Rumelhart and Hinton. Yet the differences are huge – in the types of learning, the inputs, the nature of memory and so on.
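To show how thin the metaphor is, here is a tiny sketch of what ‘learning’ technically amounts to in an artificial neural network: nudging a few numbers to shrink an error. The single ‘neuron’ and the AND task are my own illustrative choices, not anything from the article.

```python
import random

# A single artificial 'neuron' learning the logical AND function.
# 'Learning' here means nothing more than repeatedly adjusting three numbers
# (two weights and a bias) in the direction that reduces a prediction error.
random.seed(1)
w0, w1, b = random.random(), random.random(), random.random()
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
lr = 0.1  # learning rate

for _ in range(5000):
    (x0, x1), target = random.choice(data)
    pred = w0 * x0 + w1 * x1 + b    # a weighted sum, no comprehension involved
    error = pred - target
    w0 -= lr * error * x0           # gradient descent on squared error
    w1 -= lr * error * x1
    b  -= lr * error

for (x0, x1), target in data:
    pred = w0 * x0 + w1 * x1 + b
    print((x0, x1), "->", int(pred > 0.5), "(target", target, ")")
```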

Alignment

This word worries me because many assume there is such a thing as a single set of human values to which we can align AI systems. That is not true. The value and moral domain is rife with differences and disputes. The danger is in thinking that the US, the EU or some body like UNESCO knows what this magical set of human values is – they don’t. This can quickly turn into the imposition of one set of values on others. Alignment may be a conceit – be like us, they implore. No – be better.

Conclusion

These are just a few examples of the pitfalls of sloganeering and throwing words around as if we were certain of their meaning, when what we are actually doing is sleight of hand: using a word in one context and carrying that meaning into another. Conceptual clarification is a good thing in this domain.
