Copernicus
Industrial Grade Linguist
Popular media and the press often spread misleading and outright false information about the state of the art in Artificial Intelligence. Most people take AI to be some kind of magical technology that will scale up to true intelligence in very short order. This has been true ever since John McCarthy coined the term "artificial intelligence" in the 1950s. Stanley Kubrick's 1968 film 2001: A Space Odyssey featured a manned expedition to Jupiter managed by an artificial intelligence that could not only understand human language but even read lips. It is now 2022, and we are not much closer to either a manned mission to Jupiter or a talking computer than we were in 1968. Nothing we are doing to process natural language today has any hope of scaling up into true artificial intelligence, even though we've made some astounding superficial leaps in mimicking it.
Dr. Emily Bender, a professor of linguistics at the University of Washington and a well-known researcher in Natural Language Processing, has written a nice article about misinformation on AI in the press:
Look behind the curtain: Don’t be dazzled by claims of ‘artificial intelligence’
...
Why are journalists and others so ready to believe claims of magical “AI” systems? I believe one important factor is show-pony systems like OpenAI’s GPT-3, which use pattern recognition to “write” seemingly coherent text by repeatedly “predicting” what word comes next in a sequence, providing an impressive illusion of intelligence. But the only intelligence involved is that of the humans reading the text. We are the ones doing all of the work, intuitively using our communication skills as we do with other people and imagining a mind behind the language, even though it is not there.
While it might not seem to matter if a journalist is beguiled by GPT-3, every puff piece that fawns over its purported “intelligence” lends credence to other applications of “AI” — those that supposedly classify people (as criminals, as having mental illness, etc.) and allow their operators to pretend that because a computer is doing the work, it must be objective and factual.
We should demand instead journalism that refuses to be dazzled by claims of “artificial intelligence” and looks behind the curtain. We need journalism that asks such key questions as: What patterns in the training data will lead the systems to replicate and perpetuate past harms against marginalized groups? What will happen to people subjected to the system’s decisions, if the system operators believe them to be accurate? Who benefits from pushing these decisions off to a supposedly objective computer? How would this system further concentrate power and what systems of governance should we demand to oppose that?
It behooves us all to remember that computers are simply tools. They can be beneficial if we set them to right-sized tasks that match their capabilities well and maintain human judgment about what to do with the output. But if we mistake our ability to make sense of language and images generated by computers for the computers being “thinking” entities, we risk ceding power — not to computers, but to those who would hide behind the curtain.
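If you want to see for yourself how little is behind the curtain, the "predicting what word comes next" loop Bender describes fits in a few lines of Python. Here is a minimal sketch using the Hugging Face transformers library; GPT-3's weights are not public, so the freely downloadable GPT-2 stands in, and the prompt and the ten-token horizon are arbitrary choices for illustration.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Arbitrary prompt chosen for this sketch; any text works.
prompt = "The computer does not understand the words it"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Append ten more tokens, one at a time. At each step the model merely
# scores which token is statistically likely to come next given the
# sequence so far; there is no meaning or intent, only pattern completion.
for _ in range(10):
    with torch.no_grad():
        logits = model(input_ids).logits
    next_id = logits[0, -1].argmax()  # greedy: take the single most likely token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))

That is the entire trick: statistics over which token tends to follow which. Any coherence you read into the output is, as Bender says, supplied by you.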