It’s fair to say that I am a child of the Information Age. I may have missed the critical precursors, but I am old enough to have watched email go from exotic to commonplace to passé.
While many advances have been beneficial, or at least convenient, the positives were not unalloyed. Invasive apps that track our whereabouts in the real and virtual worlds, websites that suck up our data and sell it, fights over net neutrality, and the highly inequitable way in which information technology is distributed all come to mind.
Against this backdrop, we should be greeting large language models (often called "AI") with deep skepticism. No, the dangers of AI that Hollywood has warned us about repeatedly are not on the horizon. For now, we need not worry about machines that are capable of acting with self-determination and, like the HAL 9000, deciding that the best solution to a problem is to remove the humans (much less the dystopian worlds depicted in "The Terminator" or "The Matrix").
As a real estate appraiser, I have been watching automated valuation models, or AVMs, evolve. They harvest data from tax assessors and multiple listing services, then produce estimates of individual property values.
However, AVMs have never been to any of the properties they’re valuing. Anyone who has ever had neighbors knows that two houses on a street can be completely different, no matter how similar they appear on paper.
I am resistant to calling current large language models “AI” because there is a strong parallel between them and AVMs. While I have no doubt these models will continue to grow in sophistication, for now they are input-output machines. Some might observe that, at a philosophical level, so are humans. But for now, the differences are substantial.
Language model AIs learn to construct sentences based on the frequencies and orders in which words are used in millions of human-generated texts. They are searching for rules that will guide the assembly of human-sounding sentences. However, for the computer, the words are meaningless variables. The computer is comparing how often this six-letter string appears in front of that particular three-letter string. It does not know what a "yellow cat" is when it generates those words. The model just knows that, statistically speaking, that combination of words comes up sometimes in certain contexts.
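The word-frequency mechanism described above can be illustrated with a toy sketch. This is a vastly simplified stand-in for a real language model (real systems use neural networks over enormous corpora, not raw bigram counts), and the tiny corpus and function names here are invented for illustration. The point it demonstrates is the one in the paragraph: the program predicts the next word purely from how often word pairs co-occur, with no notion of what any word means.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word is followed by each other word."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def next_word(counts, prev):
    """Return the statistically most frequent follower of `prev`.
    The program has no idea what either word refers to."""
    if prev not in counts:
        return None
    return counts[prev].most_common(1)[0][0]

# A tiny made-up corpus, purely for illustration.
corpus = "the yellow cat sat on the mat the yellow cat ran"
model = train_bigrams(corpus)
print(next_word(model, "yellow"))  # prints "cat"
```

Here "yellow" is followed by "cat" twice in the corpus, so "cat" is the most probable continuation; the program reaches that answer without any concept of color or animals, which is the article's point writ small.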
Current AI has no way to test anything it generates against reality. At present, that inability is the biggest difference between "AI" and human intelligence.
Sometimes AI seems to invent answers to questions, including inventing footnoted sources. In reality, AI is always inventing answers to questions. All AI does is assemble sentences from highly probable word combinations. Sometimes the answers match reality, and that tricks us into thinking the machine understands something about the world. But the machine is no better at understanding the sentences it creates than an AVM is at understanding why someone would spend money on a home (or what money is in the first place).
No, I’m not a Luddite or waxing sentimental. The fact is that current "AI" can generate human-sounding strings of words, but those strings are completely meaningless to the computers that generate them and should be treated as meaningless by us as well. Any resemblance these sentences might have to reality is purely a matter of good statistics and clever coding.
While we should have discussions about whether AI is stealing jobs from writers or whether it might accidentally start a global catastrophe, what we really need to worry about is that AI can be used to create huge piles of false information. Even worse, this can be done both intentionally and unintentionally.
The better AI gets, the more convincing its misinformation will sound. These problems will compound as AI-generated falsehoods are cited by people, lending credence where there is none and making fact-checking more difficult.
For thousands of years we have solved problems by expanding our knowledge and passing it on. But when AI starts adding gibberish into the mix, we will be standing on the shoulders of imaginary giants.
Before AI progresses from occasionally enabling students to cheat on writing assignments to corrupting our shared, collective knowledge, we need to reexamine how useful these tools are and weigh that against the narrow benefits they currently yield.
Will Wood is a small business owner, veteran, and half-decent runner. He lives, works, and writes in West Chester.