Tech Talk: AI Hallucinations
Author: Victor Sample, MT43 News Treasurer
AI (Artificial Intelligence) has been a big topic for over 3 years. I see 30-40 different AI-based articles every single day. If you do any research or read any articles, you will see references to the terms Guardrails, Hallucinations and AI Slop. Strange terms for something that is supposed to be so transformative!
What do they actually mean?
Next up is actually my favorite: AI Hallucinations! Hallucinations have been a great source of amusement for me, and I have written about several of the hallucinations I have encountered.
How can software hallucinate? Let’s explore that concept.
NEXT UP: AI HALLUCINATIONS
From Microsoft Copilot: “An AI hallucination is when a generative model (like a large language model or image generator):
* Produces content that is factually incorrect, logically inconsistent, or entirely fabricated
* Presents that content with high confidence and fluency, making it hard to detect.
This can include:
* Invented quotes or citations
* Nonexistent legal cases or scientific studies
* Imaginary people, places, or events
* Incorrect math or logic presented as truth”
WOW! That sounds pretty scary. As it turns out, it is very scary!
I once wrote about my interaction with ChatGPT (the first generative AI chatbot to reach a broad public audience) regarding Gary Cooper and whether he was a native Montanan. Despite admitting that Cooper was born in Helena, that he graduated from Helena High School, and that Helena is indeed in Montana, ChatGPT refused to recognize that he was a native of Montana. ChatGPT even admitted that both the Merriam-Webster and Cambridge dictionaries define native as where someone was born, but it would not concede that Cooper was a Montana native.
In researching the failure, I learned that this kind of hallucination is common and stems from confusing factual criteria with cultural criteria. The fact that Cooper spent 3 years in school in England led ChatGPT to conclude he was not a native of Montana. AI does not recognize facts or have judgment; it relies on statistical probability.
In another case I wrote about, after Montana State University played in the FCS national championship game in January, I asked Copilot how many times MSU had played in the championship game since 2020; Copilot answered twice. I already knew that was incorrect. I did eventually get the correct answer, but it took a lot of effort. When I asked Copilot why it had failed, the answer was that it was a temporal issue: the event was too recent. Copilot said “modern systems” avoid that issue by doing searches. Since the event had occurred weeks ago, not years ago, “modern systems” seems like a strange distinction.
My other encounter with hallucinations is the scariest. After the state announced which companies were awarded federal BEAD (Broadband Equity, Access, and Deployment) money to provide statewide high-speed internet access, I asked if anyone had been awarded BEAD money to provide fiber cable to the east side of Canyon Ferry Lake. The answer was yes; Montana Internet Corporation had been awarded BEAD money to provide fiber cable to the east side of Canyon Ferry Lake. Copilot even cited the Broadwater County Broadband Board as the source of the information.
Since I am the chairman of the Broadwater County Broadband Board, I knew that the answer was totally wrong. After many questions, Copilot admitted that the original answer was not based on fact and that there is actually no evidence that BEAD money was awarded to provide fiber cable on the east side of Canyon Ferry Lake.
Today, I asked Copilot about that incident. The answer was that AI Engines do not answer based on facts; they answer based on the highest-probability word or phrase. The statistical probabilities led Copilot to answer that Montana Internet was awarded money to run the fiber cable. Copilot even admitted that many times AI Engines will FABRICATE sources that sound authoritative, like the Broadwater County Broadband Board.
AI Engines do not deal in facts; AI Engines do not have common sense; AI Engines do not exercise judgment (good or bad). When prompted with a question, the Engines use the probability of the next word or phrase to answer.
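For readers who want to peek under the hood, here is a minimal sketch, in Python, of what “answering with the most probable next word” looks like. The tiny training text, the word counts, and the most_probable_next function are all invented for illustration; a real AI engine uses enormously more data and far more sophisticated math, but the core idea is the same: it returns whatever is statistically most likely, with no step anywhere that checks whether the result is true.

from collections import Counter, defaultdict

# A toy stand-in for the mountains of text a real AI engine is trained on.
# Notice the model never stores "facts", only which words tend to follow which.
training_text = (
    "montana state played in the championship "
    "montana state won the championship "
    "montana internet provides service in montana"
)

# Count how often each word follows each other word (a simple "bigram" model).
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def most_probable_next(word):
    """Return the statistically most likely next word, true or not."""
    candidates = follows.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# The "answer" is simply whichever word appeared most often after the prompt
# word in the toy training text; there is no fact-checking step anywhere.
print(most_probable_next("montana"))  # prints "state"
print(most_probable_next("the"))      # prints "championship"

In this sketch, asking about "montana" will always produce "state" simply because that pairing is most frequent in the training text, whether or not it is the right answer to your question.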
A book on AI gave this as an example of an AI hallucination:
An AI engine is trained on documents up until 2022; after that, it receives no new information to incorporate into its statistical probabilities. When asked who won the 2023 World Cricket championship, it very confidently gave the wrong answer. It named two past champions, neither of which played in the 2023 championship match.
AI does not know facts; AI does not know what it does not know. When prompted, AI will always answer (unless human-written guardrails block the question) because it looks only at statistical probabilities rather than facts.
When answering the Cricket Championship question, the AI simply determined from past results that those two teams were the most probable answer!
AI is being incorporated into everything; sometimes you will know it is there, and many times you will not. AI Hallucinations are real; the answers you get might be correct, or they might be a hallucination.
That is scary!