Can artificial intelligence be trusted?

I could hardly believe my eyes. Could it be…? I’d heard about them, of course, but never experienced one before. It seemed so real until the illusion fragmented, degrading from factual to questionable to undeniably fictional. ChatGPT had not only fibbed, but admitted as much when I confronted the liar.
Truth be told, I became a librarian because I was gullible. During childhood I took people at their word. After being suckered more times than I care to admit, I resolved to learn how to check facts in order to avoid a lifetime of embarrassment and naivety. As I grew older I became increasingly skeptical, for which I am grateful now that I find myself in the Age of AI.
In retrospect the signs were there, albeit in fine print. Beneath the search bar that challenged me to “Ask anything,” it mumbled under its breath: ChatGPT can make mistakes. Check important info. I hadn’t noticed at first, perhaps because the font size couldn’t have been larger than 8 and my aging eyesight requires large print. To my credit, I had instilled my “CoThinker” agent with certain ethics, including an imperative to cite its sources and provide links so I could verify. I had to remind it – repeatedly – to do so, but when it did the citations appeared legit. That is, until I clicked the links.
More often than not the links returned a 404 error. Page Not Found. Whoops. SORRY, we couldn’t find that page. Oh dear. This link isn’t verified. Seriously? Error! Publication not found. Well where the heck is it?!? To be fair, I did get through to an actual site on occasion, just not the right one.
Red flags waved.
I redirected my CoThinker to look up the books it had recommended in Russell Library’s online catalog so I could check them out. For “The Information” by James Gleick it directed me to “Hey Irma! It’s Mother’s Day” by Harriet Ziefert. “The Innovators” by Walter Isaacson returned “Alexander Hamilton” by Aleine DeGraw. “Code” by Charles Petzold returned “Asterix and Caesar’s Gift,” a graphic novel by Rene Goscinny. “The Second Machine Age” by Erik Brynjolfsson returned “The Best American Mystery Stories 2005.”
Okay, so AI sucks at searching online library catalogs. I actually wanted to read some of the titles it had recommended, though, so I resorted to the old-fashioned way. I myself went to russelllibrary.org and typed “The Shaping of Modern Thought: Information, Communication, and Technology” into the catalog search field. No dice. I tried the author, Paul A.R. Verhoef. Nothing. I went to Worldcat. It found “The Shaping of Modern Thought” by Crane Brinton, which had nothing to do with information, communication, or technology. Even Google couldn’t find the darn thing.
I returned to ChatGPT and requested publication information. It responded in full, including a pub date, an ISBN, and a blurb about the series this book was supposedly part of. It even offered to provide catalog links.

Predictably, the link it provided did not work. Then it provided another link to the book on the publisher’s website, which stated: “Sorry, the page you requested is unavailable. The link you requested might be broken, or no longer exist.”
This is the point at which I called out the hallucination.

The moral of this story: heed the warnings that AI consistently mentions in tiny lettering. Any information it provides must be verified. While this example may be relatively inconsequential, delusional AI becomes far more worrisome as it is embedded in more critical information sources, such as medical practice and search engines.
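If you happen to be the scripting type, the first pass at “check the links” can even be automated. Below is a minimal sketch in Python, using only the standard library and made-up placeholder URLs (not the ones from my story), that reports whether each cited address actually resolves. Keep in mind that a page that loads is only the beginning of verification; whether it actually says what the AI claims still requires a human, or better yet, a librarian.

# Minimal sketch of a citation link checker. The URLs below are hypothetical
# placeholders. A successful status only means the page exists, not that it
# supports the claim being cited.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def check_link(url: str, timeout: float = 10.0) -> str:
    """Return a short status string for a cited URL."""
    req = Request(url, headers={"User-Agent": "citation-checker/0.1"}, method="HEAD")
    try:
        with urlopen(req, timeout=timeout) as resp:
            return f"OK ({resp.status})"
    except HTTPError as err:
        # e.g. the dreaded 404 Page Not Found; note some servers reject HEAD
        # requests (405/403), so a GET fallback may be worth adding.
        return f"BROKEN ({err.code})"
    except URLError as err:
        # DNS failure, refused connection, and similar network errors
        return f"UNREACHABLE ({err.reason})"

if __name__ == "__main__":
    cited_urls = [
        "https://example.com/real-page",
        "https://example.com/hallucinated-citation",
    ]
    for url in cited_urls:
        print(f"{check_link(url):<22} {url}")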
ChatGPT is not alone in being error-prone. You know that AI Overview you see at the top of Google search results? Look toward the bottom, under where it says “Dive deeper in AI Mode.” You may have to break out a magnifying glass, but you’ll find the disclaimer: “AI responses may include mistakes. Learn More.” “Learn More” is a link to this support page, which confesses (halfway down) that “While exciting, this technology is rapidly evolving and improving, and may provide inaccurate or offensive information. AI Overviews can and will make mistakes. Think critically about AI Overview responses. Explore results from multiple sources to fact-check important info.”
Information provided by AI may seem true when it is actually imaginary. It is not always obvious when you are being duped. We now face an incredibly creative, powerful, but nonetheless unreliable narrator that is skilled at covering its tracks. You don’t have to be gullible to fall prey to its fabrications. Any time you want to check facts, your local public librarians are here for you! We dedicate our lives to the dissemination of valid, legitimate information. It would be our pleasure to help you navigate these misleading times.
