Use AI. But Don’t Trust It Blindly.

Trust, but verify. Ronald Reagan was famous for saying this when dealing with the Russians in the 1980s during the Cold War. It’s probably a reasonable mantra when you’re talking about nuclear arms and the fate of the world, but it has other applications. In fact, did you know that “trust, but verify” is actually a Russian proverb? “Doveryai, no proveryai” is better in Russian because it rhymes, and it was taught to Reagan by his adviser on Russian affairs in preparation for talks with Gorbachev.

How do I know all this? Well, I Googled it. And I ChatGPT’d it (is that a verb yet?). And then I verified.

This is something important to understand about ChatGPT and its ilk: when it’s used for something like looking up the origins of a phrase, it’s nothing more than a glorified web search. This is true whether we’re talking about ChatGPT-4 with live web search, Google’s Bard, Microsoft’s Bing, or even the version of ChatGPT that doesn’t have live web access but was trained on internet data: the responses are essentially search results. The format may be written paragraphs of prose rather than a list of hyperlinks, but the idea is the same.

Plugging “trust, but verify” into a Google search window turns up a list of sources. First, of course, is Wikipedia. Although Wikipedia has become more trustworthy than it once was, it is not a reliable source. I’m not being harsh here — in case you don’t feel like clicking that link, it takes you to a Wikipedia entry titled “Wikipedia is not a reliable source.” There are, of course, pages of other results, but we’ll get back to that in a minute.

Let’s turn to my other search engine. I asked ChatGPT, “what is the origin of the phrase ‘trust, but verify’?” It spat out three paragraphs of information on the Russian provenance of the phrase, Reagan’s use of it, and his introduction to it by Russian culture specialist Suzanne Massie.

Although I was using ChatGPT-4 with the live web browsing feature turned on, I didn’t see any citations in this response, so I typed in, “what is the source of that information?” At that point I saw the animated button showing that it was surfing the web and clicking links; the next two-paragraph response I got, which had similar information, had four footnotes. They all went to Wikipedia.

Next, I went to the chat-enabled version of Bing and asked it the same question. Its response was similar, if shorter; Bing, even in chat mode, is more of a search engine than a text creator, and arguably better suited to this kind of research than ChatGPT. It provided citations that included Wikipedia, a site I’ve never heard of called Leadergrow.com (which also came up second in the Google results), and two I have: FreeDictionary.com and PsychologyToday.com. Since the latter is a pretty well-known source, I clicked it, only to find an article on relationships by a writer I’d never heard of, citing information that probably came from Wikipedia. It wasn’t what I’d consider a valid primary or secondary source.

What makes a source valid, you may ask? Well, it varies, but the publication and author should be recognizable. The Psychology Today article met the first criterion, but for my purposes, not the second. I searched Google for “what makes a source valid” and chose the result from the Purdue Online Writing Lab (OWL), which I already know is a valid source (could this blog be any more meta?), so I’ll share what I found there. It recommends asking the following questions:

  • Who is the author? (Respected, credible, cites their sources)
  • How recent is the source? (Depends on the topic but newer is generally better)
  • What is the author’s purpose? (Is it an op-ed or a neutral piece?)

“Be especially careful when evaluating Internet sources!” warns OWL. “Never use Web sites where an author cannot be determined, unless the site is associated with a reputable institution such as a respected university, a credible media outlet, government program or department, or well-known non-governmental organizations.”

This brings us back to Wikipedia. And while that site itself shouldn’t be used as a source, there are excellent citations in many of its articles, and by looking at those little footnotes, you can generally find several valid sources. For instance, the “Trust but verify” page leads me to a couple of direct links to books by and interviews with Reagan adviser Suzanne Massie herself, links to articles in the Washington Post, L.A. Times, and other well-known papers, and other reputable sources. Upon verification, I feel comfortable that ChatGPT provided accurate information on the topic.

But what if ChatGPT had given me incorrect information, as it has been known to do? For the purposes of this blog, the worst-case scenario is that I look dumb. But what about situations where a wrong answer could actually cause some damage? In December 2022, the programming Q&A website Stack Overflow temporarily banned ChatGPT-generated text (a ban that is still in effect as of this writing), stating, “[B]ecause the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking and looking for correct answers. … The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with at least some subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure.” In other words, there is a lot of ChatGPT-produced content and not enough people to verify it.

So whether you’re using ChatGPT and other generative AI strictly for personal entertainment or to solve some of the world’s toughest problems, remember the Russian (or is it Reagan?) motto: Trust, but verify. As Wikipedia itself states, “Always be careful of what you read: it might not be consistently accurate.”
