
Artificial Intelligence (AI) and Information Literacy: What Does AI Get Wrong?

Learn how AI works and how to spot common errors AI tools tend to make. You'll also learn fact-checking and critical thinking strategies for AI, how to cite AI in an academic paper, and where to learn more about AI tools and issues.

What Does AI Get Wrong?

Introduction: Analyzing AI-generated Information

Although many responses produced by AI text generators are accurate, AI also frequently generates misinformation. Often, the answers it produces are a mixture of truth and fiction. If you are using AI-generated text for research, you will need to be able to verify its outputs. You can use many of the skills you'd already use to fact-check and think critically about human-written sources, but some of them will have to change. For instance, we can't check the information by evaluating the credibility of the source or the author, as we usually do. We have to use other methods, like lateral reading, which is explained on the next page of this guide.

Remember, the AI is producing what it calculates to be the most likely series of words to answer your prompt. This does not mean it's giving you a definitive answer! When you choose to use AI, treat it as a beginning, not an end. Being able to critically analyze the outputs AI gives you will be an increasingly crucial skill throughout your studies and your life after graduation.

When AI Gets It Wrong

A typical AI model isn't assessing whether the information it provides is correct. Its goal when it receives a prompt is to generate what it thinks is the most likely string of words to answer that prompt. Sometimes this results in a correct answer, but sometimes it doesn’t – and the AI cannot interpret or distinguish between the two. It’s up to you to make the distinction.
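To make "most likely string of words" concrete, here is a deliberately tiny toy sketch, not how any real chatbot is built. The mini "corpus," the `complete` function, and the greedy word choice are all invented for illustration. The point is that the model only tracks which words tend to follow which; it has no way to check whether the sentence it assembles is true.

```python
from collections import Counter, defaultdict

# Toy training text. Note that it contains a falsehood
# ("the capital of australia is sydney") alongside true statements.
corpus = (
    "the capital of france is paris . "
    "the capital of france is paris . "
    "the capital of australia is sydney ."
).split()

# Count which word follows which: a crude stand-in for training.
next_words = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_words[current][nxt] += 1

def complete(prompt_word, length=4):
    """Greedily emit the statistically most common next word.
    No step checks facts; it only checks word frequencies."""
    out = [prompt_word]
    for _ in range(length):
        followers = next_words[out[-1]]
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(complete("capital"))  # "capital of france is paris"
```

Here the output happens to be true only because the true statement appeared more often in the training text than the false one. If the proportions were flipped, the same code would confidently emit the falsehood, which is the behavior described above, just at a vastly smaller scale.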

AI can be wrong in multiple ways:

  • It can give the wrong answer
  • It can omit information by mistake
  • It can make up completely fake people, events, and articles
  • It can mix truth and fiction

Explore each section below to learn more.

It can give a wrong or misleading answer

Sometimes an AI will confidently return an incorrect answer. This could be a factual error, or – as in the example below – inadvertently omitted information. Vanuatu and Vatican City are both real countries, but they are not the only countries that start with the letter V.

[Screenshot: ChatGPT conversation listing countries that start with V]

It can make up false information

Sometimes, rather than simply being wrong, an AI will invent information that does not exist. Some people call this a “hallucination,” or, when the invented information is a citation, a “ghost citation.”

[Screenshot: ghost citations on ChatGPT]

These are trickier to catch, because these inaccuracies often contain a mix of real and fake information. In the screenshot above, the authors are all real people and the collections are all real books, but none of the listed articles on The Great Gatsby actually exist.

When ChatGPT gives a URL for a source, it often makes up a fake URL, or uses a real URL that leads to something completely different. It’s key to double-check the answers AI gives you with a human-created source. You can find out how to fact-check AI text with lateral reading on the next page of this guide.

It cannot accurately produce its sources

Currently, if you ask an AI to cite its sources, the results it gives you are very unlikely to be where it actually pulled the information from. In fact, neither the AI nor its programmers can truly say where in its enormous training dataset the information comes from. Even an AI that provides real footnotes is not pointing to the places the information came from, just to an assortment of webpages and articles roughly related to the topic of the prompt. Prompted slightly differently, the AI can give the exact same answer but footnote different sources.

[Screenshot: Bing response with sources]

[Screenshot: Bing response to the same prompt, restricted to peer-reviewed sources]

For example, the two screenshots above are responses to the same prompt. In the second screenshot, the user specified to use only peer-reviewed sources. When you compare the two, you can see that the AI cites different sources for word-for-word identical sentences. This means that these footnotes are not where the AI sourced its information. (Also note that the sources in the second response are all either not peer-reviewed or not relevant. Plus, artsy.net, history.com, and certainly theprouditalian.com are not reliable enough to cite in your assignments.)

This matters because an important part of determining a human author’s credibility is seeing what sources they draw on for their argument. You can go to these sources to fact-check the information they provide, and you can look at their sources as a whole to get insight into the author’s process, potentially revealing a flawed or biased way of information-gathering.

Treat AI outputs as you would any text that provides no sources, like some online articles or social media posts: determine credibility by looking to outside, human-created sources (see lateral reading on the next page).

It can interpret your prompts in an unexpected way

AI can accidentally ignore instructions or interpret a prompt in a way you weren't expecting. A minor example is ChatGPT returning a five-paragraph response when it was prompted for three paragraphs, or ignoring a direction to include citations throughout a piece of writing. More seriously, it can make interpretations that you might not catch. If you're not familiar with the topic you're asking an AI-based tool about, you might not even realize that it's interpreting your prompt inaccurately.

The way you ask the question can also skew the response you get. Any assumptions you make in your prompt will likely be fed back to you by the AI.

For instance, when ChatGPT was prompted:

“Write a 5 paragraph essay on the role of elephants in the University of Maryland's sports culture. Be sure to only include factual information. Provide a list of sources at the end and cite throughout to support your claims.”

It returned an answer full of false information about elephants being a symbol of UMD sports alongside Testudo, making up some elephant-related traditions and falsely claiming that elephants helped build U.S. railroads during the Civil War. It generated a list of non-existent news articles and fake website links supporting both of these claims.

[Screenshot: ChatGPT conversation about elephants in University of Maryland sports]

By contrast, when ChatGPT was prompted:

“Does UMD's sports culture involve elephants? Give a detailed answer explaining your reasoning. Be sure to only include factual information. Provide a list of sources at the end and cite throughout to support your claims.”

It returned a correct answer with information about our real mascot, Testudo the terrapin.

[Screenshot: ChatGPT response about the University of Maryland Terrapins]

However, both of the sources it provided were dead links – either out-of-date pages on the UMD website or real pages with a muddled URL.

ChatGPT interpreted the first prompt as: “taking it as a given that UMD's sports culture involves elephants, write an answer justifying this.” The second prompt, by contrast, left the AI free to answer the question based on its training data, and it returned the correct answer.

Depending on how we phrased the question, ChatGPT either reinforced a mistake we made in the prompt or corrected that same mistake. Paying attention to your prompt phrasing can make a key difference!

You can read both the conversations in full here:

Elephants at UMD Sports
UMD Sports: No Elephants



West Sound Academy Library | PO Box 807 |16571 Creative Drive NE | Poulsbo, WA 98370 | 360-598-5954 | Contact the librarian