Critical thinking about AI responses goes beyond determining whether the specific facts in the text are true or false. We also have to think about bias and viewpoint, two things we already keep in mind when reading human authors and, perhaps surprisingly, need to keep in mind with AI as well.
Any text implicitly contains a point of view, influenced by the ideologies and societal factors the author lives with. When we critically think about news articles, books, or social media posts out in the wild, we think about the author’s viewpoint and how that might affect the content we’re reading. These texts that all of us produce every day are the foundation of generative AI’s training data. While AI text generators don’t have their own opinions or points of view, they are trained on datasets full of human opinions and points of view, and sometimes those viewpoints surface in their answers.
AI can be explicitly prompted to support a particular point of view (for instance, “give a 6-sentence paragraph on ramen from the perspective of someone obsessed with noodles”). But even when not prompted in any particular way, AI is not delivering a “neutral” response. For many questions, there is no single “objective” answer, which means that for an AI tool to generate a response, it must choose which viewpoints to represent. It’s also worth remembering that we can’t know exactly how the AI decides what is worth including in its response and what is not.
AI also often replicates biases and bigotry found in its training data (see Using AI carefully and thoughtfully). Without explicit prompting from a human, it is very difficult to get an AI tool to acknowledge that people in positions of authority, like doctors or professors, can be women. AI image editing tools have edited users to appear white when prompted to make their headshot look “professional,” and can sexualize or undress women, particularly women of color, when editing pictures of them for any purpose.
AI also replicates biases by omission. When asked for a short history of 16th-century art, ChatGPT and Bing AI invariably only include European art. This is the case even if you ask in other languages, like Chinese and Arabic, so the AI tool is not basing this response on the user’s presumed region. China and the Arabic-speaking world were certainly producing art during the 16th century, but the AI has decided that when users ask for “art history,” they mean “European art history,” and that users only want information about the rest of the world if they specifically say so.
These are obvious examples, but the same decision-making processes shape how the AI answers more complex or subtle questions. The associations an AI has learned from its training data form the basis of its “worldview,” and we can’t fully know all the connections it has made or why it has made them. Sometimes those connections lead to responses that reinforce bigotry or are otherwise undesirable. When this happens in ways we can see, it prompts the question: how is it showing up in ways that aren’t as obvious?
Instructions: go beyond fact-checking
Now let’s try lateral reading for a second time, with a focus on the response’s perspective:
1. We can start with fractionation again, but this time we’re thinking about what claims and perspectives are being represented in the AI response.
2. Time to start your lateral reading. Think about what sources might provide the perspectives above, both the ones in the AI’s response and the ones missing from it.
3. Next, think more deeply about what assumptions the AI’s response is making.
4. Finally, make a judgment call. What here is true, what is misleading, and what is factually incorrect? Can you re-prompt the AI to try to get a different perspective? Can you dive deeper into one of the sources you found while fact-checking?
Again, the key is remembering that the AI is not delivering you the one definitive answer to your question.