Artificial Intelligence (AI) and Information Literacy: Using AI carefully and thoughtfully

Learn how AI works and how to spot the common errors AI tools tend to make. You'll also learn fact-checking and critical thinking strategies for AI, how to cite AI in an academic paper, and where to explore AI tools and issues in greater depth.

More Avenues to Explore

Machine Learning
AI Tools
Copyright and Labor Issues

Using AI Carefully and Thoughtfully

Introduction

Alongside the new possibilities these AI-based tools open up, there is plenty to be careful about as you assess if and when you want to use them. Start thinking through the major considerations in this two-minute overview video from Katie Shilton, Associate Professor in the College of Information Studies and Co-Director of the BS in Social Data Science at the University of Maryland, and Co-PI of The Institute for Trustworthy AI in Law & Society (TRAILS).

Accuracy

Can you trust that the information you receive from these AI-based tools is correct? Not without double-checking. Many chatbots, such as ChatGPT, were designed to produce content that seems realistic, so they present inaccurate content with the same confidence as accurate content, and it's up to you to determine which is which. You'll need to employ a variety of strategies to verify information before assuming it is correct. Check out the Lateral Reading page to learn more.

Copyright and labor

Where does the content come from? Because machine learning requires enormous amounts of training data, many models are trained on information scraped from the internet. Artists and authors have criticized AI-based tools for using their work without compensation or credit. If an AI-based image generator can produce work in the style of a certain artist, should that be seen as stealing, or as paying homage?

Bias

While it may be tempting to think of an output from an AI-based tool as free of bias, that is not the case. Because machine learning models are trained on real-world datasets, and because the world contains bias, outputs from these models may replicate or even exacerbate the biases we see in the world around us.

Security and privacy

It is safe to assume that, in one way or another, any information you put into an AI-based tool is being used to further train the machine learning model. If you choose to use these tools, make sure you never put personal or sensitive information about yourself or anyone else into your chats. You should also read through the user agreement before signing up for a particular service and decide for yourself whether you are comfortable agreeing to the terms. If one of your class projects requires a particular technology that you do not wish to create an account for, you can ask your teacher for an alternative way to complete the assignment.

Learn About Bias in AI

This video is a walk-through of an MIT Media Lab study of how well popular face-recognition software identifies people of different genders and skin types.

"Gender Shades is a preliminary excavation of inadvertent negligence that will cripple the age of automation and further exacerbate inequality if left to fester. The deeper we dig, the more remnants of bias we will find in our technology. We cannot afford to look away this time, because the stakes are simply too high. We risk losing the gains made with the civil rights movement and women's movement under the false assumption of machine neutrality. Automated systems are not inherently neutral. They reflect the priorities, preferences, and prejudices—the coded gaze—of those who have the power to mold artificial intelligence." Video produced by Joy Buolamwini and Jimmy Day

Harm Considerations of Large Language Models

The interactive image above identifies some of the harms and risks that have been described in the literature. We say “some” deliberately, as more and more research and analyses are being published by those most impacted by them. It is worth mentioning that there exists considerable overlap or intersectionality (compounded harm) between the categories mentioned above. From "ChatGPT? We need to talk about LLMs", by Rebecca Sweetman and Yasmine Djerbal, University Affairs Affaires universitaires, May 25, 2023.


West Sound Academy Library | PO Box 807 |16571 Creative Drive NE | Poulsbo, WA 98370 | 360-598-5954 | Contact the librarian