Teaching students to think critically to help them on the road to AI Literacy
Does it pass the smell test?
UPDATED February 2, 2024 to include UCLA WI+RE’s “Introduction to AI Chatbots” tutorial.
UPDATED May 5, 2023 to include UCLA Library’s Evaluating Information guide and WI+RE’s Appreciating the Value of Different Resource Types and Critically Evaluating Resources tutorials. UPDATED April 25, 2023 to include the CRAAP Test (Thanks for the tip, HumTech!).
I wanted to see if ChatGPT had learned anything new, or would at least provide a newer response to a question I had asked before: “Is teaching with ChatGPT inclusive?” The response was identical to the one from my February post.
So I worded the question differently: “How can I leverage teaching with ChatGPT so that it is inclusive?”
I really like Response #5: “Provide guidance on ethical AI use … including issues related to bias and discrimination.” Let’s unpack this response alongside an earlier response from ChatGPT: “my training data is derived from a vast amount of text available on the internet, which can reflect social biases and inequities that exist in society.” In UCLA’s “What is ChatGPT, and how does it relate to UCLA’s academic mission?” Virtual Town Hall on March 3, 2023, UCLA faculty member Dr. Safiya Noble noted that the content from AI engines is dominated by English-language knowledge that has been flattened by content owners, who might have their own biases.
One other thing to be mindful of is that ChatGPT will produce a well-written and very confident response, which at first glance doesn’t signal the reader to question it. Dr. Noble also shared the importance of teaching students to critically engage with these kinds of technologies as new and updated ones emerge. We can encourage responsible and ethical applications of ChatGPT.
Dr. John Villasenor pointed out that text from ChatGPT might pass the Turing test, but we can still ask whether what we’re looking at is the truth (based on our experiences with pattern matching, reasoning skills, etc.). As educators, we have an obligation to help guide our students through many types of literacy, including digital media and AI literacy.
We might not be experts in artificial intelligence or large language models, but we know a thing or two about sniffing out information that may be false. The very nature of a researcher is to discover and verify information.
UCLA’s WI+RE created the Understanding Misinformation: A Lesson Plan Toolkit to help students dig through many news sources, including social media. You can use WI+RE’s readings and activities to prepare students to learn about misinformation. Their Appreciating the Value of Different Resource Types and Critically Evaluating Resources tutorials help expose students to different ways information is shared and to practice critical self-awareness and thinking while reviewing the information.
While we live in a time when information (real or fake) is plentiful, Edutopia shares a couple of tips. First, encourage students to read laterally, opening multiple browser tabs to cross-check information. They highlight a study revealing that students are agile and can learn to vet information like experts! Additionally, consider the CRAAP Test and teach students to use it before they apply a resource to their own work. Lastly, check out UCLA Library’s Evaluating Information guide for other tools and tricks to cross-check information.
I’m going to close with this quote from Ben Nelson and Diana El-Azar, “a person can transfer knowledge from a known, learned context to another which is unknown and unlearned.”
Additional Readings and Resources on Generative AI Technologies
UCLA Writing Instruction + Research Education (WI+RE)’s “Introduction to AI Chatbots” tutorial
This tutorial introduces students to generative AI, including ChatGPT, and how it can be used outside of academics. You can also add WI+RE tutorials to your Bruin Learn site as learning modules for students to engage with.
“AI Eroding AI? A New Era for Artificial Intelligence and Academic Integrity,” from Faculty Focus.
This article includes tips on how to encourage students to ethically and responsibly select generative AI tools. I also appreciate this statement: “Critical teaching and learning, after all, is about knowing the rules so that you can apply them in your own way, very much developing independence from and by way of imitation.”
“How Generative AI Will Enable Personalized Learning Experiences,” from Campus Technology.
Listen (or read the transcript) to Campus Technology Editor in Chief Rhea Kelly and Dr. Kim Round, associate dean of the Western Governors University School of Education, discuss how learning experience designers can leverage generative AI to create personalized learning experiences for students. Here are some notable quotes:
[Generative AI can] “help learners become better researchers, better curators, and better decision-makers” … “removing that unnecessary friction in the mechanics of learning, so the student can truly focus on the task at hand.”
On the topic of responsible use of AI: “learners will need to know where the AI begins and ends and where they begin and end in that partnership.”
“Why a Reset is Not Enough to Save Higher Education,” by Ben Nelson and Diana El-Azar.
This piece reminds us why lifelong learning is important. We need to teach students to be resilient and adaptable, because the content we teach them today might not be relevant 50 years from now. Instead, we can focus on skills that make them better decision-makers for the unknown.
“AI Bots Can Seem Sentient. Students Need Guardrails,” from Inside Higher Ed.
Early engagement with technology helps students (and others) become familiar with ways to navigate the use of technology. This article shares a University of Mississippi instructor’s experience co-creating a course policy around AI writing tools with her class. “I’m still trying to be flexible for when new versions of this technology emerge or as we adapt to it ourselves.”
“OpenAI announces GPT-4 - the next generation of its AI language model,” from the Verge (March 14, 2023).
Though this piece highlights the new large multimodal model (allowing the tool to parse both text and images), it also does a good job of highlighting pitfalls, something to be mindful of as newer technologies emerge.
“ChatGPT will fundamentally change how we teach writing; that’s a good thing,” from EdSource.
ChatGPT is very good at summarizing its vast training data. This article shares strategies to humanize writing, make connections with current events, and emphasize the writing process as a mode of learning.
“5 Techniques to Promote Divergent Thinking,” from Edutopia.
An approach to a problem doesn’t always have one route; this is where divergent thinking comes in. “Divergent thinking is the process of generating many different ideas and possibilities in an open-ended, spontaneous, and free-flowing manner.” Edutopia shares 5 strategies you can employ in your classroom to encourage students to see a problem from many perspectives and come up with creative, authentic solutions.
“ChatGPT, Schools, and Non-Artificial Intelligence,” from Next Generation Learning Challenges (NGLC).
NGLC proposes an interesting point: “Asking the right question over getting the right answer.” This article also calls out that the AI might give us a clear and well-written response, but it might not be the response (or tone) we’re looking for. Re-read the sample text at the beginning of the article and think about the implications of tone and meaning.
The Use of Generative Artificial Intelligence in Teaching and Learning: Guidance for Instructors
UCLA teaching and learning partners have curated a document meant as a guideline for faculty and graduate student instructors on what to consider as artificial intelligence (AI) technologies emerge and evolve, and how to use them in responsible, ethical, and innovative ways within each discipline. You will find strategies for adopting AI technologies, ideas for adjusting activities, opportunities to communicate with your students about AI, and examples and resources for syllabus language and activities.