Should we teach our students to use AI?
Answering foundational questions to set the trajectory for classroom success
As I mentioned in my last post, I recently gave a presentation to my fellow faculty here at Biola University on the implications of AI for the classroom and teaching. My co-presenter focused on some of the higher-level concerns around AI and its implications for our humanity and our Imago Dei. Near the end of the presentation we had time for some interaction with the audience. Two of the questions struck me as particularly important, and I wanted to address them here.
Here are the two questions, worded as closely as I can remember to the actual question stated:
If our students are already coming to class knowing how to use AI, and most of them know it even better than we (professors) do, what is the point of us taking the time to teach them how to use it?
If AI is dangerous and is a potential threat to human flourishing, why would we want to encourage our students to use it at all?
I appreciated these questions because they force me to face my own assumptions about the place of artificial intelligence in the academic experience, namely that the proper uses of AI need to be taught.
Why teach students something they already know how to do?
The premise of the first question is that almost all of our students will come to our classroom already having learned how to use AI and, in many cases, will have more experience and expertise than we do. Of course, the assumption here is that we are talking specifically about generative AI and not other forms of AI. Beyond this initial question, the questioner made the point that we didn’t necessarily feel the need to teach students how to use the Internet, their smartphones, or social media, so what makes this different?
I began my answer by pointing out that, at least in some cases, we did teach students proper uses of the Internet and social media, and we still do. But the question remains, and it is worth answering.
Left to our own devices, we humans will gravitate toward reducing friction in our lives. The companies behind ChatGPT and other generative AI applications know this, and they want to make their tools the place to go to “make us more efficient.” But being efficient should not be our goal. There are times we need to struggle: with ideas, with expressing ourselves, with working through problems. There are also times we need to fail so that we will be motivated to improve and mature. Using AI to remove this friction is like going to the gym and watching someone else work out: the muscles will never grow.
And this is why we need to help our students by creating boundaries around the use of AI. Of course, this starts with us, as faculty, doing this in our own lives. But this is a new technology, and we will be learning right along with them in some cases. This process of curating the proper use cases for AI is something I will be focusing on in a class I am teaching at Biola this fall; you can read more about that in my post “When should we use AI?” linked below.
So the real answer, then, is to identify and prioritize the learning elements in our class with which we want our students to struggle and grow. For example, in a composition class, AI would most likely not be used to write drafts or perhaps even outlines, but it could be used to research sources and help with citation formatting. Alternatively, in a business class, AI could be used to compare and contrast data in a particular industry. In both cases, the instructor should first create a clear policy and then demonstrate that policy in class. I discussed some ideas for assignments like this, as well as other related ideas, in my post earlier this week, “AI in the Classroom,” linked below.
Why teach something potentially dangerous?
This question has bigger implications but is also easier to answer. Artificial intelligence is now part of the public consciousness. It is installed on every smartphone, and every Internet-connected computer has access to it. It is all around us. We must confront it head-on: as with previous technologies, it can create as many problems as it can solve. And it is not neutral: it is designed to entice us and make us dependent upon it. We must not let it.
So, as with the answer to the first question, we must become wise and learn when it is appropriate to use AI and when it is not. Further, when we do use it, we need to understand how to use it well, so that we are using it to our benefit and not just to get things done more quickly. And then we must transfer that knowledge to our students.
Here at Biola University, this is one of the primary reasons we started the Artificial Intelligence Lab. We want to provide our students, faculty, and staff with a place to get answers about AI, a place where we can apply our values to its use.
Start the conversation!
Let me know how you would add to these answers or if you have a different take. I’m listening.