Tomorrow I will be part of a breakout session at our Fall Faculty Conference here at Biola University. The title of our session is “Our Humanity in the AI Age”; we will focus on how the use (and potential abuse) of AI as a tool can be addressed in positive and productive ways. We have moved beyond the question of “should we use AI” and are moving on to the questions of “when” and “how.”
These days, when someone mentions using AI in an educational setting, they are almost always referring to ChatGPT and its less popular siblings, enchanted by the prompt-and-response conversations that feel miraculous. But that enchantment distracts us from the real breakthroughs that AI (of all types) will bring. From medical discoveries to self-driving vehicles, we will all benefit from where this science is taking us. So I will open my presentation by reminding the audience that figuring out how to use chat interfaces is not the end game of proper AI use.
But we’ve been here before. New technologies have caused a stir in academic settings: calculators, the Internet, and smartphones have all presented challenges of one type or another, but we have figured out how to deal with them. Even Plato warned against the new technology of writing, worried that it would weaken our ability to memorize (he was correct). In all of these cases, and many others, we have adapted and adopted with varying measures of success. Now, with the challenge of generative AI, we must adapt and adopt yet again, which is where the practical part of my presentation takes us.
Generative AI brings several challenges to the classroom environment, which I lay out as follows:
Learning process circumvention. AI tools create illusions of productivity while undermining essential cognitive development. Students are tempted to bypass cognitive struggle, leading to a loss of critical thinking skills.
Assessment validity crisis. There's a growing difficulty in evaluating genuine student learning and understanding. Traditional assessment methods are losing effectiveness, making it challenging to accurately measure authentic comprehension and knowledge acquisition.
Professional readiness gap. There are significant long-term implications for workforce preparation and resilience. Graduates may be unprepared for workplace realities, exhibiting over-reliance on AI tools (or the opposite) and a lack of essential resilience.
So then what is an instructor to do? How do we address these issues? There are many, many possible solutions. But again, we must adapt and adopt. I start by explaining what I consider the basic strategies: writing a clear syllabus statement, demonstrating proper use in the classroom, creating assignments that are “AI-resistant” and “AI-enhanced,” and, finally, developing new forms of assessment.
I go on to provide several examples of assignments and assessments that faculty may find useful.
AI-Resistant Assignments
These assignments are designed to require skills that AI cannot easily replicate, such as real-world interaction, personal reflection, and multimodal creativity.
In-Class Timed Writing: Use personalized prompts based on class discussions. This requires real-time, original thought that is difficult for AI to pre-generate.
Multimodal Projects: Ask students to combine writing with presentations or artwork. AI struggles to seamlessly integrate diverse creative outputs and understand human aesthetic choices.
Process Documentation: Require students to submit drafts, annotations, and reflections on their work. This demands insight into the human learning journey and self-assessment, which AI cannot genuinely perform.
Fieldwork with Observation: Have students engage in direct observation and data collection in the real world. This necessitates sensory input and the interpretation of nuanced, contextual data that AI cannot access.
AI-Enhanced Assignments
These assignments treat AI as a tool to be used critically and strategically, helping students develop essential skills for the modern workplace.
AI-Assisted Research: Allow students to use AI for tasks like summarization or brainstorming, but require them to critically evaluate the output for bias and accuracy, document their process, and integrate the findings with their own original analysis.
Critical Evaluation of AI Content: Have students analyze AI-generated text, images, or code against specific standards to identify strengths, weaknesses, and ethical implications, thereby developing their critical thinking and media literacy.
Prompt Engineering Exercises: Teach students to design and refine prompts to get high-quality outputs from AI models. This cultivates problem-solving skills and strategic thinking for effective human-AI collaboration.
Comparative Analysis: Ask students to compare AI-generated work with human-created examples to analyze differences in style, depth, and nuance. This deepens their understanding of both human and artificial capabilities.
AI-Resistant Assessments
To ensure academic integrity in the age of AI, assessments must evolve. These strategies focus on evaluating a student's genuine understanding and learning process.
Design Authentic Assessments: Focus on tasks that require direct application of knowledge or skills, such as presentations, oral exams, or practical demonstrations. This makes it far more difficult for AI to generate the final output and better ensures genuine learning outcomes.
Implement Process-Focused Grading: Emphasize evaluating the developmental journey of an assignment by looking at drafts, research notes, and reflections, rather than only the final product. This approach promotes original thought and makes it more difficult for AI to bypass the learning process.
Allow AI-Assisted Submissions: Provide clear frameworks for students to use AI tools transparently. This can be done by requiring documentation of their AI interactions, such as prompt logs or usage summaries, which fosters responsible AI literacy while maintaining academic integrity.
Develop Clear Rubrics: Create detailed rubrics that explicitly outline acceptable and unacceptable uses of AI. This clarifies expectations for students and provides a structured way to assess how responsibly they have integrated AI into their academic work.
I want to be clear that I did not come up with all of these myself. Many were drawn from sources I’ve found online, most prominently the University of Waterloo Centre for Teaching Excellence’s Guide to Assessment in the Generative AI Era and the Cornell Center for Teaching Innovation’s AI in Assignment Design. I highly recommend these two resources!
The presentation will conclude with some demonstrations and a faculty discussion. I expect lively conversation, good questions, and ideas for the direction of our AI Lab in the coming year.
What about you? I would love to get feedback on these ideas, your own ideas, and even links to more resources. I encourage you to share this post and start a discussion in the comments.