A Critical Approach to Teaching AI

2025-02-20 5:20 PM

#teaching #technology #llm #large language model #science

This week, I was talking with some colleagues about the rate at which students are using AI to complete classwork. The short story is that their students are turning to AI tools for every writing assignment, regardless of topic or genre. A stark - and discouraging - instance was a free-writing assignment where students were asked to write reviews for five of their favorite things. It could be movies, music, tech, food...anything they found so good that they just had to tell someone else about it.

Most went to AI and then copied and pasted its thoughts.

Another teacher in the group said she had spoken to some recent graduates who said they have varying expectations in their college courses. Some professors have a blanket ban, others require students to use AI tools. I am firmly in the camp of teaching my students as they are now, not necessarily where they'll be in the future, but it really made us wonder if we're neglecting something important by not teaching students explicit skills in using the systems.

They asked how I handle AI in my chemistry classes and my short answer was that I've shifted heavily into labs this year. I don't have them doing much online research and, when I do reach for some kind of writing task, it's tied tightly to papers that connect directly to what we're doing in class. I ask open-ended questions, but they're following specific procedures and protocols that are unique to my room.

Earlier this semester, we were working on unit conversions. I did a little exploration with students on carbon dioxide release in combustion, calculating the amount of carbon dioxide released for every gallon of gas burned in cars. After we realized that we release a ton of carbon dioxide just from driving, I moved the discussion to AI tools. I taught about why an AI query is more expensive than a traditional search. We also looked at water use for data centers and at the economic cost of the two data centers Amazon and Microsoft are building. Students were shocked that both companies were given billions of dollars' worth of tax breaks to come and, ultimately, pour CO2 into the air unchecked.
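For anyone who wants to follow along with the arithmetic, here's a minimal sketch of the kind of unit-conversion chain we worked through. It approximates gasoline as pure octane and uses round, assumed numbers for density, fuel economy, and annual miles driven, so the figures are illustrative rather than exact.

```python
# Rough classroom-style estimate of CO2 released by burning gasoline.
# Gasoline is approximated as pure octane (C8H18); all inputs are assumptions.

GALLON_L = 3.785             # liters per US gallon
DENSITY_KG_PER_L = 0.74      # approximate density of gasoline, kg/L
CARBON_MASS_FRACTION = 0.84  # mass fraction of carbon in octane (~96/114)
CO2_PER_C = 44.0 / 12.0      # kg of CO2 formed per kg of carbon burned

def co2_per_gallon_kg() -> float:
    """Estimate kilograms of CO2 released by burning one gallon of gasoline."""
    fuel_mass = GALLON_L * DENSITY_KG_PER_L          # ~2.8 kg of fuel
    carbon_mass = fuel_mass * CARBON_MASS_FRACTION   # ~2.4 kg of carbon
    return carbon_mass * CO2_PER_C                   # ~8.6 kg of CO2

def co2_per_year_tonnes(miles_per_year: float = 11_500, mpg: float = 25) -> float:
    """Estimate metric tons of CO2 per year for an assumed 'typical' driver."""
    gallons = miles_per_year / mpg
    return gallons * co2_per_gallon_kg() / 1000

if __name__ == "__main__":
    print(f"CO2 per gallon: {co2_per_gallon_kg():.1f} kg")
    print(f"CO2 per year:   {co2_per_year_tonnes():.1f} metric tons")
```

With those assumed inputs it comes out to roughly 8-9 kg of CO2 per gallon and a few metric tons per driver per year, which is the scale of the "ton of carbon dioxide" conclusion above.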

I've written about my discomfort with these tools because of known issues, as well as about my experiments with local-only models, which had mixed results (soon to be revisited). I don't think this is zero-sum, where I have to jump in wholesale, but I cannot - and should not - ignore the culture shift that is happening with my students. I think my approach in the short term will be to pair each task with a specific, targeted analysis of what LLM tools can actually do and contrast that with what students think they can do.

Here's an example:

I'm going to do a March Madness-style element bracket with classes this year to break up the spring monotony. If I were to ask them to research an element, every student would take out their phone and copy/paste the first paragraph from the model into their slide or whatever.

This year, I can invite them to do that - use the model to generate some information about the lethality of the element. But then, anything they use must be backed up with a citation. So, use the model to start the process, but dive into actual verification of information and make that the practice. The model becomes an assistant.

Really, what I want to teach students is that relying on a model to gather answers supplants their own voice. I want to know what they think, what they care about, what they're frustrated with. I remind my classes that answering questions and taking time to talk is literally my job. I am there to make sure they are learning, not just that they're able to find the right words to answer a question.

Copying and pasting from an AI model might get the answer right, but it's removing the most important part of the answer - their own voice. The social and political climate right now is pushing to remove voices and I want to make sure that every teenager that comes through my room has - and can use - theirs.

A thought from Alan Levine

2025-02-21 14:41:08

I think this is one of the sanest responses and teaching strategies I have read. While it does feel broadly like we are awash in a wave of crap and the choices most people are reaching for, there is much room to be an exception. I’d love to be in your room and just observe. And I can’t guarantee but I bet you will be one of those teachers some of the kids hold long in their memories, like I do with my 10th grade Chemistry teacher, Blooma Friedman, who we all thought was a bit crazy at the time, but to this day I still use the way she taught us unit conversion. #CogDogPbzzragOybttvat2025

A thought from Brian

2025-02-22 00:25:17

Thanks Alan, I hope so. I tell them - frequently - that very few will probably go into science, let alone chemistry, and that's fine with me. I want them to see connections and be aware of all the ways science touches their lives at minimum. This one seems like a ripe opportunity if it can be framed correctly.

A thought from Tom

2025-02-21 19:39:00

I can tell you that at Middlebury College and the Middlebury Institute for International Studies things are similar to what you're describing. Each faculty member takes their own position and they are strongly encouraged to put that position in their syllabus. That may be a course-wide position or it might vary by assignment. If you haven't seen the crowd-sourced AI syllabus statement collection from Torrey Trust, it's worth looking at as a model that focuses on explaining the different choices about AI usage made by the teacher and how they relate to learning and other course goals. There's also my general struggle with AI as a term. I think it's come to mean LLM chatbots, but there are so many other paths and patterns that fall under AI and are discipline specific that any general statements fall apart for me. #notAllAIs? I could also just be a pain. I agree with you regarding lack of voice and just middle-of-the-road ideas. The Res Obscura guy has a pretty solid take on that. It's hard to argue against AI if people don't care about the product/process or, worse, actively dislike both. I see that in education and in work. Lots of people dislike what they're doing in exchange for grades/money and many of these relationships are very transactional.

A thought from Brian

2025-02-22 00:22:57

Yeah, we looked at that document together (I think you sent it during my AI rant this morning?) and I think that's where we'll end up, but mixing policies just makes it harder for students to keep track of what's expected where. I think we're all heading toward some mix of allowable use, but the overwhelming concern is still over what we're losing in the tradeoff, particularly given some of the more recent studies on losing retention over time.

A thought from Tom

2025-02-25 16:12:30

Sorry for the redundancy. I lose track of things with the tangled webs of AI-entwined conversations these days. It may also be that I'm just losing my mind. Regarding complexity, I think that's good? There really shouldn't be universal answers to most things and, while irritating, struggling with these decisions in specific contexts is useful work (both for students and faculty). I know that's not a popular line of thought, and in times when people are maxed out it's one more thing. At the base of all of it, I think, is care/interest in the work. If people don't care, I think they will offload as much as they can possibly get away with. How our system decides to deal with that will determine a lot of things.