Posts
I grew up visiting the central Adirondack Mountains with my family. A family friend had a cabin that we would visit at least once each year for a vacation. We didn't have much money, and this was an affordable way to get out of the house on a trip.
My last visit was more than 15 years ago. My wife (before we were married) made a 12-hour Amtrak trip from Indiana to Utica, NY, where we picked her up and then drove the last hour and a half to the house. Since then, we moved overseas and back, bought a house, had kids, built a house, and settled into midwest life. This year, my brother took the initiative to get our schedules worked out and make a trip back to the house together.
If you've never been to the region, it's the largest publicly protected area in the lower 48, but it is neither a national nor a state park. Private and public land are intermixed, which means there is a lot of space both to stay and to explore.

The house is just past Long Lake, the nearest town. Long Lake itself is unique because...well...it's really long. As a kid, it was the last landmark before we made it to "The Mountains." We didn't spend much time there on this trip, but I enjoyed driving through with our kids and then making one grocery run back to the small store right on the shore. Most towns along the way are small, but they maintain their own distinctive vibes. I'm a particular fan of Long Lake's because they have great mugs and an even better logo.

The house is on a smaller lake just up the road: Lake Eaton. We spent time in the canoe and on the paddle boat, enjoying the silky smooth water and listening to loon calls echoing off of Owl's Head Mountain to the southeast. We took several hikes up local mountains and even completed the Tupper Lake Hiking Triad with the older girls. It was a proud dad moment when we made it to the summit of that last peak.


This house has a very special place in my heart. This trip was punctuated by the probability that it was our last trip to this particular house. We're not in close contact with the owners any more...this trip worked out because of a funeral gathering back in New York earlier in the year and my brother floated the idea. While I love this house and the Adirondacks in general, it helped my wife and me solidify our resolve to take our kids to new places as much as we can. We took our first family camping trip back in 2017 to Wilderness State Park and then a big adventure to Cumberland Island National Seashore in 2023. We're looking forward to our next adventures.
More books read this month, which felt good. I'm a little bit closer to being back on track to finishing my goal of 40 this year.
Dune: Messiah - Frank Herbert
This was a difficult book. Much more of the story was internal - characters thinking about time and its effects on their own particular plot. I felt confused frequently and wondered how everything came together. The end was satisfying with the resolution making sense, though a lot was relegated to characters just talking about what happened to others who were mostly absent for the last third of the book.
Working in Public: The Making and Maintenance of Open Source Software - Nadia Eghbal
This was a great look at open source from a philosophical and practical perspective. I appreciated the distinctions made between the product - the code - and the people behind the code. I got it thinking that it would be a book I could use in a new course I'm teaching this year, but I don't think it really applies. Maybe some snippets here and there, but this is much more focused on libraries rather than products.
The Handmaid's Tale - Margaret Atwood
I picked this up not really knowing what to expect. This was an incredibly dark and horrifying look at the extreme ends of patterns which are clearly visible today.
As always, I log books as I finish on LibraryThing.
We're about halfway through 2024 and I haven't blogged in a while, so here's a simple check in on the goals I set at the start of the year, particularly the ones that are still in progress:
- Read 40 books: I've officially finished 16 books so far this year. I'm behind schedule, so there's still a chance to get to 40, but I need to make more time to read. My progress is on LibraryThing as well as on monthly posts here.
- Add blog comments: I did this back in January. Say hi!
- Cut phone use: Frankly, I'm failing miserably at this one. I'm setting more physical boundaries for myself. I'm also going to be asking my wife to help out by putting my phone away while we're at home and there are things to be done. Hopefully the barrier of having to ask for it will help me not reach for it.
- Ride my bike 150 miles: This is probably what I'm most proud of - I've ridden well over the 150 mile goal and I'm not slowing down. I've ridden 191 miles so far this year and I'm making progress in both length and pace of my rides. I've moved from OpenTracks to Strava, so feel free to connect with me there.
- Get two new courses approved: Both were approved this spring and both are full for next semester! I'll be teaching an advanced chemistry class as well as a web programming class for the first time. I've started some basic outlines, but I need to buckle down before the start of school to make sure I start strong in both.
Halfway through and I'm feeling pretty good.
I actually read some books this month. School finished a couple of weeks ago and I was able to devote more leisure time to finishing some books. All three were repeats this month, but they're some old favorites.
Dune - Frank Herbert
I went to see Dune Part 2 in May and that prompted me to re-read the book. It's not my favorite brand of science fiction, particularly the prescience stuff that really ramps up at the end. I found it hard to understand some of the prose describing Paul's spice dreams. The ending was also much more abrupt than I remember it being. I started Dune: Messiah right after finishing and I'm curious to see if a part three of the movie series gets made and how they tie the stories together.
Out of the Silent Planet - C.S. Lewis
This is an old favorite and either my second or third time through the book in the last ten years. This time I listened to it while we drove from Michigan to New York. The audiobook is only a few hours long, so it's very accessible. The imagery of exploring another planet is fascinating and the interactions between the human characters, the Malachandrans, and the Eldils are always thought-provoking.
On the Edge of the Dark Sea of Darkness - Andrew Peterson
This is also my second time through this book, but this time it's as a read-aloud with my third daughter. She is generally aware of the story, but asked me to read it out loud each night before bed. This is another wonderful adventure and redemption story (over the course of four books) that challenges some of the traditional structures of fantasy stories.
I didn't finish any books in May. Shame on me.
I'm in the middle of a couple of books, but reading is slow, mainly because we're up later during the long days and I go to bed that much more tired in the evenings. School ends tomorrow so my reading time will expand significantly and I can make more progress toward my reading goal.
In the past, I've written notes of encouragement to each student. They've taken different forms, from handwritten notes to printed slips to small conversations. This year, I decided to settle on a theme that would apply to all of my classes. This year's recommendation:
Set boundaries for yourself.
Frankly it stems from phone use, but it goes into many more aspects. There is value in hard work, and learning is hard work. There is dignity in working at things that are difficult instead of trying to take an easy road out. There is growth that can only happen when you work through a situation or problem.
I want to encourage my students to set boundaries for next year - limits they won't cross while they're in school. For some, it might be putting a phone in a backpack or locker. For others, it may be not opening an AI tool to get a quick answer. Or it's choosing not to engage in conflict.
Whatever the situation, having a boundary or a commitment to act in a particular way is an asset. Hopefully, some of them will put that to practice next year.
We celebrated our 15th anniversary this weekend. The kids got to stay with their grandparents for two nights so Lindsey and I could have a quiet weekend together. We got to take our bikes to Chicago and ride the Lakefront Trail along the shore of Lake Michigan. It starts down near 71st St south of the city and follows Lake Shore Drive for 19 miles.

The ride north had a beautiful sunny sky and a nice tailwind. We made a quick pit stop at Navy Pier and then made our way to a couple miles past Lincoln Park. We ended up turning around and stopping at the Oak Street Beach to rest our legs.

Rain ended up moving in (we thought it would wait until later in the day) and we had to race from Millennium Park - where we had some fantastic food truck tacos for lunch - toward 67th Street, where we parked. We managed to make it to a small beach pavilion with a roof as a heavy thunderstorm moved in and absolutely dumped rain.

In all, we rode about 31 miles today.
After getting home, I ran back out for some take out Thai food while she made homemade ice cream for dessert. We both agreed that this might need to be an annual trip.
I made a post on my Mastodon account which struck a chord and kind of went big (for me, anyways). It had to do with new resources coming out on how to cite AI-created materials in an attempt to teach students how to at least own up to when they use an AI to do something.
The specific use case is students who go to ChatGPT (or similar) and then use the response either directly or paraphrase what it spits out. This is the opposite of how citations work and we're doing a disservice if we don't teach students that fundamental truth.
ChatGPT and most other chat-based LLMs are built on material siphoned from the internet and remixed into something that sounds plausible when you ask a question. There is no new knowledge. There is no thought behind what it says.
It can't give citations because it doesn't know where the information came from. ChatGPT and other chat-AI systems do not care about origin and only see the Internet as content to consume. This has already kind of come up for me. In an attempt to try and make their writing look "more researched," students have started prompting it to include citations in responses. This often backfires because the articles listed are either closed publications we don't have access to or just plain don't exist. Most seem to be in a weird middle ground where the article could have the quote, but doesn't.
And yes, all of these things have actually happened to me.
It's also very telling that when OpenAI was asked to cite its training data, it wouldn't even do that. And now we want our students to cite those guys.
Citations are specifically designed to point at the actual person and say, "Look - look at what they did. It helped me form my ideas and this part is from them." Citations build upon knowledge and create new knowledge as a result. You cannot cite an AI because it is not the creator of the idea. It is, in fact, a plagiarism machine and cannot be used as an authoritative, citable source.
Now, it was brought up that there are necessary distinctions between LLM-based generative text AI and things like machine learning and computer vision systems, which work quite differently. I'm particularly interested in the research going into machine learning models to help predict cancers. The main difference between this application and ChatGPT is that it assists researchers using known data sources which can be verified via other means. The data sets are known and the models are built to perform a specific task. The human element is also critical. When doctors are using machine vision to spot early cancers, they're actively involved in the process and verify before moving forward.
The task of ChatGPT and other chat systems is to give plausible-sounding answers - there is no requirement that the answers are accurate. Learning how to search, evaluate, and then use information has only gotten harder as students are exposed to computers earlier and earlier in their lifetimes. Many only experience a computer (I'm including phones and tablets in this category) as an entertainment system and have never had to develop the skills to find and present information. Google is making it harder with the addition of "AI Overviews" on its search page, which I will definitely be avoiding, but I have no control over students being exposed to those overviews, so I need to do more teaching.
And that's the job, isn't it? To educate myself on these new systems so I can inform and educate my students. I had some fantastic teachers who taught me the explicit value and the importance of verifying information and then pointing back to it when developing my own ideas. That hasn't changed - we need students to continue to make new knowledge. Who we credit for making the knowledge has just become that much more important.
This is the time of year when the mist settles into the folds of the back pasture. The sun hasn't come up enough to burn it off and the woods and wetlands are echoing with birdsong. It's a special time of year.

The spring here is about two weeks ahead of schedule. My bees have been building up and I've already split a couple of hives because they were bursting by the end of April. Last night, I figured it was time to get my swarm traps out because the weather has been so good and I didn't want to miss an early swarm or two if I could help it.
I went out with my son and we were prepping a box when he said, "Look at all the bees!" I didn't think much of it because I keep spare equipment in my barn right now and there have been visitors taking leftover pollen and honey out of old frames.
Then I heard the roar.
This is the third swarm I've actually been present for. It is exhilarating listening to thousands of bees swirl up like a tornado and all congregate in one place. They came down and into the barn through the gable end (which is yet to get siding) and started settling into the empty boxes. Unfortunately, I don't have a picture this time - it all happened so quickly that I didn't have time to go back inside and grab my phone.
A swarm is a natural result of a colony outgrowing their home. The workers will build queen cells to raise a new queen, and the old queen will leave with 50-60% of the colony to find a new home. Bees want a nice, sheltered place to set up the new colony and my empty boxes and frames fit the bill.
I was able to find the queen and get her moved into a swarm box, which was a huge relief because it was much easier to move them into the apiary last night. I have the entrance disk set to keep the queen inside. The goal is to have her start laying eggs and workers start foraging. I'll give them a week or so before transferring the colony into their permanent home.
I was worried I had caught a swarm from one of my own colonies, but on checking, all of my boxes were still full - it didn't look like they'd lost half their populations, at least. So, I think this is a new catch, which will increase the diversity of my own apiary, and that's a good thing for the long-term health of the colonies. Over the last four seasons, I've now caught nine swarms, but this is by far the earliest. My previous earliest was May 18th in 2022, which was also a warm spring.
We have a barred owl who summers in the woods near our house. I'm not sure where he goes in the winter, but he was back this morning. His distinct call bounced around outside while I drank my morning tea.
The barred owl is back
He does not know I hear him
His call echoes by
Another marker of the seasons marching forward.
Another month, another set of books read. I finished more in April than I did in March and I'm going to attribute that to spring break at the start of the month.
Material World: The Six Raw Materials That Shape Modern Civilization - Ed Conway
This is a thoroughly researched, in depth look at the substances which make modern life possible. It's full of on-site descriptions of places the public can't go which turn the raw stuff of the earth into the products we buy. It's a little staggering how much of what we rely on day to day depends on just a few minerals coming out of the ground.
The Mountain in the Sea: A Novel - Ray Nayler
The cover art for this book is striking (seriously, go look at it). Set decades into the future, this is a dystopian(ish) view of humans with advanced AI technology trying to understand emergent consciousness in different organisms. One is a culture developing in octopi, the other is a highly advanced android.
Overall, I enjoyed the book which includes a lot of exploration of the idea of consciousness as well as a couple twists I didn't see coming.
Remarkably Bright Creatures: A Novel - Shelby Van Pelt
I did not mean to read two books featuring octopi in the same month, but things happen, ya know? This is very different from the books I normally read, but I enjoyed it a lot. It explores hurt - and healing - from three different stories in parallel.
An article came across my timeline this afternoon titled "Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays" (Fleckenstein, J. et al., 2024) (open access link). I'm in the middle of a large writing project with students where AI has popped up, so I took time to read through it.
The study is split into two parts, evaluating pre-service teachers' (N=89) and experienced teachers' (N=200) abilities to spot AI writing. Both groups also rated their confidence. From the summary,
Preservice teachers were unable to identify the source of the texts correctly, independent of whether they were student-written or AI-generated and independent of the text quality level. Experienced teachers were unable to correctly identify low-quality texts but more successful when it came to high-quality texts. This is at least partly due to the fact that they assigned most of the low-quality texts to being student-written.
Both groups were more confident when they assume texts to be written by students. This finding indicates a certain level of uncertainty when it comes to identifying AI-generated texts that holds true for both novice and experienced teachers.
I know I've been second-guessing a lot more this year and I would consider myself versed in what AI can (and can't) do reliably. I take a little bit of issue with the implication that this disqualifies teachers from being able to identify non-student writing, because this is testing the ability of teachers who have no experience with the source material.
I teach the same set of students for 180 days - at this point in the year, I've seen their writing in many different contexts. It isn't difficult to identify "augmented" writing because I know what their authentic work looks like. I would be interested in seeing studies which place writing labelled as student vs. AI in front of experienced teachers, but for pupils in their own classes rather than a general dataset. The context matters.
That said, I'm 100% on board with some of their recommendations based on the results of the study:
Educators may need to rethink their teaching and assessment strategies in light of the availability of AI-based tools. Whenever possible, instead of focusing on reproduction, educators might emphasize skills that AI cannot easily replicate (e.g., critical thinking, literature review).
They also note that AI detection tooling is still really bad at detecting this kind of stuff, so this can't be technologied away. It's going to take education, forethought, and more teaching students about what is - and isn't - okay when it comes to using these tools. Their summary makes the point much better than I can:
In summary, the finding that teachers cannot differentiate between student-written texts and AI-generated texts underscores the need for a thoughtful and ethical integration of AI in education. It calls for a reevaluation of assessment practices, increased awareness of AI's capabilities and limitations, and a focus on student skills that AI cannot easily replace.
It's worth taking time to read.
I've started to miss being able to search my blog. I've got nearly 700 posts over 12 years of writing. When it was running on Wordpress, search came for free because it's all in a database. When I moved to static, I lost that option. I think my categories and tags are pretty good, but if I want to reference something I've already written (or even check if I've already written something) I need to go to Google, use the site:blog.ohheybrian.com trick, and then dig through results. Not ideal.
I came across webdix a few months ago and it looks like a workable option. I like that it uses SQLite, that processing happens on the server, and that I could write some custom functions against the database.
Another option is an official pelican-search plugin which uses Stork to generate a search file that's then indexed and searched with some Javascript. They also have a WASM option, so that might be fun to try out.
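For the SQLite route, the standard library's FTS5 support is enough for a proof of concept. Here's a minimal sketch, assuming posts can be reduced to a slug, title, and body (the sample posts below are made-up stand-ins, and a real version would walk the Pelican content directory):

```python
import sqlite3

# Build an in-memory full-text index over a couple of example posts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE posts USING fts5(slug, title, body)")
posts = [
    ("eclipse-2024", "Total Eclipse", "We drove south to see totality..."),
    ("swarm-catch", "An Early Swarm", "The bees roared into the barn..."),
]
conn.executemany("INSERT INTO posts VALUES (?, ?, ?)", posts)

# FTS5 handles tokenizing and matching; ORDER BY rank sorts by relevance.
rows = conn.execute(
    "SELECT slug, title FROM posts WHERE posts MATCH ? ORDER BY rank",
    ("bees",),
).fetchall()
print(rows)  # [('swarm-catch', 'An Early Swarm')]
```

On a static site, the same index could be built once at deploy time and queried by a tiny server-side endpoint.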
I may also be overthinking all of this.
I saw a post, I don't remember where, about how to gauge the usefulness of a model. Instead of asking it something you don't know about (and, as a result, cannot verify the results), ask the model something about which you know a lot. Use your knowledge to judge the potential usefulness of a particular tool.
I'm not interested in using an LLM to generate new information - I'm more curious about the LLM-as-assistant potential where it extends what I know and can do on my own, but prompts me to maybe think differently about the topic. I've already mentioned the potential of using models to brainstorm and today, I did some tinkering with using an LLM to highlight trends in small datasets.
I took some completely anonymous data from my standard-tracking database and decided to ask a model some questions after loading the data into the prompt. First, I tried Mistral 7B locally, but it took forever to process (like forever forever - see my last post on using a local model on a mid-level machine) and my CPU fan went from 0 to the moon almost immediately. I decided not to cook my machine.
I tried both ChatGPT and Google Gemini. ChatGPT was able to do some interesting things, like categorizing the types of comments I leave on student work attached to a particular standard score:
Here are the distributions of different feedback comments based on the provided categories:
- Understanding of Particle Behavior: Occurs in 34 instances.
- Use of Scientific Vocabulary: Occurs in 24 instances.
- Specificity of Explanations: Occurs in 27 instances.
- Accuracy of Explanations: Occurs in 11 instances.
- Mistakes or Misconceptions: Occurs in 22 instances.
- Follow-up or Corrections: Occurs in 10 instances.
There were ~150 records in the set I pasted in and, looking at the same data in a CSV, I think it did a pretty good job of categorizing my specific feedback to students. It helped me notice that I'm recognizing what students can do (categories 1-3) but not always giving them next steps (category 6), which is a critical component of effective feedback.
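Those tallies are also easy to spot-check without a model. Assuming the comments get exported to a CSV with a category column (the column name here is a hypothetical stand-in for however the export labels it), a few lines of Python reproduce the counts:

```python
import csv
from collections import Counter

def category_counts(path):
    """Tally how many feedback comments fall into each category.

    Assumes a CSV with a 'category' column; adjust the key to match
    the actual export's header.
    """
    with open(path, newline="") as f:
        return Counter(row["category"] for row in csv.DictReader(f))
```

Comparing counts like these against the model's summary is a quick way to confirm it isn't inventing numbers.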
The model also made the categories on its own - this was built with text I'd sent to students specifically, not pre-categorized and not copy & pasted out of a comment database.
I know I want this kind of information about the feedback I'm leaving and asking ChatGPT to do the summary for me was surprisingly insightful as I worked through my reflection.
Gemini was not able to do this - it gave me some ideas about patterns I could investigate, but it did not do any of the in-session interpretation that ChatGPT did.
In this case, I didn't know the patterns the chat session brought out, but I knew the data source for the discussion, making it a productive use of my time. I wasn't shooting in the dark to find or transform information already in my mind (like a brainstorming or question-writing session would be). I was asking specific questions about specific data I knew I wanted to use. In this case, I did feel more efficient and I was able to glean some insight into how I can improve my practice. I can also show this to students as part of our reflective and metacognitive processing work.
While I think this is a potentially good use of time, I wish I could make this work on a local model so I'm not contributing to the energy requirements of running larger models. Because of that, this is not something I'm planning on doing on the regular. I think looking at trends at assessment points or using specific assignments as datasets to identify trends is more likely at this point.
Warning: This gets a little rambly, but I'm collecting a lot of my experimenting into one place. Sorry.
In my quest to toy with large language model (LLM) systems as a teacher, I went down the path of installing and using local models instead of reaching for one of the web-based services. The environmental impact is reduced, but the question of training data sources is still murky, at best. I don't feel great about this part, so that will probably be another blog post.
I started with Mozilla's llamafile system which allowed me to download one file and then run it locally. Running a llamafile starts a process which gives access to the model through the browser via localhost and gives the ChatGPT-like interface. As far as ease of running, it works well - pretty much download and you're ready to go. One complaint is that the app greets you with a bunch of options to tune the model but no information on what they are or which would make a difference. Under the hood, it uses llama.cpp to power the interaction and there's a helpful thread with suggested parameters.
My computer is pretty much stock - no fancy GPU and modest RAM (8GB). The model loaded fine, but it was by no means snappy. Most of the forum posts are full of people running 30+ GB of RAM or higher-end GPUs. For me, I was able to squeeze ~2.5 tokens/sec out of this machine. It was pretty slow.
To try and speed it up, I grabbed the smaller phi-2 llamafile which is meant to run on low-power machines. In terms of speed, this is by far the fastest, processing 15.5 tokens/second of input text and producing 5.2 tokens/sec. I was actually kind of surprised at how well this particular model did. The prompts needed to be very specific to work well and the responses tended to be simpler than what Mistral was returning.
I also tried out the LLaVA 1.5 model llamafile because it includes image interpretation. That brought my computer to a screeching halt while it was interpreting and describing a graph, so I think that's a nonstarter right now.
After doing more reading, I decided to try out using the larger Mistral model in the command line directly rather than relying on the web interface. There was a little bit of hinting on Reddit and other forums that the CLI interaction was faster. Given that the web app is just piping information from the running model, I doubted that it would make much of a difference, and I didn't see any major differences.
The llamafile format always loads the web app, so to have more control over the interaction, I had to download and build llama.cpp directly. Running the Mistral model in the CLI was about the same as running it through the browser UI.
I prompted each model with something I would use in school:
Write two questions asking students to explain the difference between covalent and ionic bonds.
| Model | UI | Total eval (sec) | Prompt eval time (tokens/sec) | Response time (tokens/sec) |
| --- | --- | --- | --- | --- |
| Mistral 7B | CLI | 53 | 3.3 | 2.5 |
| Mistral 7B | llamafile (web) | 32.2 | 7.6 | 2.1 |
| LLaVA 1.5 7B | llamafile (web) | 106.8 | 4.3 | 1.6 |
| Phi-2 | llamafile (web) | 17.2 | 13.5 | 3.5 |
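The throughput figures in the table came from the timing summary llama.cpp prints when a run finishes. The exact wording shifts between versions, but the numbers I wanted all end in "tokens per second," so a small scraper can pull them out of a captured log (the sample lines below are illustrative, not verbatim output):

```python
import re

def tokens_per_second(log: str):
    """Extract every throughput figure from a llama.cpp timing log."""
    return [float(m) for m in re.findall(r"([\d.]+) tokens per second", log)]

# Hypothetical timing lines in the general shape llama.cpp emits.
sample = """\
llama_print_timings: prompt eval time = 1200.0 ms / 9 tokens ( 7.60 tokens per second)
llama_print_timings:        eval time = 30000.0 ms / 63 tokens ( 2.10 tokens per second)
"""
print(tokens_per_second(sample))  # [7.6, 2.1]
```

Piping a few runs through something like this beats copying numbers out of the terminal by hand when filling in a comparison table.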
The main thing I noticed was that the responses were more concise through the CLI because my prompts didn't include the standard preamble loaded by the web app:
This is a conversation between User and Llama, a friendly chatbot. Llama is helpful, kind, honest, good at writing, and never fails to answer any requests immediately and with precision.
The Phi-2 responses were more general and much more conversational. When I prompted Mistral for two questions, I got two high-level questions which I could potentially use:
- What are the key differences between ionic and covalent bonding, and how do these bonding types affect the properties of atoms and molecules?
- Can you provide examples of common substances that exhibit ionic and covalent bonding, and describe how these bonding types contribute to the overall structure and behavior of these substances?
Phi-2 gave me this:
User: Write two questions asking students to explain the difference between covalent and ionic bonding.
Llama: Sure! What are your thoughts on how these types of chemical bonds differ? Can you provide an example for each type of bond, please?
Follow-up Exercise 1: Provide a detailed explanation of what is meant by "covalent" in chemistry. Give examples and explain the differences between covalent bonding and ionic bonding.
I'm not sure which I prefer right now. Phi-2 tended to ramble and gave less helpful responses to prompts. I liked the way Mistral was responding, but it took so much longer. The time sunk into re-prompting Phi-2 might be the same as just waiting longer for Mistral's first responses.
As far as which UI to use: I don't do much development, so I don't have a console open all the time, but I do like the minimal interface. On the other hand, I keep Firefox open all day, and a dedicated tab might be more useful.
All that said, I also could have just written those questions. I'm not doing enough assessment right now to need to generate all kinds of questions and there are already websites which will create all kinds of randomized chemistry questions with a couple clicks.
I don't really have words to express the feeling of seeing a total solar eclipse yesterday. I knew that I wanted to go after not being able to see the 2017 eclipse. I took the time off of school and we loaded the kids in the car to drive a couple hours south and set up for the day.
It was...unreal. There was so much to experience in our two minutes of total eclipse and it was hard to absorb everything. Everything from temperature change to how quiet it got was overwhelming.
I had an idea of what to expect during the totality, but what I didn't expect was the eeriness of the 98-99% coverage. The sun was still too bright to look at, but the atmosphere felt weird. I could tell that it was much darker than it should be and it felt like a really bad summer storm was rolling in. The light felt very strange and my body reacted with a bit of an adrenaline dump.
The other shocking part of the experience was the absolute blackness of the orb of the moon, like someone had taken a hole punch and popped a piece of the sky off. It wasn't at all like the Earthshine that illuminates some of the moon during the crescent phases - it was pure black nothingness that was overwhelming.
We were lucky to have a very bright corona swirling around the outside of the moon's disc, as well as Jupiter and Venus making appearances.
I tried to capture some of what I was feeling with some quick posts to Mastodon while my kids burned some energy off at a park on the drive home:
The fact that earth has these two orbs nearby AND one orb is huge relative to our planet size AND it's positioned in a way to look the same as the star that's just the right size, luminosity, and warmth to keep us all breathing.
It really is amazing that we have eclipses like this to witness at all.
Phil Plait talks about our local celestial arrangement in his book Under Alien Skies, noting that our star-planet-moon arrangement may be unique, which is pretty overwhelming to think about in and of itself. Having seen a total eclipse now, I appreciate the uniqueness even more.
Another slow month for reading, but I think I've determined that trying to read nonfiction during the school year is a mistake. I don't have the mental capacity to absorb information at the end of the day. I ended up slipping into some comfort fiction for the month while I slowly worked my way through a book about the six raw materials that changed the world. More on that one in May.
Leviathan Wakes - James SA Corey
I've read this book a couple of times, but I found myself back in it last month. I had finished watching The Expanse on Amazon Prime and decided to go back and read it again. It's a great series that sucks me in every time, and I'm sure I'll find myself back in that universe again.
Jurassic Park - Michael Crichton
I don't know why I picked this one up again, but I did. I think I had made a joke at dinner about movies that are good to watch with small kids, and I decided to A) watch the movie one night (Mrs. OhheyBrian was sick) and then B) re-read the book and compare the stories. I had forgotten how much detail differed between the two. I remember the movie as one of the first "grown-up" movies I was allowed to watch as a kid, so it feels nostalgic. The book doesn't hold as prominent a place in my memory, but I think I prefer it to the film.
I'm hoping to find more mental energy to read more new-to-me material in April. I have a couple on hold from the library that should make my next update.
The Markup had a post this month looking at educators' use of AI tools. I took some time to read it this week, first on my phone and then printed out so I could mark it up and think through the material more carefully. I'm still working out my own feelings on large-language model (LLM) AI and I wanted to be sure to read and digest this before reflecting on it.
Edtech Influence
Edtech is a weird space. It's full of influencers and personalities wanting to be "a name," and that influencer sphere is courted by big tech to create authenticity for what they're building. I have a really hard time staying neutral when reading these kinds of articles because those are the people you tend to see interviewed. The influencer vibe came out with a couple of interviewees in particular (one even self-identified as an influencer), and I think it's important to note this perspective because they're the ones most heard when it comes to setting the baseline for tool use in education.
The low-hanging fruit is usually time. "AI will save you time!" is like a teacher dog whistle. Our time is stretched thin, and if only we had more, we could do all the things. AI is a tempting way to make up for lost time, but to me, it's the worst way to consider these tools.
[He] plugs the topic into MagicSchool AI, along with his estimation of how much class time that teacher has to teach the particular subject, and lets the AI generate a set of lesson plans. "When a teacher sees how fast the AI works, they immediately sign up."
Part of the work of teaching is to make sure lessons are cohesive, aligned to skill development, and appropriate for the students in the room. If teachers are encouraged to just type a topic and timeframe into an AI, they have effectively removed themselves from the most critical part of the job. There is no thought given to the progression of skill development, the connections to other ideas, or the scaffolding that has to happen to help students build understanding. At the other end of the spectrum, another early (and dangerous) recommendation for teachers trying AI is to let it grade student writing. Giving feedback on a student's demonstration of understanding is the highest calling, and we shouldn't trade meaning for usefulness.
It's flashy and impressive, but it's also dangerous when it isn't paired with evaluation and introspection.
Aside from generating lesson plans, there's this undercurrent in education that everything a teacher needs to use should be free.
Kids just deserve the best education they can get, and if that means borrowing lesson plans from a bot, I'll take it. If we're just teaching lessons, it doesn't really matter where we got it from.
Deep breaths.
Wanting to give students a good education is not license to use tools poorly or to condone the theft of materials. Large language models are already in hot water because they incorporated copyrighted material and tried the legal excuse of, "oops?" Sorry in the name of progress!
Children deserve a fair, equitable, and high-quality education. That does not mean teachers should use AI to write machine-developed, untested, underdeveloped, and low-quality lesson plans to save time. How we act and how we justify our actions matter just as much as how students act.
Besides, if it's just a paper, why does it matter where they get it from?
Brainstorm and Inform
Others are more nuanced in their approach to AI. Another interviewee, Kim Maybin in Alabama, mentioned using ChatGPT to develop multiple versions of the same prompt for differentiation and validity of assessment:
...she often found herself creating additional structure or "sentence starters" to help her struggling students...
This is paralleled by the desire to use these tools to find patterns in data or to "rubber duck" ideas. It's more closely aligned with how Simon Willison advocates making LLMs work for you, an approach I'm slowly coming around to.
In the last two weeks, I used ChatGPT to generate three or four questions on a particular learning standard. It was late and I just didn't have the brain power to write the quiz questions on my own. But I knew what specific skill I wanted to assess, so I was able to write a prompt which generated a couple good starters. They were not scenarios I had used before, so they were novel to the students, but directly aligned to the content.
And that's the difference between using an AI tool to do the work versus using an AI tool to refine the work. The corpus of information has patterns which match well-known topics in the sciences (good for me). I can reliably get some starting points and then move on from there. The difference between this and other wholesale approaches to AI in education is that the human (me) is more heavily involved in the process rather than less. I don't know if starting with a brainstorm saved me a major amount of time, but it got my mind running by priming the idea.
And maybe that's a better metaphor. Using an LLM to prime the thinking process can reduce some of the cognitive load of starting cold. I'm still working through my own apprehensions about using AI, including the larger impact of the resources it takes to produce these models at all. I hope that, at least in education, the narrative starts to move away from the "magic" of the tool and picks up more nuance about the implications and ramifications of AI.
Tonight, on a whim, I decided to switch my CSS fonts to the system font stack instead of importing a style from Google Fonts. I also bumped the line height up just a tad to add a little more whitespace. I think Inter has a little bit more vertical padding? I'm not sure - when I played with the system font setting, it looked like it needed a little more breathing room vertically.
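For reference, the change amounts to something like this - a minimal sketch, assuming a typical system font stack and a guessed-at line-height value (the actual values in my stylesheet may differ):

```css
/* Use whatever UI font the visitor's OS ships with
   instead of importing Inter from Google Fonts. */
body {
  font-family: system-ui, -apple-system, "Segoe UI", Roboto,
    "Helvetica Neue", Arial, sans-serif;
  /* Bumped up slightly for more vertical breathing room. */
  line-height: 1.6;
}
```

The nice side effect is one fewer third-party request, since nothing gets fetched from Google Fonts at all.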
This may have been prompted by Leon Paternoster realizing that some Apple fonts had disappeared but also maybe not?
I may also be procrastinating on grading test corrections.