Posts

Conditioning and Learning

There's no question in my mind that we go through stages as we learn new things. Much learning seems to be rooted in habits and routines, sometimes without us realizing it. Charles Duhigg (2012) quotes psychologist William James: "All our life, so far as it has a definite form, is but a mass of habits." Later in the same chapter, Duhigg notes a Duke University study showing that up to 40% of our actions every day are driven by habits, not decisions. Habits are learned behaviors - they're developed over time. A habit is simply an automatic response to a stimulus of some kind in order to reach a reward.

The good news is that these habits can be overridden. Bad habits can be replaced with good ones, but we need to be aware of what triggers those routines in the habit loop. When we're aware of our "normal" responses to situations, we're able to actively change our responses. So, where do our "normal" responses come from?

The idea of conditioning is that learning is based on the stimulus-response interaction (Resnick & Ford, 1981; Berkeley, n.d.). The more times the stimulus is associated with a behavior, the stronger the association becomes. The mechanism of conditioning depends on the situation. At times, we're responding to the stimulus itself (classical conditioning). Other times, we're more concerned with the result of the response (operant conditioning). Each type helps explain what is happening in the brain as part of the learning process, but all forms of conditioning are deterministic. Simply put - the individual has no say in whether or not they are conditioned into learning a particular behavior (McLeod, 2018).

There are situations in which conditioning an individual to form an automatic response to a stimulus is appropriate - the military being a prime example of needing a workforce to act instantly and efficiently (see Duhigg, 2012, chap. 3.1 for another example: classical conditioning in the NFL). With this in mind, habits are formed by conditioning. The habit loop - cue, routine, and reward (Duhigg, 2012) - is a clear sequence of a stimulus causing some behavior to reach a goal. The routine - the habit itself - can be learned, and conditioning can be the vehicle for learning.

I see Skinner and Thorndike's fingerprints in schools. Thorndike was convinced that learning was rooted in simply forming a strong enough stimulus-response bond in the subject's mind (Resnick & Ford, 1981). Strengthening those associations required practice. Lots of practice. So much so that he argued that schools simply needed to provide students the correct bonds, in the correct order, and in the correct quantity for them to learn the material. Skinner (1937) agreed that repeated interaction with a stimulus (or an idea) would form strong associations for learned behaviors, but he was more interested in the consequences of the behavior and its effect on learning (Cherry, 2020). Learning was a matter of finding the best reinforcement or punishment to form associations between a stimulus and desired response.


I work in technology and the great edtech promise of the last 15 years has been "technology will engage your students in learning." Audrey Watters (2021) notes Skinner's influence in educational technologies that persists today:

If behavior was controlled (and controllable) by the environment, then what better way to make adjustments to individuals — and, as Skinner imagined, to all of society — than by machine....And that is a legacy that is foundational for education technology. It’s not where the story of teaching machines begins, but it’s almost always how the story of teaching machines ends: deeply intertwined with Skinner and with his psycho-technologies. It is a foundation from which education technology has never entirely broken.

If life is a collection of habits acting out, where is there room for understanding? If our behaviors are driven by our own habits, we could infer that the end goal of schooling is to receive a passing grade. Students are given skills to practice and their performances on their tasks are rewarded. Students have learned that the grade is the desired end, not understanding.

But why? Particularly in the realm of learning, behaviorism and conditioning on their own are not enough to describe how we learn, let alone why we're driven to learn. Moore (2011) asks why it is "that behavior should be explained without directly referring to mental processes." I want students to develop good habits, some of which are taught through content. At the same time, I want a learning environment where students are able to connect ideas to one another and truly form new knowledge.

Resources

Berkeley Graduate Division. (n.d.). Behaviorism. Graduate Student Instructor Teaching Resource Center. https://gsi.berkeley.edu/gsi-guide-contents/learning-theory-research/behaviorism/

Cherry, K. (2020, June 3). What is operant conditioning and how does it work? Verywell Mind. https://www.verywellmind.com/operant-conditioning-a2-2794863

Duhigg, C. (2012). The power of habit: Why we do what we do in life and business. Random House.

McLeod, S. (2018). Classical conditioning. Simply Psychology. https://www.simplypsychology.org/classical-conditioning.html

Moore, J. (2011). Behaviorism. The Psychological Record, 61(3), 449-463. https://doi.org/10.1007/BF03395771

Resnick, L. B., & Ford, W. W. (1981). The psychology of mathematics for instruction. Routledge.

Watters, A. (2021). The Engineered Student: On B. F. Skinner’s Teaching Machine. The MIT Press Reader. https://thereader.mitpress.mit.edu/the-engineered-student-on-b-f-skinners-teaching-machine/

My Leadership Lens

This year, part of my job has taken on a functional team-leader role. I'm not a leader in the sense that I'm evaluating my team members, but my role has started to become more directive as I work to align professional development across programs within the district. The Instructional Technology team has always been an active arm of PD within the district and the team looks to me to set direction, advocate at the district level, and assign tasks as needed.

However, I'm not a district administrator. I'm essentially a teacher-leader with a seat at the admin table to bring perspective and help plan staff training opportunities with other district leadership. I've approached this position tentatively, which has led to some frustration. Given my non-admin role, I don't always feel like I have the station to speak as a peer with the leadership team. At the same time, there are programs and systems-level challenges I feel I can speak to as a problem solver and leader within the district.

Over the last several months, I've thought about how I can change my thinking and my working habits to tackle some of these challenges. I started by reading Leadership on the Line, which provides insight and perspective on how to effect change in large organizations without burning out or being forced out (systems resist change). I felt like I needed to find ways to confront problems and bring uncertainty forward so we could work to find solutions.

On paper, that sounds great. Leaders effect change. But it also made me feel like I was always in conflict, which hurt my job satisfaction and made me question whether or not I was really capable of taking on more leadership responsibility.

Last week, I had the chance to bring the instructional coaches to a team dynamics workshop where we looked at our own work habits and considered how they affect one another and our team as a whole. I've done these things in the past and I have to say, this one felt different. The facilitators made sure we all understood that people are complex and that patterns are simply indicators, not 100% accurate all of the time. It felt like we were people, not numbers or "types" to dissect.

I realized that my problem was that I wanted to make systems-level changes before I really understood how to work with the coaches I see day to day. I started to see that my focus on leadership as a way to change systems at the top came at the expense of making sure my team was as effective as possible.

We're early in the semester. I've already spent time digging into some of the tools I have access to as a result of the workshop and I'm considering how I can better facilitate relationships with the coaches as a team and not just as a group of people who do similar work. I want to make sure each of the coaches I'm leading - even if it feels "unofficial" - is equipped to do the best work they can when they're with administrators or teachers out in the buildings.

My lens was focused on the wrong place. I still need to bring challenges to the district level, but that's only half of the work. Taking on the "change-maker" attitude isolated me in my work and made it hard to see how the team I'm leading can be part of the solution rather than just highlighting problems.

Should We Condition? - Weeknotes for 2023-01-23

Here are some interesting things and half-formed thoughts from this week:

Shorts

A little longer

I'm taking a graduate course this semester on the psychology of learning. The class just started, so we're doing some basic psych on methods of learning and some research background. Last night, I was reading some of the early research, much of it focused on classical conditioning. In my notes, I wrote:

Conditioning is often presented as a way to achieve behaviors automatically. If those behaviors are things like self-regulation and awareness, is that a bad thing?

If we condition students to calm themselves at a sound (like the meditation bowl thing) have they learned self-regulation? Or are they simply responding to a stimulus out of habit?

Am I interested in forming habits which take over in specific situations or forming students who are aware of themselves and then choose the habits they want to develop?

I have a lot of thoughts floating around that I need to consolidate. This is a difficult subject. Here's the note in context.

This Was Written with Helix

I'm writing this blog post using Helix. This is the first time I've really committed to using a terminal-based text editor, focusing on keyboard commands for working in the file.

This is definitely a stretch for my brain, but here are a couple things I like about Helix:

The Helix command palette view, including descriptions and shortcuts for each action

Action chords pop up on the right

The hardest part, by far, is training my fingers to move around the editor with h, j, k, l instead of the arrow keys. I've started watching some YouTube channels on using terminal editors and it's incredible how fast experienced users move. I'm hoping to get there eventually.

Anyways, this is a really small chunk of what I've learned after a couple hours of use. I'll probably post an update after a week or two of more focused use with more learning.

I've Moved My Blog...Again

Welp, I've done it again. I decided to go back to static instead of using WordPress. I want to explain why I made that choice and go over what I'm doing now.

Earlier this year, I decided to move from managed hosting to a Linode VPS. I've been doing more app programming in Python and my shared hosting plan - while great - didn't let me use runtimes other than PHP. There are a couple projects I wanted to make publicly available, so moving to a new host was pretty much my only option.

Instead of paying for two hosts, I've also cut down on the number of things I'm running, focusing instead on the important things. The Linode plan I can afford is very low powered and after experimenting for a little bit, WordPress is a little too heavy for my modest storage, RAM, and bandwidth limits.

Going Static

Several years ago, I wrote about using a static site generator and then about how I was moving back to WordPress. At the risk of losing the game by switching again, well...I'm switching again.

Last time didn't work well because I didn't have a good process for writing and deploying the site. This time around, I've gotten some great help from people I've found on Mastodon to get a workflow that works well for me. For example, last time required maintaining a full repository of writing on GitHub. Jekyll is also built in Ruby, which I don't know. It was difficult to know where things were in the publishing process, and in the end, it was all hosted by GitHub, not me.

This time, I've decided to use Pelican, a static site generator written in Python. It was important to me to have a better idea of everything from the post creation to how to create my own template. It's quite simple - I write posts in Markdown and Pelican spits out a directory of HTML. No magic code repository required and no wondering where the HTML actually is.
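To give a sense of how little configuration is involved, here's a minimal `pelicanconf.py` sketch - the values here are illustrative, not my actual settings:

```python
# pelicanconf.py - a minimal Pelican settings file (illustrative values)
AUTHOR = "Brian"
SITENAME = "My Blog"
SITEURL = ""            # left empty during local development

PATH = "content"        # Markdown posts live here
OUTPUT_PATH = "output"  # Pelican writes the generated HTML here

TIMEZONE = "America/Detroit"
DEFAULT_LANG = "en"
```

From there, running the `pelican` command against the content directory builds the whole site into the output directory as plain HTML.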

Pushing to the server

In learning about how to manage a new workflow, I kept seeing reference to rsync. I didn't know what it was and hooo boy, now that I know, my life has changed. This is one of those truly magical pieces of software that you wish you'd known about years and years ago.

In short, I can do everything - all my writing, templating, and publishing - on my computer without having to push anything up to the server. I can see the generated site to make sure all is well before pushing. Once it's done, rsync can take the created files and push them to the server efficiently and quickly. Since it's all HTML, there isn't anything different here than on the server, so the result is a carbon-copy.
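The deploy step is essentially a single rsync call. As a sketch, it could even be wrapped in a small Python helper - the paths and server address below are placeholders, not my real setup:

```python
import subprocess

OUTPUT_DIR = "output/"                      # Pelican's generated HTML (assumed path)
REMOTE = "user@example.com:/var/www/blog/"  # hypothetical server destination

def build_rsync_command(src, dest, delete=True):
    """Assemble the rsync invocation: -a preserves permissions and
    timestamps, -v is verbose, -z compresses in transit, and
    --delete removes remote files that no longer exist locally so
    the server mirrors the local build exactly."""
    cmd = ["rsync", "-avz"]
    if delete:
        cmd.append("--delete")
    return cmd + [src, dest]

def deploy(src=OUTPUT_DIR, dest=REMOTE):
    """Push the generated site to the server."""
    subprocess.run(build_rsync_command(src, dest), check=True)
```

The `--delete` flag is the part worth pausing on: it keeps the server from accumulating stale pages, but it means the local output directory is the single source of truth.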

All this to say: my site feels more resilient to change than it did with WordPress. When I migrated the WordPress version of my blog, I had to copy source files, copy directories, get a database dump, and then reset the nginx server to handle incoming requests to WordPress. With this, I can copy and paste the entire output directory here - or anywhere - and it'll Just Work.

Changes

Will this be my last blog switch? Who knows. I'm happy with this because it's easy to write and manage and requires next to no resources on the server, which means I have more space and resources to play with the fun stuff.

One side effect of static is that commenting isn't embedded anymore. I have copies of all the old comments from WordPress and I'm going to slowly move those back into the posts as text. If you have something you want to contribute or push back on, you can send an email any time to brian@ohheybrian.com and I'm happy to pick up a conversation.

Side note...

Many thanks to AlexMillerDB and Benjamin Hollon for helping to find some hidden wonk in the redesign. Some things are incremental fixes (link syntax, in particular) and a couple others were to improve usability, especially on mobile.

Yes, You Can Use ChatGPT with Students

We got talking about ChatGPT yesterday at work and surprisingly, none of us have really been asked by teachers to block it from students. It could be because they haven't heard about it yet or because we're in final exam week and students aren't doing a whole lot of work aside from wrapping up for the semester.

Feelings among the coaches were mixed. We understand the anxiety that comes from the publicity ChatGPT has drummed up and from some of the examples we've tried out. At the same time, if you actually try using it yourself (requires a login), you'll quickly discover that it isn't as scary as it sounds.

A tool is a tool

At the end of the day, ChatGPT is a computer program which takes in a question and gives back a human-ish response. You can ask about anything (for the most part - it couldn't tell me about myself even though I'm on The Internet) and the site will give you a response summarizing the thing. The summaries were okay and, I will admit, the code samples were cool to see created on the fly.

But here's the thing - it's a summary machine. It gives these responses based on information it has already been given. If you're a teacher or student looking for an interactive method for summarizing information, this is a great tool because it can take natural language prompts ("Tell me about the solar system") and provide information quickly.

Finding teachable moments

If you're not looking for summaries and are more concerned about students making less-than-genuine submissions to your assignments, don't lose any sleep. The responses from the machine are very dry and are easy to spot. If you're taking time to actually read what's coming in, you'll be fine.

For other, less subjective submissions, here are some ways you can use ChatGPT to push your assignments up toward Synthesis and away from Knowledge on the Bloom's spectrum:

ELA

Since it's a summary machine, consider generating a summary via ChatGPT on your own and then use the response as a close-read and editing activity with your students.

  • Is the summary factually correct?
  • Is there extra, unnecessary information that can be removed?
  • Is there context that should be added?

I also gave it a short prompt to write a story and it gave back a passable response. It was creative in the sense that it followed the prompt ("Write a story about a penguin named Sparky who moves to the rainforest.") and gave a story with a start, middle, and end. If your students do this, here are some questions to ask:

  • Who owns a story once it's written? The person with the idea or the writer?
  • Can this be edited to have a better story arc?
  • Is the resolution satisfying? What makes a satisfying resolution?

STEM

One of the big breakthroughs with this model is the ability to generate code samples on the fly. Learning to code can be frustrating because we might not always have the mental model to do what we need to do. Giving ChatGPT a prompt like, "Write a program which generates random odd numbers in python" will give you a working program. Use this as a starting point:

  • Is this the best way to accomplish that task?
  • Can you refactor it into something more concise?
  • How would this type of program be useful?
  • If you work for a company and you use code from ChatGPT, who owns it?
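For a concrete sense of this exercise, here's roughly the kind of program the model produces for that prompt, along with one possible refactor - the function names and style are my own reconstruction, not an actual ChatGPT transcript:

```python
import random

# A verbose, ChatGPT-style answer: draw numbers and reject the evens
def generate_random_odd_numbers(count, low=1, high=99):
    numbers = []
    while len(numbers) < count:
        n = random.randint(low, high)
        if n % 2 == 1:  # keep only odd draws
            numbers.append(n)
    return numbers

# One possible refactor: step through only the odd values directly
def random_odds(count, low=1, high=99):
    # low | 1 bumps an even lower bound up to the next odd number,
    # and a step of 2 then lands on odd values only
    return [random.randrange(low | 1, high + 1, 2) for _ in range(count)]
```

Asking students why the second version never has to reject a number is exactly the kind of refactoring discussion the questions above invite.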

Math

Wolfram Alpha has had an equation solver for a while, but this goes a little further because you can ask ChatGPT to validate a proof or equation. I tried giving it a proof that "proves" 1 = 0 by including a subtle logical fallacy. The machine tells me it isn't valid, but it does a poor job of explaining why.
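For anyone curious, a classic proof of this shape - not necessarily the exact one I gave it - hides a division by zero:

```latex
\begin{align*}
a &= b \\
a^2 &= ab \\
a^2 - b^2 &= ab - b^2 \\
(a+b)(a-b) &= b(a-b) \\
a + b &= b \qquad \text{(dividing both sides by } a - b = 0\text{)} \\
2b &= b \implies 2 = 1 \implies 1 = 0
\end{align*}
```

Spotting the invalid step is the whole exercise - which is exactly what the machine struggled to articulate.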

  • Provide students an explanation of what is happening (created by ChatGPT) and then improve it.
  • Give students a challenge and ask them to validate their responses using the AI.
  • How could using AI to evaluate mathematics change the way we think about math?

Social Studies

I asked ChatGPT, "How does the geography of the United States contribute to its political climate?" to see how well it could synthesize a response. It gave me a five-paragraph essay which kind of danced around a coherent answer, but failed to really make a solid point.

  • Have students generate a response and then use that information to defend a position they hold in a debate.
  • Provide students with a summary and use it to research other contributing factors.
  • Create quick summaries of historical figures to reference in discussions.

World Language

I'm not going to lie, this one was cool. Google Translate already exists and we know students use it. ChatGPT differs in that I could prompt it for regional dialect and formal vs informal responses. It will even provide phonetic responses if requested.

  • Quickly generate prompts for students to translate or to analyze in class (less work on you)
  • Compare and contrast different methods for translating a piece of text
  • Summarize rules for translation for reference

What do we really want to teach?

ChatGPT is a summary machine. It can be used as a way to quickly get information to use as a starting point, and that's the key. It does not cite its sources, and that's where the teaching comes in. Evaluation and synthesis are the next steps to actually do something with what we know. This is an emerging tool and we don't know what wider impact it will have in the future. For now, I would recommend thinking deeply about what you want to teach and how a powerful source of summaries could be used.

Don't forget that it is still susceptible to errors. A calculator will always give an answer, but that doesn't mean it is the correct one. Teach students to develop critical habits and to check what they're given to make sure it's factually correct. Train them to look for errors by inviting them to challenge ideas and ask questions.

ChatGPT is impressive and AI is only going to become more impressive. Take some time to think about the implications of these tools as they relate to teaching practice. What kinds of questions are worth asking?


Comments

John Sowash

I always appreciate your thoughtful articles, Brian! I’m still trying to formulate my opinion on Chat GPT and related AI tools.

The Red Ranger Rides Again

This is a story about one of those life events that seems routine until it turns upside down real quick.

The Red Ranger herself

It started as a routine clutch replacement. Mine started rattling a little while idling, so I decided to go ahead and change it before it gave out entirely. This is the third or fourth clutch I've done, so I wasn't worried about it and went into the weekend expecting to take at least a full day of work, maybe a couple hours into the next.

All was going well until we started to remove the transmission. My brother in law was under the car and I heard a low, "Oh dear." That's when the adventure started.

You should not be able to see those bolt threads.

You should not be able to see those bolt threads.

The casing where the transmission mounts to the body of the car had broken at some point. This is a Bad Thing to Happen because if the one remaining bolt had broken, the entire engine and transmission of the car could have rotated backwards and essentially torn itself apart.

There are two options in this situation: replace the car or replace the transmission. The benefit of replacing the transmission is that it is much less expensive than replacing the car, so we went for that option. And that's where another snag popped up.

My car is sort of a chimera. The original transmission (5-speed manual) is well known for a bad bearing, which causes catastrophic failure. So, mine has a 6-speed, replaced by the previous owner. It's a more robust transmission and is highly sought after. I wanted to keep the six-speed, but we couldn't find an economical option (rebuilt transmissions can start at $1,800). And this is when we discovered - and ultimately went with - option three: find a five-speed transmission and take the portion with the engine mount and put it on the six-speed.

We headed off to the junkyard and - miraculously - found a wrecked car with an undamaged five-speed manual transmission still in the engine compartment. Jackpot.

Two days later, I was the proud owner of a junkyard transmission and a broken six speed. It was time to hybridize parts yet again.

As complex as these machines look, they're still simple enough to repair.

Problem three: get into the transmission. These are complicated machines that work so well, their overall design hasn't really changed since cars were invented. A series of levers slide collars over gears which send power to the wheels. The power of the Internet confirmed that we were on the right track and we even found a mechanic with a GoPro who showed exactly how to take the transmission apart. So, we set to work.

A couple hours and a broken tool later, we gave up for the night. I went home, questioning every decision I've made to this point. I woke up after a fitful night of sleep and decided it was time to just take the things somewhere. I called nearly every transmission shop and mechanic in the region, but no one wanted to take on such a daring project. Most transmission shops don't even touch manual shift vehicles anymore, which is a bummer. They're repairable! I'll save those comments on American replacement-ism for another day.

Late that morning, I caved and called the dealer service department. Their transmission engineer was willing to give it a shot. With a burst of hope, I loaded everything into the van and drove down to drop things off.

The next day, the shop called and the transmission tech decided it wasn't something he wanted to mess with, either. I asked that he simply remove those top gears so I could dive into the belly myself, which he agreed to. Back to square whatever-we-were-on-two-days-ago.

At this point, I had resolved to buckle down and do the work. After several late evenings and a full Saturday working with my brothers-in-law, we had the transmission repaired and reassembled. The most difficult part, frankly, was getting the thing (it weighs ~90 pounds [41kg] assembled, so it isn't small) back into the tight engine compartment. But we managed it and I could see the light at the end of the tunnel.

Another late night, alone in the garage, and the car was nearly done. After I hooked up the battery, I decided to start the engine before putting the wheels back on. All had gone well and I was looking forward to being home before 11PM.

But.

The car wouldn't start.

The good news is that a non-starting car is not usually due to the transmission. So, something was wrong, but it probably wasn't the big something we just fixed. I started poking and prodding with a multimeter to find the culprit.

The battery was good. The starter was good. I had ground between the battery and the body of the car. The dashboard would light up and I could even get the radio going. So, there was a short somewhere, but I couldn't find it on my own. Generally, you need to be checking power while someone tries to start the car.

I got one of the brothers-in-law back over to help out and we found a ground wire with a slightly loose connection - loose enough that the starter couldn't pull sufficient power from the battery. Those connections matter a lot, especially in the high-current surge needed to turn the engine over cold.

At 10PM on Monday night, the car started for the first time in over a week. The clutch is smooth and the gear shifting feels like butter. It's driving well and I feel like we can easily get another 100K out of this little sprite.

I could write a trite thing about time and patience, both of which were stretched in this little project, but I won't. This was just one of those life things that will help me remember that big projects are doable when you break them down, rely on the expertise and wisdom of others, and get your hands dirty to solve a problem when one comes up.

Mastodon is Not a Twitter Replacement

The great Twitter Migration is causing all sorts of hype around Mastodon, but, in my opinion, for the wrong reasons.

People disillusioned with Twitter since Musk took over are flocking to Mastodon to the tune of hundreds of thousands of registrations per day. That is astronomical growth for what has been, for the last several years, niche communities of people around shared interests.

If you've never used it, the look and feel is very close to that of Twitter. Your timeline takes up the majority of the interface, you use "@" usernames to mention people, and you can follow topics using hashtags. There are some nuances in the differences between a singly-owned space like Twitter and the interconnectedness of individual Mastodon services (instances), but for the most part, the look and feel is similar.

I'm definitely not the first to say Mastodon is more comparable to email than it is to Twitter in terms of system structure. With email, you pick a home - @yahoo.com, @gmail.com, etc. Mastodon is similar - your username is linked to your home. For me, I'm brianb@fosstodon.org.

Looks are deceiving

With federated spaces, your home is part of your identifier. Instances look the same, but the community guidelines, norms, and expectations can vary widely. The real value of Mastodon lies in the expectations of the people within the community. This is where the email analogy breaks down and where most articles about Mastodon fall short.

Email is federated - different services talk to one another using a shared set of rules for communicating. But I don't see what other people on gmail.com are saying (nor do I want to). Mastodon provides this structure but in a social media context.

Mastodon is federated for a specific reason - a single entity setting the rules for everyone is usually not the best way to go. Each instance is able to set its own expectations, and the people running it have tools to moderate the space. When you join an instance, it isn't just a place to post. It's a community you are joining intentionally.

Community and trust building

Joanna Stern has a simple overview of Mastodon on the Wall Street Journal and she touches on the difference of trust as capital in a new social media paradigm:

“We’re in a big trust exercise,” said Jennifer Grygiel, a communications professor at Syracuse University. “Is the server in some rando’s closet maybe better right now than Elon Musk’s Twitter?” Prof. Grygiel suggested looking for different trust signals on Mastodon, including servers with larger populations and those pitching more supportive communities.

Joanna Stern, Don't have 55 Billion for your own Social Media Network? Try Mastodon.

There's trust in the people who are running the instances. Sometimes it's a single administrator running a public instance; other times it's a team of people. Either way, you're shifting your trust from a corporation (hoping it won't be bad) to an individual or small team of people - all of whom have names and faces within your community - to act in everyone's best interests. In return, as members of the community, we act in accordance with the community norms.

Many instances have had some growing pains this week as new registrations flooded in. Some technical pains, but many more cases of culture clash. Hugh Rundle makes a good point in his post:

It's not entirely the Twitter people's fault. They've been taught to behave in certain ways. To chase likes and retweets/boosts. To promote themselves. To perform. All of that sort of thing is anathema to most of the people who were on Mastodon a week ago.

Hugh Rundle

When you join a community, take time to make a good introduction and then spend some time looking at your local timeline. See who pops out and follow them to begin curating your own Home feed. Once you feel comfortable, start searching hashtags for your other interests to find people outside your home instance.

Most importantly, take time to listen and get the vibe. The time it takes to de-Twitter extends well past when you shut your account down. Resist the habits of interaction developed on Twitter because they don't fit well with the structure of your Mastodon instance.

“This is a little bit more complicated. But in the long run, for people who are interested in a more community-oriented space, I think it is very much worth it.”

Condé Nast

Slow down and really take some time to rethink how - and why - we spend time in these spaces in the first place.


Side note, the header image for this post was generated by the Stable Diffusion image-generation AI with the prompt, "a person running away from a giant blue bird in an impressionist style."

More Flexible Test Databases in Flask

I've been longing for an easier way to manage test data in Flask. Specifically, when running automated tests, I wanted an easier way to populate a database with some known values which would then be used in the tests themselves. This turned out to be trickier than I thought, but I learned a bunch along the way and I'll share that process in detail here.

Why I needed test data

I tend to focus on integration tests - I'm interested in how the application takes in requests and returns a response. Having test data in my database allows me to define test results easily. I know what types of responses I should be getting from each route and dynamically loaded data from a JSON file allows me to quickly define those results over and over.

Up until this point, I would create database objects like normal, using a model constructor:

import unittest

from myapp.extensions import db  # or wherever your SQLAlchemy instance lives
from myapp.models import Event, User


class MyTestClass(unittest.TestCase):
  def setUp(self):
    user = User(name="My name", email="myname@example.com")
    user2 = User(name="Another name", email="another@example.com")

    event = Event(title="Some event")

    db.session.add_all([user, user2, event])
    db.session.commit()

  # the rest of the tests

class AnotherTestClass(unittest.TestCase):
  def setUp(self):
    # do the same thing all over again...
    ...

The problem with this is that it is extremely repetitive. Each test (or each TestCase instance) has its own database declarations which have to be loaded when the test is run. That means I'm either typing each record for each test or I'm copy/pasting items in between tests. If my routes ever change I have to change each instance of the test as a result, which is no fun.

Using libraries

I came across two libraries, but neither really solved my problem, each for a different reason.

Flask Fixtures is a library which allows you to run unit tests based on JSON representations of your data. It takes in a list of JSON files and then populates an in-memory sqlite database. I tried this method, but the library hasn't been updated in several years and didn't play well with Flask's application factory pattern.

Factory Boy also came recommended, and while tempting, I needed consistent data in memory to run tests against. That said, I'll probably come back to Factory Boy for generating large data sets where I have more freedom in how I test.

I like the pattern of using JSON to populate a test database on the fly. I ended up writing my own, much simplified, version of Flask Fixtures.

JSON structure

I followed the pattern in Flask Fixtures because it provides a clear, extensible way of loading data into the application.

[
  {
    "table": "user",
    "records": [
      {
        "id": 1,
        "name": "Admin",
        "email": "admin@example.com",
        "usertype_id": 1,
        "location_id": 1
      }
    ]
  }
]

Each file can be expanded as necessary, adding new items or new files to expand the test database scope on the fly. These files live inside /test/fixtures in my project tree.

Dynamically loading test data

Instead of defining database records at the start of each test, I now define records in JSON files which can be loaded on demand within a test or set of tests. The biggest change in my app structure was to handle application context appropriately.

A new Loader module is created with the current application instance, database, and a list of fixtures to load into sqlite. The module only runs within the current context, so I can control when loading happens within the individual tests, even loading data after the setUp function has run.

import json
import os
import unittest

from sqlalchemy import Table

from app.extensions import db

# Assumed imports for the application factory and test configuration;
# adjust the paths to match your own project layout.
from app import create_app
from app.config import TestConfig

class Loader(object):
    """
    Reusable class for loading fixture data into test databases.
    Initialize with an in-context application and database engine.
    """

    def __init__(self, app, db, fixtures):
        self.app = app
        self.connection = db.engine.connect()
        self.fixtures = fixtures
        self.metadata = db.metadata

    def load(self):
        for filename in self.fixtures:
            filepath = os.path.join(self.app.config["FIXTURES_DIR"], filename)
            with open(filepath) as file_in:
                self.data = json.load(file_in)
                self.load_from_file()

    def load_from_file(self):
        # Each fixture file may define more than one table.
        for entry in self.data:
            table = Table(entry["table"], self.metadata)
            self.connection.execute(table.insert(), entry["records"])


class MyTest(unittest.TestCase):
    def create(self):
        self.app = create_app(TestConfig)

        # Build the database structure in the application context
        with self.app.app_context():
            db.init_app(self.app)
            db.create_all()
        return self.app

    def setUp(self):
        self.app = self.create()

        # Set up the application context manually to build the database
        # and test client for requests. Keep a reference to the context
        # so it can be popped in tearDown.
        self.ctx = self.app.app_context()
        self.ctx.push()

        self.client = self.app.test_client()

        # Include any data to be loaded into the database
        fixtures = [
            "events.json",
            "users.json",
        ]

        # Now that we're in context, we can load the database.
        loader = Loader(self.app, db, fixtures)
        loader.load()

    def tearDown(self):
        db.session.remove()
        db.drop_all()
        self.ctx.pop()

Main takeaways

My biggest frustration was figuring out application context. This update to my test runner included moving to an application factory pattern, so I had to rethink how everything ran from the ground up. In the main application, context is handled by the create_app function and I didn't have to think about which context was active. In the tests, that has to be done manually for each instance. Moving the app context into setUp ensured only one context was used at a given time.

Being self-taught, I do my best to apply best-practice principles like "don't repeat yourself" (DRY). The repetition was especially noticeable as the number of tests increased, so I'm happy with this solution. There's still some boilerplate for each test case, and one of my goals is to wrap that up in a unittest.TestCase subclass so I can simply inherit the boilerplate rather than type it out. I'm still working on the best way to do that for my use case.
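As a sketch of where I'm headed, the boilerplate could live in a shared base class that individual test cases inherit from. This assumes the create_app, TestConfig, db, and Loader names from the code above; it's an idea, not the finished design:

```python
import unittest

# Sketch only: create_app, TestConfig, db, and Loader are assumed to be
# importable from the application package described above.

class BaseTestCase(unittest.TestCase):
    # Subclasses list the fixture files they need.
    fixtures = []

    def setUp(self):
        self.app = create_app(TestConfig)
        self.ctx = self.app.app_context()
        self.ctx.push()
        db.create_all()
        self.client = self.app.test_client()
        if self.fixtures:
            Loader(self.app, db, self.fixtures).load()

    def tearDown(self):
        db.session.remove()
        db.drop_all()
        self.ctx.pop()


class EventTests(BaseTestCase):
    # All the context and fixture handling happens in the parent class.
    fixtures = ["events.json", "users.json"]
```

Each test case would then declare only its fixtures and its test methods.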

This is probably one of the more complex problems I've had to solve on my own. The application itself is just a layer to interact with database records, so the logic itself isn't too complex. Writing my own module to handle the automated work was new and I'm happy with the result. I'm hoping to be able to expand on it and eventually (maybe?) package it up into something I can import and use in some other projects. But that's another task for another day.

Dune Hikes

In September, it was my family's turn to battle COVID-19. We're not sure who brought it home first, but it hit me first. Hard.

Not as hard as a lot of other people, but I was flat for a couple days battling high fevers and severe muscle and joint aches. Luckily, I didn't have any of the breathing issues that have hurt so many people. Following me, my wife and kids all came down with positive tests, so we settled in for - what we hoped - would be a relatively short infection period.

Unfortunately, our tests lingered positive. We stayed away from family and friends. We weighed how to safely get groceries (we're pretty rural, so delivery isn't really viable), and how to pass the time. The hardest part was when we were all feeling better, but still testing positive.

We ended up finding ways to be outside together away from people. We took trips to local creeks and parks. We spent time working in the garden on the farm, preparing for winter. One of the best trips was to Warren Dunes State Park about 30 minutes north of here.

The sand stretches in every direction.

The kids hadn't ever been to the dunes, so this was a treat for us. In the parking area, there is a monster dune immediately across the road and they started running to the top. By the end, they looked like Everest hikers, stopping every couple of steps to catch their breath due to the actual height and an incredibly steep grade.

Warren Dunes is nearly 2,000 acres and has public camping access, so there are trails all over the dunes to different campsites. They weave down, in, and around natural trails through the scrubby dune grass so you move from full sun to shady and change elevation quickly (if you want to). There are also several stretches that cross the ridges of adjoining dunes, so you can move across the area with amazing views out across the lake.

We chose to stay on top instead of going down and then back up.

We've lived in small-child land for a very long time. This was the first time our three year old was almost as independent as his older sisters. He wanted to walk, slide down hills, and climb with the older kids. My wife and I were actually able to walk and talk together while the others explored.

I think this is the start of the next phase of family life - one where we can begin to set and break boundaries at the same time. It also makes me realize how many of the childhood memories I have were created by my parents - and we need to do the same. We're looking forward to protecting our family time so we can get out and explore more frequently.

No agenda other than showing up.

Dealing with Bad Questions

...not all questions are created equal and some questions inhibit learning.

Source: Yes, There is Such a Thing as a Bad Question — Teachers Going Gradeless

This is a great look at how our practice of teaching needs to change if we want students to think differently about school. The author writes from the perspective of doing this as a result of going gradeless, but the same habits of instruction can be used to make the same shift if gradeless isn't an option.

...I find that students approach me with a different question; “Can you show me how to do this?” Since my students are self-assessing their work regularly throughout the term, they always have a strong grasp of where they have shown understanding and where they are developing their understanding.

The same thing often happens in standards-based approaches to grading or even just focusing on feedback to drive your interaction with students. The critical component is to hone your own responses to questions to point students back to the main goal: learning.

Break's End

The end of summer break is always a little weird. I have a shorter vacation than normal because of the nature of my job, but going back to work at the end of July still feels...too soon.

Brian is smiling in the foreground while his family is crouched behind him, selfie-style, behind. They are on the beach with the sunset in the background.

I'm thankful for my breaks. It seems strange in the US to get such a large block of time away from work and I'm getting better at actually taking a break.

I originally titled this "Summer's End," but I'm reminding myself, like I reminded my kids last night, that going back to work doesn't mean summer is over. It just means our schedule changes a little bit.

I'm still thankful for that summer break.

Coming Back Around

This week, my wife pointed something out to me: our entire dinner was either something we grew or something we helped raise. Vegetables from our garden. Pork from working with my brother-in-law and his animals.

Time spent in the ground and with the ground is never wasted.

I was watching a farmer process chickens when learning how to do our own and he said something that has stuck with me.

Something had to die for you to live today.

He meant animals, but the same is true for our vegetables. We work hard to make sure we grow and raise food well so that we can appreciate the value it brings to us when it comes time to harvest. We're starting to understand that more. My kids might understand it better than I do.

We're early in our work to establish a sustainable space and I'm looking forward to when more of our dinners were raised here, at home.

Cloud Power

Summer brings pop-up storms at the end of the day when the humid air starts to cool. They often form over Lake Michigan to the west and slowly move east, reaching up until they flatten.

Looking west from the house, we get to watch these spectacular white towers slide across the landscape. Oftentimes, we hear the rain before we feel it.

Clouds tower in the west.

The late day storms are my favorite because the setting sun is still able to backlight the delicate edge of the cloud, reminding me that storms come and go.

Recent Happenings

We're in the throes of summer and there have been a number of events, ranging from new construction (because there is never enough stuff to build) to beekeeping and birthdays. Here are a few highlights.

This guy turned three.

The honeybees are going gangbusters.

Chores with the animals usually include a pit stop at the swing set to get some wiggles out.

We're building a barn.

I'm not posting to social media much. I post some to Pixelfed and, even less frequently, Instagram.

Adventures in Building an Interactive Apps Script Sidebar

Now that I've finished a rewrite of an application in HTMX, I wanted to see if HTMX could be used to enhance the use of Google Apps Script sidebar interfaces. I build these from time to time at work to help with spreadsheet interaction that goes beyond simple formulas. The idea is to allow for more dynamic interactions in the (very limited) sidebar available through Google Apps Script.

HTMX: This Won't Work

I started by adding HTMX directly and had no luck. The main issue is that HTMX uses XMLHttpRequest (https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest) to fetch data, and that leads to CORS issues. When HTMX starts a request, the browser makes a preflight check because the request includes the upload parameter, which is considered "not simple." This preflight check has to pass for the request to proceed.

The preflight fails because of the redirect between the published Apps Script /exec endpoint and the googleusercontent URL with the actual code. HTMX also adds custom headers to the request, which makes it "not simple" as well, so the preflight check fails with a CORS error.

So, I tried getting around the CORS issue by issuing async fetch calls, but that just adds a layer of complexity rather than solving a problem. It also confirms the fact that browser engineers are much smarter than me because it still didn't work.

All that said, fetching content directly from a sidebar won't work, let alone posting data to the backend.

Hyperscript to the Rescue

Hyperscript is a companion to HTMX which offers similar functionality, just in a different context. It's a lightweight scripting language with Python-ish syntax that you include in a "_" attribute on HTML elements. You can add different handlers and listen for events just like with HTMX, so clean interactions are still possible.

Using Hyperscript, you can take advantage of the google.script.run API to interact with your Apps Script code. This allows you to do anything from simple dynamic content replacement all the way up to accepting input from a user to query or update data in the sheet through the sidebar.

The Hyperscript cookbook has some examples, so let's look at how to implement it in an Apps Script context for some real-world application. All of the code to try it yourself is in this GitHub repo. You can also make a copy of this sheet to get your own version.

Getting started

There are some utility functions we need to get the project started. I'm greatly indebted to Bruce McPherson and his extensive writing on Apps Script project structure. In your Code.gs file, we'll create some global functions to allow us to work more effectively between the Apps Script code and the client.

// Code.gs

// Allow the client to access your Apps Script functions dynamically.
function exposeRun(namespace, method, argArray) {
  var func = (namespace ? this[namespace][method] : this[method]);
  if(argArray && argArray.length) {
    return func.apply(this, argArray)
  }
  else {
    return func();
  }
}

// Utility function to include other files in HTML templates
function include(filename) {
  return HtmlService.createHtmlOutputFromFile(filename).getContent();
}

// Trigger the menu to open the sidebar
function onOpen() {
  const ui = SpreadsheetApp.getUi()
  ui.createMenu('Menu').addItem('Run', 'showSidebar').addToUi()
}

// Display the sidebar
function showSidebar() {
  const html = HtmlService.createTemplateFromFile('template/_base.html').evaluate()
      .setTitle('The sidebar');
  SpreadsheetApp.getUi() // Or DocumentApp or SlidesApp or FormApp.
      .showSidebar(html);
}

I'll also use Bruce McPherson's Promise-based wrapper for working with Apps Script as a starting point. All of our requests will go through this method:

// static/main.js.html
<script>
  var Runner = (function(ns) {

    ns.run = function(namespace, method) {
      let runArgs = Array.prototype.slice.call(arguments).slice(2);

      if(arguments.length < 2) {
        throw new Error('Need at least a namespace and method.')
      }

      return new Promise(function(resolve, reject) {
        google.script.run.withFailureHandler(function(err) {
          reject(err)
        }).withSuccessHandler(function(result) {
          resolve(result)
        }).exposeRun(namespace, method, runArgs)
      })
    }
    return ns;
  })(Runner || {})
</script>

Templates

Now that the boilerplate is done, we need to start defining some worker classes and templates.

To keep things clean, I wrap each of my operations in an IIFE object which defines methods and the templates to return on completion. This means there are more files to manage in the code editor, but each one encapsulates functionality cleanly and is easier to maintain.

Make the Apps Script handler:

// SimpleSwap.gs

var SimpleSwap = (() => {

  const htmlTemplate = () => {
    let html = `
    <p
      class="active"
      _="on click set my innerHTML to 'Clicked!'"
    >Click me!</p>`;
    return html;
  }

  return {
    htmlTemplate
  };
})();

And finally our base HTML template for the sidebar:

// static/_base.html
// This is the sidebar wrapper. Content will be inserted dynamically.
<!DOCTYPE html>
<html>
  <head>
    <base target="_top">
    <script src="https://unpkg.com/hyperscript.org@0.9.5"></script>

    <!-- Our promise-based runner for google.script.run requests -->
    <?!= include('static/Runner.js'); ?>
  </head>
  <body>
    <main>
      <div class="main-container">
        <div class="sample">
          <b>Dynamic insertion and interaction</b>
          <!-- Hyperscript to interact with the Apps Script code -->
          <button
            _="
              on click
              call Runner.run('SimpleSwap', 'htmlTemplate')
              then put the result into #target
              then call _hyperscript.processNode(#target)
            "
          >trigger</button>
          <!-- This receives the result of the request -->
          <div id="target"></div>
        </div>
        <!-- other divs... -->
      </div>
    </main>
  </body>
</html>

This template will:

  • Allow us to access the Runner middleware to marshal API calls.
  • Run a bit of Apps Script code when the button is clicked.
  • Put the result of that code into the #target div.
  • Initialize any Hyperscript included in the returned template, keeping it interactive after the swap.

The key in this method is to think through what interaction you want your template to have and to include that in the hyperscript attribute.

A More Complex Example

Let's say you want to make an Apps Script Extension (formerly "Add-on") or container bound script which allows you to fetch data from an API and then selectively insert results into your sheet. You can do that with Hyperscript inside the Apps Script sidebar quite cleanly. We'll keep the same boilerplate code but define a couple of functions to build a quick sample. I'm going to use the handy Star Wars API as the data source.

Although you can use Hyperscript to fetch data directly (via its fetch command, https://hyperscript.org/commands/fetch/), templating the response in the script gets messy, especially if you want actions on the results of the fetch request. To clean this up, we'll take advantage of Apps Script's UrlFetchApp class (https://developers.google.com/apps-script/reference/url-fetch/url-fetch-app) and another HTML template fragment.

Start by adding a button to a sidebar:

Get Star Wars characters

Now, we'll create our IIFE:

// SWAPI.gs
var SWAPI = (() => {

  // We'll use these sheet params to interact with the spreadsheet.
  const ss = SpreadsheetApp.getActiveSpreadsheet();
  const sheet = ss.getSheetByName('Sheet1');

  const getPeople = () => {
    let request = UrlFetchApp.fetch('https://swapi.dev/api/people').getContentText();
    let json = JSON.parse(request);

    // Apps Script templates can evaluate data dynamically. Call the
    // template fragment and then define a parameter on the object you
    // want to access.
    // https://developers.google.com/apps-script/guides/html/templates#pushing_variables_to_templates
    let template = HtmlService.createTemplateFromFile('template/swapi-list')
    template.people = json.results;

    // Evaluate the template and then get the resulting HTML to return.
    let html = template.evaluate().getContent()

    return html
  }

  const saveName = (name) => {
    sheet.getRange(sheet.getLastRow() + 1, 1).setValue(name);
  }

  return {
    getPeople,
    saveName
  }

})();

And lastly, our new template fragment which handles looping over the results as part of the evaluation step.

<!-- templates/swapi-list.html -->
<!-- Read the docs on Apps Script template evaluation if you
     haven't looked them over before. They can be quite helpful
     https://developers.google.com/apps-script/guides/html/templates
-->
<? for (var i=0; i<people.length; i++) { ?>
  <span class="active"
    _='
      on click call Runner.run("SWAPI", "saveName", "<?= people[i].name ?>")
      then remove me
    '
  >
    <?= people[i].name ?>
  </span>
<? } ?>

To help understand what's happening, we are:

  • Rendering a sidebar with a button to fetch results from a third party using UrlFetchApp.
  • Letting the Apps Script templating engine handle rendering the results.
  • Collecting the resulting HTML string and sending it back to the client.
  • Swapping the HTML into the DOM and re-initializing Hyperscript on the new elements.
  • Adding a clicked name to the spreadsheet and removing that option from the page.

Here's the result:

In the template, we define Hyperscript actions on each element so they also become interactive in the sidebar. Clicking on a name calls the saveName function and adds the value to the next available row in the sheet before removing itself from the sidebar.

Is it worth it?

This seems like a ton of work that could be achieved with out-of-the-box Javascript. So, is it all worth it?

It depends.

If you have minor interactions here and there, it might not be worth adding the extra attributes or taking time to create template fragments. DOM interactions can be pretty simple if you're just fetching and displaying data.

The real power of Hyperscript comes in locality of behavior and in making interaction plain in the HTML and not burying those actions in script files and event listeners. In the advanced example, I think Hyperscript is worth the effort because it is easy to see exactly what interactions exist on which elements.

Apps Script is notorious for weird behavior just because of the platform. Adding Hyperscript as a tool to manage interaction and behaviors can help identify bugs sooner because you - the developer - have a better idea of which interaction causes which behavior in the application.

If you're on the fence, take some time just to play around with simple swaps like I showed in the first example. Once you have the hang of writing behaviors on elements rather than in event handlers, some of the benefits will start to emerge.

Moving from Svelte to HTMX

Last year, I built an event registration tool for our district, mainly to keep track of what professional development we were doing and try to get paper out of the workflow. I chose to use Flask and Svelte for this project and this year, I decided to move away from Svelte and rebuild the application using HTMX. In this post, I'll explain why I decided to make the change and highlight some situations where HTMX just makes more sense for a large app managed by a single person.

Making a Switch

I'm self taught. I started playing with Internety things in the late 00's, starting with WordPress and straight HTML/CSS. I added Javascript slowly and since then, have managed to cobble together some kind of useful tools. Mostly for myself, but some to benefit others.

I had tried diving into React, Meteor, and Vue, all with little or no success. The complexity of the frameworks and the abstraction needed to get stuff to show up on the page went way over my head, especially as a hobbyist. Adding to the complexity was the requirement to use build systems (I know Vue can be added directly with script tags, but that's not where I was introduced to it), which felt untouchable.

Over the 2019-2020 school year, I got what I'll call a "working prototype" of an events registration system published using Firebase, some HTML, and a lot of vanilla Javascript. That pushed me closer to understanding build systems and when Svelte came along, its syntax finally felt familiar - just some template tags with scoped Javascript and CSS. Not too hard to handle.

I had a published version of the registration site by fall 2021 and it worked well for a first "real" project. Python did the heavy lifting in the background and I was able to have a nice interaction on the front with Svelte. So, why the move?

There are two main reasons:

  1. It's essentially an application to create and maintain records. I don't need elements to be running continuously (ie, a video player) while the user does things. Having a Javascript-built application was overkill for the purpose of the site. Rich Harris (creator of Svelte) elaborates on this in a podcast on when to use single-page applications over multiple-page structures.
  2. Large Javascript-based systems are very difficult to maintain. Granted, I'm an amateur and this was my first project, so there are certainly things I could do better to take advantage of these systems, but I found myself reluctant to touch the app for updates because it would take me so long to untangle how everything worked together.

With those issues in mind, that's why I decided to move to HTMX.

How is HTMX Different?

I started following Carson Gross, the author of HTMX (formerly IntercoolerJS) and reading some of his essays, arguing for a return to hypermedia as the main driver of the web rather than Javascript. For my background, this made a lot of sense. Servers exist to serve content. The modern Javascript frameworks were created to allow for non-full page refreshes of content. HTMX blends those two goals, allowing the server to send content to the client and allowing for dynamic and strategic (even surgical) updates to the page the user is on.

This provides instant benefit in several ways:

  • The client no longer has to manage state. The server already knows the application state, so why not just send content that is stateful based on the user session (ie, logged in vs not logged in)? Using Flask sessions makes this very simple.
  • Templating engines can still be used for dynamic content creation. Each template contains the content it needs and doesn't have to re-wire itself with the other items on the page once it loads.
  • Specific elements can be updated as a result of a user action. The user interaction is clean and does not require any full-page refreshes, but without any of the messy Javascript workarounds to make that kind of interaction possible (looking at you, shadow DOM [whatever that means]).
  • Each request returns everything the user needs. You're not required to make more network calls to get the data necessary for the view or template (I think that's what "hydration" is).

With any Create/Read/Update/Delete (CRUD) tool, form rendering is a common task. I want to dive into the difference between doing this in Svelte and HTMX as an example of why I decided to make the move.

Form Rendering

In Svelte, I created a FormWrapper component which would dynamically handle input element rendering and form submission. This pattern was extremely helpful and I actually took some of the principles I learned and applied them to HTMX. The snarls came in determining what actually needed to be rendered.

A simple example is this view: when the user clicks on one of the action buttons, a form is rendered. Depending on the action, the form is different. In Svelte, you need to first render the form and then fetch the fields - two network calls to render. It works this way because Flask is serving JSON - agnostic structured data rather than data which carries all the information it needs in order to render.

Svelte form wrapper

So, a couple hundred lines of code to render a form. Reusability is good, but when it came to editing a form, it got gnarly really quickly. The main drawback was that for the form to even render, it required several more requests to the backend to get the fields necessary. Because the backend was just a JSON cannon, even the data it returned needed to be mapped and filtered into a usable state.

HTMX form rendering

With HTMX, that form is pre-templated and then sent from the server. Instead of a single endpoint to return all the form data, individual endpoints can be used to send back whatever form is necessary as HTML. There is no need to request more information or process the data in the client to make it usable.

Because of Flask's templating engine and thanks to the extremely helpful Jinja partials extension from Michael Kennedy, I was able to mimic the dynamic action where a temporary sidebar is used to load and display the form.

The obvious tradeoff is that there are more files to maintain. But because each file is single-purpose, they're smaller and more focused, which means maintaining is much easier. It's also much more clear what is being returned by each operation rather than firing the JSON cannon and then untangling everything on the client.

The Takeaway

I'm very happy with the decision to make a switch. In the cutover, I've made some incremental improvements that were on the list of "someday" changes, but had felt overwhelming to try and attempt because of all the side effect potential. I'm also not saying that Javascript frameworks like Svelte are a terrible idea - writing this app in Svelte to start helped me learn how to build resilient and flexible backends.

For me, the value in HTMX comes back to maintainability and developer experience. The language of the web is HTML. Svelte was a good entry point for the initial build because its syntax was very close to plain HTML. But in the end, it still relied on Javascript to work and instead of feeling flexible, it felt brittle.

HTMX's approach to adding functionality via HTML attributes is much closer to the surface of normal web structure and gives just as much flexibility for 99% of what I need to do at a much lower complexity. I just enjoy working with HTMX because I can spend less time trying to figure out where an emitted event goes to make an update and more time making the tool more useful for my colleagues. That's a win every time.

Comments

Alan Levine

Thanks for blogging this Brian although my understanding level is hovering at 16% 😉 But something I did pick up on is what I want to be implementing more on some work projects, being able to create filterable tables of data, where you can provide a place to dynamically pare down tables of stuff to ones of interest.

I like AwesomeTables but you have to pay per app. And I tried the Google Dynamic Tables thing but it looked like it was built for something else - I get a sense HTMX might be a way. And like you, I prefer this HTML oriented approach.

Cheers!

Brian Bennett

Oh boy, you’re in for a treat. HTMX has a partner tool called hyperscript which allows for some lite client scripting. It’s made by the same author and it’s pretty much plug and play with HTMX. I’ve got it filtering tables (alphabetical clicking on table headers) as well as an active search on rows in large tables, all right there in the HTML. I’d be happy to send a demo if you think it’d be helpful.

Sort a Google Sheet with Vertically Merged Ranges

This is a quick tip, but if you've ever tried to sort a sheet with vertically-merged rows, you've probably been disappointed. Here's an example:

If you wanted to sort your sheet by Column A, you're out of luck. You can't do that automatically because Sheets doesn't know how to move blocks of unmerged cells even though they're right next to the merged range. Your best bet in that case is to highlight all of the rows in the merged range and drag & drop in the correct order.

Even though the gray line while you're dragging makes you think you can drop a merged range into another, you'll get an error if you actually try to do that.

This won't work for super-large data sets, so try to alphabetize as you go.

You can, however, sort ranges within selections, not just the entire sheet. So I can sort the names within each team to make life easier.

Highlight your range and right click to bring up the menu. At the bottom, hover over "View more cell actions" and then select Sort Range.

You can sort this selection how you'd like to at least have your subset data in the right order.

This is why I typically don't use merged cells when creating large spreadsheets. It makes it harder to move data around the way I'd like to.

That Time I Deleted a Database

I've been hobby programming for several years, and that hobby has spilled over into making applications for staff at school. One of those tools is a website to manage PD events, signups, and documentation. This week, I accidentally deleted that database.

I don't want to downplay how bad this was, but it also isn't on the scale of losing student or financial data. But, it was roughly eight months of events, signups, and more importantly, participation records. Several things went wrong, all of which I should have caught at some point:

  1. My database migrations (history of changes to structure) didn't match between my computer and the server.
  2. I didn't double check the data in the database dump I had made before re-importing it.
  3. I had an artificial deadline in my head and I didn't slow down in the process of the changes.

We're constantly trying to follow best practices when it comes to accessibility. The change I was introducing allows staff to request accommodations on their registration so the presenter knows to do one thing or another. In addition to our general presenter guidelines, this helps make sure everyone's needs are met in an unobtrusive way.

I learned some things...

First, instead of trying to force the migration history to reconcile, I should have slowed down and fixed the root issue. Those migration files are critical to making sure everything moves around in a way that can be reversed and repaired. Instead of reconciling the differences, I made some quick changes to which steps were related to one another, and that created the problem in the first place.

Second, I assumed the database dump from last night was good enough. I didn't check to see that it actually had good data in it. As it turns out, the dump was blank - and that means that when I reloaded it back into the database, I effectively erased everything. Since that was my only backup, there was no chance to roll it back to a previous state. So, now we'll be doing nightly backups as well as dumps right before migrations to make sure we have at least one good copy. I'll also be checking the file directly to make sure it holds information, period.
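The "check the file directly" step can be automated. Here's a sketch of the kind of sanity check I mean, not the actual script: the dump should be non-trivially sized and should contain the tables we know exist. The table names and size threshold are hypothetical examples.

```python
from pathlib import Path

def dump_text_looks_valid(text: str, expected_tables=("events", "signups")) -> bool:
    # An empty or near-empty dump is a red flag...
    if len(text) < 1024:
        return False
    # ...as is a dump missing CREATE TABLE statements for known tables.
    return all(f"CREATE TABLE {t}" in text for t in expected_tables)

def dump_file_looks_valid(path: str) -> bool:
    p = Path(path)
    return p.exists() and dump_text_looks_valid(p.read_text(errors="ignore"))
```

Running something like this right after the dump, and refusing to migrate if it fails, would have caught the blank file before it ever touched the database.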

Third, I wanted to get the update in place. It didn't take too long to make and the code itself wasn't too complex. I wanted to say it was done and be able to move on. Instead, I ended up giving myself about 10 more hours of work to piece information back together and then work out ways to safely reintegrate everything. Deadlines are always flexible, especially when you're setting them yourself.

I did have one stroke of luck in this whole ordeal: when a session is created, a Google Calendar event is automatically created. When someone registers for that session, they are automatically added to that event as a guest. Since the deletion only affected the site database and not the Google Calendar events, I was able to use some quick-and-dirty Google Apps Script to restore users, events, and their registrations. The only thing I couldn't restore was attendance data.

Scripts

The first task was to get all the events on the calendar within a date range. I like using the Calendar v3 API because it gives me access to more properties on each Event that we can throw into a spreadsheet.
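The original scripts were Apps Script, but the flattening step looks roughly like this Python sketch. The field names (`summary`, `start.dateTime`, `attendees[].email`) mirror the Calendar v3 Event resource; this version runs on plain dicts rather than the live API.

```python
from datetime import datetime

def event_to_row(event: dict) -> list:
    # Calendar v3 returns start times as ISO 8601 strings with an offset
    start = datetime.fromisoformat(event["start"]["dateTime"])
    # Join guest emails into one cell for the spreadsheet
    emails = ", ".join(a["email"] for a in event.get("attendees", []))
    return [event.get("summary", ""), start.date().isoformat(), emails]
```

One row per event is enough to rebuild the sessions table and gives you the attendees in a single column to work from later.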

Once I had this sheet, I was able to do things like repopulate all users, extracting their data from the attendees string and using the Admin SDK to look up their name and location:
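The parsing half of that step is simple. Here's a sketch in Python rather than Apps Script, assuming the attendees column holds a comma-separated list of emails; the Admin SDK lookup for names and locations happened separately and isn't shown.

```python
def extract_attendees(attendees: str) -> list:
    # Split the sheet's attendees cell into clean, normalized emails,
    # dropping empty entries left by trailing commas
    return [a.strip().lower() for a in attendees.split(",") if a.strip()]
```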

In this case, I got lucky that most of what I needed was available in the Calendar. Knowing what's available in Apps Script made this a partial loss rather than a total loss.


Systems of Trust and web3 in Education

This post has been percolating in my mind for several months. It started as an opus that sat stale in my drafts because I didn't want to wrestle the mess it was into something readable. Then I pretty much forgot about it.

In that time between writing a fiery essay and not really caring to write at all, I finally settled on why web3 doesn't sit well with me. And it comes down to systems of trust.

Set the stage

For the sake of clarity, let's get on the same page. web3 is a buzzword which posits that the "next" version of the Internet will run on the blockchain. The idea grew out of the rising popularity of Bitcoin and other cryptocurrencies, the NFT weirdness happening, and techno-futurists betting on being part of the "next big thing."

The platform dictates the conversation, and since education and technology are intimately woven together (for better or worse), there are articles and accounts popping up, prompting teachers to start thinking about how to make sure they're ready for the blockchain.

How did we get here?

Understanding how this all works is important to forming an informed opinion. Again, this is a very brief description, but here's a rundown of some of the history.

"Blockchain" is a technology which is essentially a history of something done that cannot be changed. Once something is on the blockchain, it's there forever. The history is public and everyone contributing to that chain can verify the record. The idea was developed in the late 1980's and early 1990's, but didn't really come into play until the release of Bitcoin in 2008. Since then, blockchain technology has been seen as the next big thing in everything from currency to supply chain routing to vegetable freshness. If you're interested in a technical rundown of blockchain, this article was immensely helpful to me.

Bitcoin's big selling point is that you can have secure financial transactions if everyone can see the entire record. In simple terms, a blockchain is a database to which you can only add records, and only if everyone else watching the chain agrees. Instead of relying on a bank to tell you who paid what to whom, a network of computers manages the consensus for every update.

Imagine you're at a soccer game. Instead of one scoreboard displaying the current state of the game, every person in the stadium keeps a scorecard with the game score. Normally, when a goal is scored, the authority (the referees) updates the scoreboard and the game continues. In our imaginary blockchain soccer game, any time there is a goal, every person in the stadium has to agree on the new score.

The idea is the same in the digital space. When a change is made to the blockchain, every computer involved in that network adds to the consensus of that change.
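The "there forever" property comes from each record carrying a hash of the one before it. This toy sketch (not any real blockchain implementation, and with no consensus network) shows why you can't quietly edit history: changing any record breaks every hash after it.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first block

def add_block(chain: list, record: dict) -> None:
    # Each block's hash covers both its record and the previous hash,
    # chaining every block to the one before it
    prev = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    digest = hashlib.sha256(payload.encode()).hexdigest()
    chain.append({"record": record, "prev": prev, "hash": digest})

def verify(chain: list) -> bool:
    # Recompute every hash; any edited record breaks the chain
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else GENESIS
        payload = json.dumps({"record": block["record"], "prev": prev}, sort_keys=True)
        if block["prev"] != prev or block["hash"] != hashlib.sha256(payload.encode()).hexdigest():
            return False
    return True
```

In the soccer analogy, this is each fan's scorecard including a fingerprint of the previous score: one fan can't revise an old goal without every later line on every card disagreeing.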

Systems of trust

Every structure in society is based on trust. I trust my employer will give me a check every two weeks. I trust other drivers on the road will stay on their side of the road. Blockchain technology moves trust away from people and into technology. There are certain benefits to trusting in systems (traffic lights being consistent, for example), but there are also drawbacks, especially when it comes to web3 and education.

At its core, our time with students is based on trust. Parents trust us to make wise decisions for their children. School systems trust teachers to implement the curriculum with depth and rigor. Students trust us to watch out for their best interests. Trust is built into every interaction in the school day and relationships are the backbone for why most of us are teachers.

web3, on the other hand, asks us to trust in the distributed blockchain network. Trust is in the system rather than the person. For a blockchain to work, you need a lot of people (computers, really) to trust one another. In simple examples, it sounds like a great idea. That student actually turned in that assignment.

But when we get into the nuance of some of the proposed benefits of web3, you lose the relationship aspect of teaching and learning completely. Is this their best work? Is this actually their work? We see this already with surveillance tech making its way into classrooms across the country - systems which assume the student is an adversary before the first day of class.

With relational trust systems, there are avenues for disagreement. Dialogue and relational history come into play when we're judging student work. With web3, we're relying on a distributed network of computers to make a judgement call. If that is the wrong judgement (or a false judgement - theft and fraud still happen with Bitcoin), there is no avenue for recourse.

Permanent scars

Trusting in a system which cannot be revised may have some fringe benefits, but at the end of the day, we want our students to grow. We want students to be better citizens today than they were yesterday. With an educational web3/blockchain, any past mistakes are there forever as signposts of bad judgement. Rather than giving students an opportunity to describe lessons and what came of those situations, they're now an open record for anyone to interpret however they want. Systems of trust are better when people are involved, not when they're distributed.

Technology trends sweep through education just like they do through hedge fund and venture capital circles, albeit somewhat slower. This is an important enough topic to read and think through what implications could come as a result of being a part of the "next big thing." Because if we get this wrong, it's going to be pretty hard to go back.


Other reading

I spent a lot of time reading and chewing through resources to learn more about the underlying technology and some of the other problems that make blockchain a questionable solution to many of the problems it claims to solve. Here is a selection you may find helpful.


The featured image is Iterated Function System by Quasimondo found on Flickr. It is licensed under CC BY-NC.