Posts

Grace Hopper Models a Billionth

This is worth watching. When we're describing principles to students - especially concrete thinkers - modeling and finding concrete examples of abstract ideas is critical to developing understanding.

Taking it one step further, scaffold your students to help them come up with the analogy. Or, better yet, challenge them to find the lengths themselves and then create the model.

Related video: modeling the speed of sound vs light with a metronome.
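The demo is easy to recompute. Here's a quick back-of-the-envelope sketch (plain Python; the 100 m distance is just an arbitrary example I chose) showing why Hopper's nanosecond wire was about a foot long, and why sound is slow enough to model with a metronome while light effectively isn't:

```python
# How far light travels in one nanosecond -- the length of
# Grace Hopper's famous "nanosecond" wire.
c = 299_792_458                    # speed of light, m/s
light_per_ns = c * 1e-9            # metres per nanosecond
print(f"Light travels {light_per_ns * 100:.1f} cm per nanosecond")  # ~30 cm, just under a foot

# Sound vs. light across a 100 m field: the sound delay is easily
# audible, while the light delay is imperceptible.
v_sound = 343                      # speed of sound in air at 20 C, m/s
d = 100                            # metres (arbitrary example distance)
print(f"Sound takes {d / v_sound * 1000:.0f} ms to cross {d} m")
print(f"Light takes {d / c * 1e9:.0f} ns to cross {d} m")
```

That sound delay over a football field is why you see the drumstick hit before you hear it; the light delay is roughly a million times shorter.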

A Critical Gradebook

I am finding the right balance of scaffolding to provide the best learning environment for my students.

Source: A Critical Gradebook

The gradebook seems like the most frustrating and under-developed part of any LMS. We use Canvas and have had our own struggles with making the gradebook helpful, not hurtful. Laura Gibbs has more thoughts on that than I do.

The Learning Mastery component of the Canvas gradebook is immensely powerful if you take time to set it up correctly. It's a shift away from singleton points, giving students and teachers a higher-level view of which objectives/skills/standards a student has attained over time. This can be (but doesn't have to be) linked to the student's course grade. Again, my view is to stick with Frank Noschese's Keep It Simple SBG schema.

Translating that is a chore of its own, but I'm hacking away at a helper tool...more on that another time. I think this is where something like an LTI tool can help across multiple platforms, if the new gradebook (or commentbook) is flexible enough to focus on feedback rather than a specific assessment protocol.
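For context, the core of that translation is tiny; the hard part is all the plumbing around it. A minimal sketch (plain Python; the 0-4 scale, the letter cutoffs, and the standard names are illustrative placeholders of mine, not Noschese's actual schema or the Canvas API):

```python
# Hypothetical sketch: collapse per-standard mastery scores into one
# course grade. Scale, cutoffs, and standard names are placeholders.

def current_scores(attempts):
    """attempts: {standard: [scores in chronological order]}.
    The most recent score counts -- reassessment replaces, not averages."""
    return {std: scores[-1] for std, scores in attempts.items() if scores}

def course_grade(scores, cutoffs=((3.5, "A"), (3.0, "B"), (2.5, "C"), (2.0, "D"))):
    """Average the current mastery scores and map onto letter cutoffs."""
    avg = sum(scores.values()) / len(scores)
    for floor, letter in cutoffs:
        if avg >= floor:
            return avg, letter
    return avg, "F"

attempts = {
    "kinematics-graphs":   [2, 3, 4],  # grew over three assessments
    "newton-2nd-law":      [3, 3],
    "energy-conservation": [2],
}
scores = current_scores(attempts)
print(course_grade(scores))  # prints (3.0, 'B')
```

The design point worth noticing: taking the latest score per standard (rather than averaging attempts) is what makes reassessment meaningful to students.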

New law will require Indiana high schoolers to take US citizenship test - FOX59

The new law in Indiana doesn’t say that students have to pass the test, but it does require them to take it to be able to graduate.

Source: New law will require Indiana high schoolers to take US citizenship test – Can you pass it? | FOX59

In today's Completely Frivolous Testing Update.

I'm not against new ideas or exposing students to the rigor we ask of people working to naturalize, but requiring students to take a test - with no passing requirement set - is the definition of frivolous.

I wonder how many Indiana legislators could pass this.

Bent

My car stopped running suddenly on the Indiana toll road about 10 days ago. The timing belt decided to break, which makes the car not want to do anything correctly. To keep engines smaller, some are designed so that the valves sneak down into the piston chamber. (This also increases fuel efficiency.) These engines are known as "interference" engines, and precise timing makes sure the pistons don't hit the valves. When your timing belt breaks, well...things aren't timed so precisely anymore.
A bent engine valve.

This should not be bent.

It's such a small piece, but it took me a day to get it out of the engine. We took the head (all the valves) off and inspected the pistons. When it hits a valve, the piston itself can be damaged, which would realistically mean ditching the car. I was super lucky because the pistons looked good. There was one small dent, but it wasn't catastrophic. The real danger is shearing the head off the valve and scratching the piston chamber, but that didn't happen either.

My car with the engine head removed, exposing the pistons in the combustion chambers.

Instead of repairing the valves myself, I decided to take it to an engine repair shop because they can do in a few hours what would otherwise take me days to accomplish. I picked up the engine head within a week and got it all put back together.

This car is so great. It's at 230,000(ish) miles and with this repair, could probably go another 200k if I keep up with oil and belt changes. This year alone has seen a new engine head, the head gasket, timing belt and water pump, a new alternator, a new clutch and flywheel, and several new sensors and other bits.

Now, this would not have been such a big job had the belt not broken. In a couple years, I'll pre-emptively change the belt to avoid such a situation.

Turns out our minivan also has an interference engine so I'll be taking a day to change that belt before we run into the same problem.

KQED on Active Engagement, Not Compliance

More than that, the characteristics should be observable to anyone who walks into the room.

We work hard with our teachers to make sure they're changing instruction and not just flavoring old ideas with tech. The eight reflective questions in this article are a great outline (guide?) teachers can use as they're planning ahead with technology in general.

Beyond purposeful planning, if you can't see students engaging in some way, they probably aren't. Our indicators for engagement have to be updated as well. From earlier in the article:

...he looks for behavioral, emotional and cognitive engagement at play together.

Quiet seat work does not equal engagement.

Source: How To Ensure Students Are Actively Engaged and Not Just Compliant | MindShift | KQED News

Spring Broke

Spring break finishes tonight, so this is the "what we did over spring break" post in case any of my old teachers are reading the blog these days.

The whole family got sick. Except me. So, I played doctor (with no small role being played by my parents, whose house we were in while infirmed). It was your bona fide Influenza A for Mrs. Bennett and the three Bennett children. Not the stomach bug nastiness, the everything-hurts-why-do-I-still-have-a-fever nastiness.

Backyard camping. CC-BY by me. `This has an interesting story. <https://photos.ohheybrian.com/#15533873363118/15546528233275>`__


`Outside grilled cheese <https://photos.ohheybrian.com/#15533873363118/15546528317716>`__. CC-BY by me. The crossed legs killed me. So grown up.


`Garden prep <https://photos.ohheybrian.com/#15533873363118/15546528415743>`__. CC-BY by me.


One of the best parts was that even though I took my computer to Kentucky, I forgot my charger at home. There were no emergencies; the world did not end. I ended up reading a book and a half in between nursing kids and my wife, which was a great treat to myself.

I think I might start leaving my charger places so I have a hard-stop deadline for working.

My Phone is a Phone. (Mostly.)

I have a love/hate relationship with phones. Several years ago, the "shitphone" post on Medium caught my attention and made me start thinking more seriously about A) what I spend my money on, and B) why I did that. This year has been a year of disentangling myself from my phone. I started by deleting all social media. That was easy and didn't feel too painful - I wasn't constantly checking it anyway.

Next, I removed Gmail completely. I no longer check email on my phone. There are few instances in life where an email is so urgent it needs a reply while I'm walking somewhere. Those times were better solved with a phone call or text anyways. That was a little more painful because of the instant-reply expectation that comes with email.

The next step was adding an app called Action Dash, which reported my usage time daily. I respond to data, so seeing hard numbers about my use helps me set and meet concrete goals. Now that I have data, I can start making some more difficult decisions.

After a week, I got my phone usage consistently under an hour. Even then, most of that time was Hangouts, keeping in touch with my team through the day while I moved around different buildings.

I got to thinking about how I use my phone and what I wanted to be using it for. Andy Crouch's The Tech-Wise Family is a big influence on how I think about technology in general and my phone use specifically. The premise is that a phone has a proper place, just like toys and books. The challenge is that we have to define the proper place in the face of manufacturers and developers trying to define it for us.

For me, the phone's proper place is communication. Calling and texting (through various apps) is my goal. The phone is a utility, not an entertainer. I entertained thoughts of moving back to a flip phone, but losing the calendar in my pocket would be a huge burden because my schedule is so variable. I can't realistically limit my phone to only communication, but I can make some other changes to define its role in my life.

I went on a deletion frenzy. I deleted YouTube and Netflix. I deleted Goodreads. I deleted non-family and non-work related chat apps. Games are gone. I deleted and disabled all of the browsers this week. As a rule of thumb, I deleted everything I could that didn't directly relate to communication.

It felt great. It feels great.

My phone isn't completely locked down to communicating, but I'm getting closer to having a very specific and well-defined role for its place in my life. I still have my Kindle and Overdrive books, and I still have a podcast manager and an RSS reader. I'm solidly in young-children mode, so my camera gets plenty of use. But each of those concessions has a specific purpose in specific situations.

My phone is here to stay, but now it's on my terms.


Pattern flickr photo by Jonas B shared under a Creative Commons (BY) license

Spring

We heard the first group long before we could see them. Almost as small in the sky as long-distance airliners, the birds are hard to spot, but the Sandhill crane's call is distinct and clear. The kids and I crane our necks, looking for the group heading north for the summer months.

This week, the girls asked why they were going to bed while it was still light out - the first time this year it's been light enough to look at books in bed without a flashlight. We look forward to the nights when we can fall asleep and wake up to light in the windows.

Spring teases us here. Glimpses of green grass and blue skies here and there. Sometimes they're swept under a late snow shower or heavy frost. But we know the sunlight is coming back.


"Cardinal."

"Chickadee."

"Woodpecker."

"When will the hummingbirds come back?"

`Keeping watch <http://photos.ohheybrian.com/#15578415044774/15578363605227>`__. Taken by me, CC-BY.


We practice our bird calls outside. We're all rusty from a winter spent indoors, faintly hoping some winter holdovers will visit our bird feeder in the front yard from time to time. Even if we can see them, we can't hear their songs. Sometimes we practice with an app, but they know it isn't the same as listening outside, picking calls from among the noise.


The flock wheels around, calling to one another. This one is smaller...maybe 30 or 35 individuals. When they're gone, we go back to raking and tending the fire, listening for sounds of the next flock to float down.

Planning the AR Study

Some initial thoughts on my action research design as I get ready to write up the study methods and timeline:

Considering PD Structures

I'm in the midst of an action research course, and my topic is evaluating and reflecting on our systems of PD in the district. This post is the literature review I did as part of the research process. It is similar to some of the work I did last year on leadership development and PD; links to related items are at the bottom of this post.


“Professional development” as a catch-all for staff training carries a degree of uncertainty that clouds our ability to critically discuss and reflect on programming. As an instructional team, we have not taken time to critically assess and address our effectiveness in presentation or facilitation, nor have we done any work to gauge the effectiveness of professional development in changing teacher practice.

In Elkhart, we have worked mainly with self-selected groups of teachers as technical coaches, per the definition provided by Hargreaves & Dawe (1990). Though our sessions contained collaborative elements, they were singularly focused on developing discrete skills to meet an immediate need. These sessions have been effective in closing a significant digital teaching and learning skill gap among the teaching staff. We have not, to date, considered specific models of professional development as a mechanism for planning or evaluating the effectiveness of workshops offered in a given school year.

According to Kennedy (2005), comparative research exploring models of professional development is lacking. Her analysis and resulting framework provide helpful questions when assessing and determining the type of offerings for staff. Reflective questions range from the type of accountability organizers want from teachers to determining whether the professional development will focus on transformative practice or serve as a method of skill transmission. It is tempting to always reach for models which support transformative practice, but there are considerations which need to be made for those structures to be truly transformative.

As a district, our efforts have centered on active processes with teachers, but this has been done without an objective measure of what those types of programs actually look like in practice. Darling-Hammond & McLaughlin (1995) summarize our working goal succinctly: “Effective professional development involves teachers both as learners and as teachers and allows them to struggle with the uncertainties that accompany each role,” (emphasis mine). Struggling with uncertainties requires some measure of collaboration, but collaboration alone does not necessarily lead toward transformative ends and can even drive top-down mandates to improve palatability (Hargreaves & Dawe, 1990).

To structure collaborative development opportunities, Darling-Hammond & McLaughlin (1995) make a case for policies which “allow [collaborative] structures and extra-school arrangements to come and go and change and evolve as necessary, rather than insist on permanent plans or promises.” This counters many district-driven professional development programs which require stated goals, minutes, and outcomes as “proof” of the event’s efficacy and resultant implementation. The problem with these expectations is that truly collaborative groups are constantly changing their goals or foci to meet changing conditions identified by the group (Burbank & Kauchak, 2003).

In response, a “Transformative Model” (Kennedy, 2005) attempts to move beyond a simple “collaboration” label and build a professional development regimen which pulls the best from skills-based training into truly collaborative pairs or small groups attempting to make changes in practice. She argues that transformative development must consist of a multi-faceted approach: training where training is needed, open space when groups need time to discuss. All of this work falls under the umbrella of reflection and evaluation of classroom practice. Burbank & Kauchak (2003) modeled a collaborative structure with pre-service and practicing teachers taking part in self-defined action research programs. At the end of the study, there were qualitative differences in the teachers’ responses to the particulars of the study, but most groups agreed that it was a beneficial process and they would consider participating in a similar structure in the future. Hargreaves & Dawe (1990) alluded to the efficacy of truly collaborative research as a way to combat what they termed “contrived collegiality,” where outcomes were predetermined and presented through a “collaborative” session.

Collaboration alone will not change practices. The contrived collegiality Hargreaves and Dawe (1990) warn against is characterized by collaborative environments whose scope is limited “to such a degree that true collaboration becomes impossible”. A group working toward a shared goal of transformative practice is undercut when the professional development structure disallows questioning of classroom, building, or district status quos. If collaborative professional development groups are allowed to “struggle with the uncertainties” (Darling-Hammond & McLaughlin, 1995) present in education both in and beyond the classroom, they will be more effective in reaching and implementing strategies to improve practice. This view subtly reinforces Hargreaves & Dawe’s (1990) perspective that collaboration must tackle the hard problems in order to have a lasting impact.

Several other identified factors contribute to the strength and efficacy of professional development: continuous, long-term commitments (Darling-Hammond & McLaughlin, 1995; Hargreaves & Dawe, 1990; Richardson, 1990), work that is immediately connected to classroom practice (Darling-Hammond & McLaughlin, 1995; Richardson, 1990; Burbank & Kauchak, 2003), and a group dynamic which recognizes the variety of perspectives that inform teaching habits across a wide spectrum of participants (Kennedy, 2005).

As an instructional coach, one of my core responsibilities is to help create a culture of learning amongst members to mitigate division or power dynamics based on experience (Darling-Hammond & McLaughlin, 1995; Burbank & Kauchak, 2003), which is particularly evident in mixed-experience groups. In addition to fostering a strong group dynamic, the instructional coaching role becomes facilitative rather than instructive to help teachers address problems of practice (Darling-Hammond & McLaughlin, 1995). It is easy to fall into a technical coaching position in collaborative groups, but such a role reduces the chances for transformative work to emerge as teachers become trainees rather than practitioners (Kennedy, 2005). This becomes more apparent as districts add instructional coaching positions, but limit the scope of the role to training sessions under the guise of “encouraging teachers to collaborate more…when there is less for them to collaborate about” (Hargreaves & Dawe, 1990). Ultimately, the coaching role is most effective when it is used to support teachers through “personal, moral, and socio-political” choices (Hargreaves & Dawe, 1990) rather than technical skill and competence.

In order to fully reflect upon and evaluate our programming, Kennedy’s (2005) framework for professional development will serve as a spectrum on which to categorize our professional development workshops and courses. Hargreaves & Dawe (1990) also provide helpful reflective questions (i.e., are teachers equal partners in experimentation and problem solving?) to evaluate just how collaborative our “collaborative” groups are in practice. Once our habits of working are mapped onto the framework, we can address shortcomings in order to build toward more effective coaching with the teachers in the district.

Resources

Burbank, M. D., & Kauchak, D. (2003). An alternative model for professional development: Investigations into effective collaboration. Teaching and Teacher Education, 19(5), 499-514. doi:10.1016/S0742-051X(03)00048-9

Darling-Hammond, L., & McLaughlin, M. W. (1995, April). Policies that support professional development in an era of reform. Phi Delta Kappan, 597+.

Hargreaves, A., & Dawe, R. (1990). Paths of professional development: Contrived collegiality, collaborative culture, and the case of peer coaching. Teaching and Teacher Education, 6(3), 227-241.

Kennedy, A. (2005). Models of continuing professional development: A framework for analysis. Journal of In-Service Education, 31(2), 235-250.

Richardson, V. (1990). Significant and worthwhile change in teaching practice. Educational Researcher, 19(7), 10-18. doi:10.2307/1176411


Here's a presentation I did for a class about a year ago over similar themes, but with a leadership spin.

The featured image is by Jaromír Kavan on Unsplash.

More Redefinition...

From a post last week where I continued to refine my research question:

How does continuity of study (ie, a PD sequence rather than a one-off workshop) affect implementation?

Is there an ideal timing? How often (in a series) seems to be effective?

What does the interim look like in between workshops?

Are volunteers more likely to implement training? Or are groups just as likely, even if they were selected to come by leadership?

How does the group dynamic affect buy in or implementation after the fact? Would establishing norms at the outset remove stigma?

I thought I was going to use, "How can my role effect change through professional development?" which isn't a great question for research. It's good for reflection, but it's too specific to me and not great for sharing in a collaborative environment (my team, for example).

Based on some of my literature research, I'm going to broaden back out to generalizing PD structures as a practice rather than focusing on my own role within those structures. Right now, I'm thinking:

How will aligning our professional development programs to goal-oriented frameworks affect implementation by participants?

I'm feeling good about this question for a few reasons:

  1. Much of my day-to-day work is with individual teachers. They often have a larger goal in mind, and I spend my time helping them find solutions or methods to reach it.
  2. I am involved in building-level discussions through departments or administrators. It isn't as frequent as one-on-one contact with teachers, but I do work with administrators to help their staff reach collective goals.
  3. My team is housed at the district level, not individual schools. My involvement at the highest level eventually trickles down to buildings and individual classrooms.

We've never done a full, research-based survey of the PD activities we offer in order to evaluate whether or not our work is effective in changing instruction at any given level. Using academic research as a guide, we can begin to evaluate and categorize our work in view of larger goals. Hopefully, we are able to identify patterns, strengths, and weaknesses as individuals and as a team as we begin planning for next year's programs.

Connections

This is a copy/paste of a post I wrote in a graduate class. I'm posting it here so I can get back to the ideas after the Blackboard course finishes.


I'm still working on my lit review and I've come across two articles that propose classifications of the types of PD typically offered in schools. Hargreaves & Dawe (1990) discuss "coaching" as a larger construct. The term has been used more recently (even my title has "coach" in it), but it's been poorly defined in terms of both the job description and my actual, day-to-day work. The authors cite Garmston's (1987) model, which defines three coaching structures: technical, collegial, and challenge coaching. Hargreaves & Dawe describe each model and then evaluate its effectiveness in changing school culture. The article is timely because I'm asking similar questions as I reflect on my own work with teachers.

The other helpful article (Kennedy, 2005) I found provides a framework for analyzing and qualifying nine models of professional development and proposes a structure for analysis of effectiveness with teachers. Categories align with Hargreaves & Dawe and provide more nuance in determining the type from a teacher's perspective rather than the coach's.

Two articles do not represent a statistical sample, but both reach similar conclusions nearly 30 years removed from one another. Development for teachers must include reflection not only on individual practice, but also on the political and power structures imposed on teachers and their functioning within those structures. Challenging the status quo through methods like peer review, paired or collaborative action research, or even something more elaborate like instructional rounds, is critical if lasting change is going to take effect.


Hargreaves, A., & Dawe, R. (1990). Paths of professional development: Contrived collegiality, collaborative culture, and the case of peer coaching. Teaching and Teacher Education, 6(3), 227-241.

Kennedy, A. (2005). Models of continuing professional development: A framework for analysis. Journal of In-Service Education, 31(2), 235-250.

Revising the Question...Again

I started a series of professional development workshops with teachers this week. It's a series of half-day work sessions with full departments, focusing on active learning and assessment techniques all centered on literacy within the content area. It's really a part two to a full-day conference we held for teachers earlier this month, and my task (and goal) is to make sure teachers are equipped with the how after hearing the why at the kickoff.
My original question was framed as a negative: Why don't teachers implement learning from professional development? I think this has an inherent bias, assuming that teachers don't try to use what they've learned. Based on my work this week (and looking ahead), there is definitely a desire to try new things; a lack of planning time with colleagues seemed to be a bigger cause of inaction than unwillingness.

I'm going to adjust my question: How can my role effect change through professional development?

I want to move away from what other people do to how I can help impact their habits through strong professional development. I'm still not thrilled with the wording, but I'm interested in what structural components make a program effective when it comes to implementing ideas. To start, I brainstormed some gut feeling indicators and questions that (I hope) will guide some of my research.

Some other related questions:


The featured image is IMG_6750, a flickr photo by classroomcamera shared under a Creative Commons (BY) license

The Why Loops

I spent some time last week running through some "why" loops to home in on the reasons behind my potential research question. I think the question is broad enough to allow for several avenues of exploration, but it was insightful to run through the cycle several times (below). We've actually used this mechanism as an instructional coaching team in the past, and being familiar with the process helped me focus on larger issues. Granted, some of the issues contributing to the behaviors we see are well beyond my specific purview and definitely outside the scope of my AR project.

Below is a straight copy/paste of my brainstorming. I think items two and three are most within my realm of influence. I can use my time to focus on teachers who have recently participated in PD to help provide that instructional support. I can also work proactively with principals, helping them follow up with staff members learning new methods or techniques and recognizing that work either with informal pop-ins to see students in action or public recognition in front of their staffmates.

Why don’t teachers implement the training they’ve received in PD?

  1. Teachers don’t put their training into practice
    • There are good ideas presented, but no time to work on building their own versions.
    • The PD was focused on the why, not enough on the how
    • Teachers don’t understand why they need to change practice
    • The district’s communication about the offered PD is lacking clarity
    • There is a lack of leadership when it comes to instructional vision.
  2. Teachers do not show evidence of putting training to use with students.
    • Teachers don’t know how to implement ideas they’ve learned in the workshop
    • There are so many demands on their time, planning new lessons falls to the back burner
    • In-building support systems are lacking
    • The district is strapped for money and hiring instructional coaches isn’t a priority.
  3. Teachers do not put learning from PD into practice.
    • There is no outside pressure to implement ideas learned in training
    • Principals are spread too thin to pay close attention to the inservice sessions teachers are attending
    • Principals do not know what to look for after teachers attend inservice.
    • Teacher evaluations are based on outdated expectations and promote superficial expectations.
  4. Teachers do not communicate implementation of learning
    • Workshops in the district are often standalone with no formal structure for long term support
    • The resources committed to PD for several years were focused on one-off training
    • The district lacked a vision for teacher development as a continual process
    • District leadership did not see the value of instructional support as a formal position in the district.
  5. Teachers do not implement learning from workshops
    • No one follows up on the learning from the PD
    • There was no formal method for recognizing PD
    • There is no formal expectation of implementation from supervisors (principals, etc)

"Loop" by maldoit https://flickr.com/photos/maldoit/265859956 is licensed under CC BY-NC-ND

School Shootings and the Cheerily Gruesome World of DIY Classroom Prep


This is an eye-opening look at the teacher-as-content-creator world of classroom DIY. Two quotes jump out that highlight the broken legislative system we have and the broken social media expectations we set for ourselves.

Teachers need their content to perform well and often, it’s the pretty, palatable posts that perform well; so that’s what teachers feel pressured to deliver.

As long as teachers continue to share online tips and tricks for shielding their students from the threat of gun violence, as if they were simply sharing book recommendations or math worksheets, in lieu of explicitly demanding gun reform, teachers will continue to carry the overwhelming burden of keeping their students alive in deadly situations.

Source: School Shootings and the Cheerily Gruesome World of DIY Classroom Prep

Checking Implementation

Running PD for an entire district is a challenge. The biggest gap I see is knowing how or when teachers actually use what they've learned in a session or a series of sessions. We have automated systems in place, but they don't give us information on the effectiveness of our instruction.

We coach our teachers to check for understanding and watch for application of learning with their students, yet this is something I have not done well with the teachers I work with. Granted, I work with all five secondary buildings (and teachers in general with my partners), so geography and time are a challenge in gathering and collating the right kind of information.

I'm interested in which of the supports we provide will actually help teachers use what they've learned. We run several programs, but which ones are the most effective at engaging and enabling our teachers to make changes to their teaching? What kinds of environments or availabilities are the most helpful to the staff?

Homing In

I haven't defined a specific question yet, but several I'm thinking about include:

  • How long do teachers wait before implementing training they've received from the district?
  • What professional development structures or systems best enable teachers to implement skills or strategies learned in professional workshops?
  • How does student engagement or learning change as a result of a specific instructional change by a teacher after attending a training event?
  • What are the reasons teachers do not put strategies or systems in place after a workshop?
  • Do professional development workshops make an impact on day to day instruction by the teaching staff?

My main concern is that several of these questions are very subjective. Measuring the result - either quantitatively or qualitatively - will be difficult and rely on select groups of teachers self-selecting into an evaluation tool. We already send a basic implementation survey to teachers three weeks after an event, so my intent is to go through all of those records and begin to identify the response rate as well as the most common reasons given for implementation vs non-implementation. I'm also hoping to gain some candid insight on the state of our professional learning opportunities from teachers' perspectives.

Reassessing in Standards Based Grading

I'm helping several teachers move toward standards-based grading practices this year. We work a lot on philosophy - why they'd want to use this grading mechanism over traditional scores, how to support learning, and the language of SBG in general with students - before we get into the how-to. That helps make sure everyone is in the right frame of mind.

Once they're ready to start, that's where the how-to work comes in. I know what I think about how to set up a class, but there is no gold standard when it comes to actually running the class. If you're looking to start, allow me to redirect you to Frank Noschese and his excellent blog as well as pretty much anything written by Rick Wormeli.

Today's post started as an email asking how I handled retests in my class. The following is more or less what I wrote back, with some edits for clarity and more general application.


"I’m trying to up my standards based grading game. We briefly talked about this last semester, but I’m wondering...how can I most efficiently update students’ grades to show mastery when I’m having them do test corrections? Ideas welcome!!" (from an email)

Do you do paper-and-pencil corrections? How are you building your tests? I ask because there are a few ways you could consider, but each kind of depends on your own style and class processes.

Grading paper-and-pencil corrections
When I did this, the process usually went something like this:

So, they would go through the material, evaluate their responses, and then find the right answer and justify it. I was mainly concerned with the justification of the response, not so much that they found the right answer. I would grade their mastery on that justification, bumping them up or down a little bit.

To track it, you could download the MagicMarker (iOS only) app and mark them on Outcomes as if you were talking to them in class. It aggregates those scores into the Canvas Learning Mastery grade book and then you can evaluate the overall growth rather than give credit based on that one assessment.

Question Banks

This is definitely the most time consuming to set up, but once it's set up, you're golden. Getting questions in standards-referenced banks allows you to build out Quizzes that pull randomly, so you can give a retake or another attempt that updates those Learning Mastery grade book results. This is what I tended to do instead of paper/pencil once I had everything going.

Students would get their results and then focus on any standards that were less than a three in their Learning Mastery grades (out of four total). There'd be some kind of work involved so they weren't blindly guessing, but then they could take the test again because the questions were likely to be different with the bank setup.

Set up banks based on standard and then file questions in there. When you build the Quiz, you use Add new question group rather than Add question in Canvas. You can link the question group to a question bank and specify how many items to pull at X number of points.

Student defense and other evidence

This one is probably my favorite: just giving students a chance to plead their case...a verbal quiz, essentially. I'd use MagicMarker while we were talking to keep track of their demonstration. I would ask them to show me work we'd done, explain how they know what they know, and then prod them with more questions.

I typically did this if they were having trouble demonstrating understanding in other ways. I wanted to remove test anxiety or reading comprehension from the equation, but this was typically the last option for those kids. I'd then work with them to get over those test-taking humps (granted, this was more important to do in the AP class because they had to take the test and I needed them to be ready for it).


I think all of this boils down to getting more data into Canvas (or your LMS if you can)...try not to rely on a single demonstration to judge understanding. My goal was to have students show mastery on each standard by the end of the semester. So, if they're not getting one of them now, it still goes in as a zero, but it serves as a reminder that they still have to demonstrate that standard. I was updating grades on the last day of the semester for my students. It's a weird way for them to think, and it'll take some prodding by you so they don't forget that a zero can always convert to full credit. Usually what happens is a later unit gives them more context for whatever they're struggling with, and cycling back after more scaffolding is more effective than trying to drill the issue immediately, if that makes sense.


If you're not using Canvas, there may be similar systems in your LMS that will help you track growth. I also have a Google Sheet template that you can use to track student growth. Shoot me an email if you'd like that and I'd be happy to send it along.

Potential Action Research

I'm taking a graduate course this semester on action research, part of which is defining and designing a question to tackle. Most of the coursework relates to classroom-level research by teachers to drive reflection and instructional change, but I'm not in the classroom right now. I'm thinking through what kind of teacher-focused research could help me in a coaching role.

This definitely isn't exhaustive, but it's a start. There are some others floating around my head that I can't quite verbalize yet. Much of what I'm interested in surrounds teacher intent to join PD, their actual attendance, and then, most importantly, their implementation of the methods and techniques learned together. What kinds of prompts or supports are needed to ensure follow through?

At face value, it seems collaborative action - longitudinal groups of teachers - working together has a high impact on implementation. But, given time constraints (including perceived time restrictions) on the part of teachers, this is hard to get off the ground at a systemic level during the school day.

The district as a whole is ripe for this kind of problem solving. Department and cross-department PLCs are forming and they are given freedom to choose how to spend that time. Perhaps a good way to start is to identify a team at each building willing to go through a more formal process. While their focus is on student improvement, I'm more interested in the supplemental activities I can provide as a coach to develop the action research mindset of the teacher.


Featured image from Unsplash (https://unsplash.com/photos/1NyiWD3iorA) by David Papillon

Copy Notes Between Google Sheets

If you work in big systems, sometimes you come across a situation where you want to share a single tab of a Google Sheet with someone else rather than the master copy. An easy way is to just make a snapshot copy of that tab and share a new sheet with them. A more advanced method is to use a parent/child relationship and some Sheets cell formulas to share an always-up-to-date copy of the master data. The problem with that (there's always a problem, isn't there?) is that it only copies the data, none of the notes or any other information that might be on the master sheet.

In this post, I'm going to give an example of a Google Apps Script that can be used to copy notes from a master parent sheet to a child spreadsheet. If you want to make a copy of a folder with working examples, ask and thou shalt receive.

The Basic Setup

For this example, we have a master Google Sheet with several rows of data organized by location, like this:

The sample master spreadsheet

We want to share child sheets tied to schools A, B, and C with only their relevant data. This first part is done with query and importrange in the following formula in cell A1 of the child spreadsheet.

=query(importrange("masterSheetURL","Sheet1!A:E"),"SELECT * WHERE Col1 CONTAINS 'A'",1)

query selects every row where Column 1 contains the string 'A'. In a real situation, you would match on the actual building name. The imported data looks like this:

Imported data into a child spreadsheet using QUERY.

I prefer query because I don't have to select a specific range - it will look at the entire sheet for that data. If you don't want to import notes, this works really well. If you do want to import notes, make sure data is grouped together, because you'll need to determine some offsets when writing to the child sheets. Create a second sheet in master that has the following structure:

Helper sheet in master with links to the child sheets

where column A is the building (or location or other selector) and column B holds the URL to the child sheet you want to update. Column C will become very important because it holds the offset data for writing notes to the child sheet.

The Script

Copying notes can only be done with the .getNotes() Google Apps Script method. This script looks at the sheet and uses a couple of loops to build arrays to post to the child sheets. The first challenge is to set the offsets. Notice location B in the master sheet is rows six through eight, while on the child sheet it is rows two through four. Without setting an offset, our notes would be written into the wrong rows, and that's no fun.
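The original script lives in the gist linked at the end of the post, but here's a minimal sketch of the offset step. The tab names (Sheet1, Helper) and function names are my own placeholders, and I'm assuming the helper sheet has no header row:

```javascript
// Sketch of the offset-calculation step. Offset = first master row for a
// building minus the child's first data row (2), so childRow = masterRow - offset.

// Pure helper: find the 1-based master row of the first occurrence of
// `building`, where `values` excludes the header row (row 1).
function firstRowFor(values, building) {
  for (var i = 0; i < values.length; i++) {
    if (values[i][0] === building) {
      return i + 2; // +2: arrays are 0-based and row 1 is the header
    }
  }
  return -1; // building not found
}

// Apps Script wiring (runs only inside Google Sheets).
function updateOffsets() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var data = ss.getSheetByName('Sheet1').getDataRange().getValues().slice(1);
  var helper = ss.getSheetByName('Helper');
  var rows = helper.getDataRange().getValues();
  for (var i = 0; i < rows.length; i++) {
    var offset = firstRowFor(data, rows[i][0]) - 2; // child data starts at row 2
    helper.getRange(i + 1, 3).setValue(offset); // store the offset in Column C
  }
}
```

With data grouped by building, location B starting at master row 6 yields an offset of 6 - 2 = 4, so master row 6 maps to child row 2.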

After running that script, Column C of the helper sheet in the master copy will hold the offset for each building. I also added an onChange trigger to this function so it runs whenever rows are inserted anywhere in the sheet.

Now, you're ready to copy notes from one sheet to another.

This script will loop through the master spreadsheet and look for building names it recognizes. If there's a match, it will open the child sheet and set the notes for each row using the offset calculated in the previous step. To be honest, this isn't the most elegant solution, but hey, it works.
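A sketch of that loop, with the same placeholder sheet and function names as before:

```javascript
// Sketch of the note-copying step (an approximation of the approach,
// not the original gist).

// Pure helper: given the master's notes (row 0 of the array is master row 2),
// the master's Column A values, a building name, and its offset, return
// [childRow, notesRow] pairs to write. childRow = masterRow - offset.
function notesForBuilding(notes, buildings, building, offset) {
  var out = [];
  for (var i = 0; i < buildings.length; i++) {
    if (buildings[i][0] === building) {
      out.push([i + 2 - offset, notes[i]]);
    }
  }
  return out;
}

// Apps Script wiring (runs only inside Google Sheets).
function copyNotes() {
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var range = ss.getSheetByName('Sheet1').getDataRange();
  var notes = range.getNotes().slice(1);   // notes for every cell, minus header
  var values = range.getValues().slice(1); // building names in Column A
  var helper = ss.getSheetByName('Helper').getDataRange().getValues();
  for (var i = 0; i < helper.length; i++) {
    var building = helper[i][0], url = helper[i][1], offset = helper[i][2];
    var child = SpreadsheetApp.openByUrl(url).getSheets()[0];
    var rows = notesForBuilding(notes, values, building, offset);
    for (var j = 0; j < rows.length; j++) {
      child.getRange(rows[j][0], 1, 1, rows[j][1].length).setNotes([rows[j][1]]);
    }
  }
}
```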

Just in case of catastrophe, here's another little utility script you can use to clear all notes from the child sheets. This was particularly helpful when I had my offset calculations off by a row.
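Something like this, using the same placeholder names, will wipe notes from every child sheet listed in the helper tab:

```javascript
// Sketch of a utility to clear notes from each child sheet in the
// helper tab. Handy when the offset calculations are off by a row.
function clearChildNotes() {
  var helper = SpreadsheetApp.getActiveSpreadsheet()
      .getSheetByName('Helper').getDataRange().getValues();
  for (var i = 0; i < helper.length; i++) {
    var child = SpreadsheetApp.openByUrl(helper[i][1]).getSheets()[0];
    child.getDataRange().clearNote(); // removes notes, leaves values intact
  }
}
```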

This is a manual process - there is no edit or change event you can hook into when a note is added or deleted. So, I wrapped all of this into an onOpen simple trigger to add a custom menu.
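The wrapper itself is short; here's a sketch (the menu label and item names are my own, wired to the placeholder functions above):

```javascript
// A simple onOpen trigger that exposes the manual steps as a custom menu.
function onOpen() {
  SpreadsheetApp.getUi()
    .createMenu('Note Sync')
    .addItem('Update offsets', 'updateOffsets')
    .addItem('Copy notes to children', 'copyNotes')
    .addItem('Clear notes in children', 'clearChildNotes')
    .addToUi();
}
```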


There are definitely improvements that can be made. Here are all the files as a gist so you can clone, copy/paste, and hack away at your own sheets.

Comments

Michelle Maynard

Do you ever give personal help?

Brian Bennett

I can try to help here and there. If you have something specific about this method, you can leave it here as a comment so others benefit from a (possible) answer.

Matt Kvancz

I’m trying to do something similar to this but I get the error: You do not have permission to call SpreadsheetApp.openByUrl

I’m reading that this can not be used in a custom function, so not understanding what I am doing differently than what you have done here.

Thanks for any help.

Brian Bennett

There are a couple possibilities. It won’t work if you’re trying to run it using an

onOpen
event or if you don’t have edit permissions on the child spreadsheet. The other possibility is that the sheet might not have reauthorized you when you updated the script. To do that, sometimes I’ll write a little function that forces the OAuth window to open again and it’ll pick up the new permissions:


         function reauth() {
            // Touching the spreadsheet forces the OAuth prompt so the
            // script picks up any new permissions.
            SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Sheet 1').getDataRange().getValues();
         }
      
M. Wallingford

Does this work for comments as well? I am having hard time setting up another sheet (we have two and the “master” sheet will have information that does not need to be shown on the 2nd sheet). The 2nd sheet is for clients. I have the IMPORTRANGE function working and only pulling the information I want but the comments are not coming through.

Brian Bennett

Unfortunately, no. Comments aren’t actually a “part” of the document in the sense that you can get them with IMPORTRANGE. To copy comments, you would need to use some scripting to pull them from one sheet to another.

Rafael Iglesias

I hope you still look at this article. I have always been a heavy user of comments in cells. When I used Excel or Open Office this was not a problem. I could copy/paste large portions of text into the comment in one cell; however, with Google Sheets this does not seem to work. I can copy text out of the cell comment OK, but when I try to paste text into the cell comment it does not work. Unfortunately I got used to comments as a way to streamline my spreadsheets, and by now I have hundreds of spreadsheets created with Excel or Open Office that are full of comments. Appreciate any ideas you may have.

Many thanks in advance. Rafael.

Running arbtt on Mac OS

Tom Woodward has a semi-regular weekly roundup blog post with links to interesting things he finds on the Internet. A couple weeks back, he posted a link to something called arbtt (pronounced ar-bit? that's what I say, anyways), which is actually an acronym for "arbitrary, rule-based time tracker." In short, it runs in the background and samples all the open programs on your computer, once per minute, and writes that information to a text log.

It's super, super geeky. Like, seriously. I've used todo-txt for almost two years now, and I recently started tracking how long I work on a given task so I can keep better track of what I spend my time on. But the catch is, I have to remember to turn on the tracker. arbtt runs in the background. The data is standardized, so you can write different queries and poll for information in very, very granular ways.

It was a real pain to get set up. After two days of fussing on and off, I have it running well on my computer. The documentation for Mac OS is really lacking, so here's what I did for my machine.

(This is fairly technical, so jump to the bottom if you want to see what it does once it's running.)

Install

Installation wasn't too bad for arbtt on Mac. The install instructions on the project page worked fine, particularly because I already had Homebrew installed and set up. I'm not going to rehash those steps here. Go read them there.

Binaries

Getting the thing to run was a different story. arbtt installs itself at the User level on Mac OS in a hidden .arbtt directory. This holds the configuration file and the actual capture.log file.

The actual executables are in the .cabal directory (also under /Users/yourname) because they're managed by the package manager. The documentation says to go into .cabal/bin and run arbtt-capture in order to start capturing data with the daemon.

Well, that didn't work.

The files in .cabal/bin are symlinked to the actual executables, and from what I can gather, Mac OS doesn't like that. At all. So, to run the scripts, you have to call the absolute path to the actual binaries. Those are in .cabal/store/ghc-8.4.4/arbtt-0.10.1-*/bin. I don't know enough about package managers, but those binaries are buried. I ended up creating aliases in my terminal so I can use one-line invocation.

categorize.cfg

Because the collection of information is arbitrary, you can collect without knowing what you want to know, which is pretty cool. The syntax for the query rules is in Haskell, which I don't know, so mine is a little hacky right now. In my playing, there are two main rules:

  1. Target the current window with $program.
  2. Target the current window title with $title.

You can use standard boolean operators to combine queries to get specific information. Each query pushes to a tag object that contains a Category and identifier in a Category:identifier syntax. A query I'm using to watch my Twitter use is:

current window $program == "Vivaldi" && current window $title =~ [m!TweetDeck!, m!Twitter!] ==> tag Social:twitter

So, it checks for both the browser (I use one called Vivaldi) and the window title before pushing (==>) to the Social:twitter tag. Mine are all really rudimentary right now, but you can sort and filter by date, titles, even directory locations if you're working on your filesystem. Since the underlying data never changes, you can just write new rules that will parse with arbtt-stats (next).
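To illustrate (these rules are examples for this post, not my actual config), a small categorize.cfg might look something like this:

```
-- rules are separated by commas; comments start with two dashes

-- tag samples with more than five minutes of idle time (value is in seconds)
$idle > 300 ==> tag inactive,

-- program-only match
current window $program == "Code" ==> tag Work:editor,

-- program plus title match, like the Twitter rule above
current window $program == "Vivaldi" && current window $title =~ [m!TweetDeck!, m!Twitter!] ==> tag Social:twitter
```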

arbtt-stats

The main capture daemon just runs in the background and collects data. The workhorse is arbtt-stats which parses all of the raw data. You can run arbtt-stats in the command line and get a full report of all matched rules or you can query by specific tag with the -c flag. So, executing arbtt-stats -c Social:twitter would return only the time recorded where Twitter was the focused window.

arbtt-graph (addon)

This all happens in the command line, which isn't super helpful, especially if you have a lot of results like this:

arbtt-stats raw output

Filtering down by tag with -c is helpful, but it would also be nice to turn this into something graphical. That's where arbtt-graph comes in. It's a Python-based parser that generates a website based on arbtt stats for each day as well as a four-month aggregate.

arbtt-graph output

The biggest problem I had with arbtt-graph was that python isn't super happy with relative file locations. I had to edit the scripts with absolute paths to write and read all of the necessary documents. It's a fun little helper on my computer, and if I was insane, I might investigate putting it online somewhere with a cron job, but that's for another day.