Posts
I had the pleasure of working with about 15 people yesterday on moving to standards-based grading next year. We started off with a long discussion about what grades are and what they mean. It's easy to get into what they should be, but I wanted to make sure we all had a solid understanding of what grades actually do in most of our classrooms.
I had a couple of guiding questions and one that generated the most interesting response was the following:
A student rarely comes to class and when they do, work isn't turned in. At the end of the semester, that student easily passes the final exam. Does that student pass your class?
Lots of eyebrows furrowed.
There was some uneasy looking around.
About half said yes, the other half said no.
Now, there are major assumptions here. Is the test valid and reliable (standards-aligned)? How did the teacher intervene? Did a student show growth before taking the test in some other way?
All issues aside, the root of the question forces us to consider whether a grade in our class represents learning or compliance.
Better than them doing well all year and then flunking the final exam?
—Brandon Dorman (@brandon_edu) June 13, 2019
I also wonder why we're more accepting of the inverse situation: a student who has not taken the class but passes the final is allowed to skip the course (or is given credit, etc.).
If we're comfortable with allowing students to skip a class (be given credit) by testing out, we should be just as comfortable allowing a student who "shows no effort" to be given credit for hitting the same benchmark. The difference is our perception of that student.
Challenging our biases is important, particularly long-held assumptions that dictate our perceptions about "good" vs "bad" students. Grades are the output of those biases in many cases.
What do you think?
The featured image is Br... flickr photo by Peter Schüler shared under a Creative Commons (BY-NC-SA) license
Continuing my Canvas excursions...
We recently made a change where teachers could not manually add students to their courses. The change was made because manually enrolling a student breaks the grade passback to our SIS, which causes double the work to fix the problem in the first place.
But, this also affects manually-created courses that don't need the grade passback. One workaround is to add all students as a TA or teacher, but then you run into issues where students have access to grades.
The API doesn't allow you to directly change a user's enrollment type. You have to delete the enrollment from the course and then re-enroll the user to change it. The dropdown in the website UI does the same thing behind the scenes; it just looks nice and never tells you that the user was deleted and re-added all in one step.
The nice thing about the API method is that you can set their enrollment state to active by default, which removes the course invitation notification and acceptance step for someone who is just having their status changed.
The example below is what I used to convert all students who were added as TAs back into Students. As always, I'm using the UCF Open Python library to handle the API calls.
from canvasapi import Canvas

canvas = Canvas('your_url', 'your_key')
course = canvas.get_course(course_id)  # course_id: the Canvas course to fix

# Grab only the TA enrollments
enrollments = course.get_enrollments(type='TaEnrollment')

for stu in enrollments:
    user_id = stu.user_id
    # Deleting the enrollment is the only way to change its type
    stu.deactivate(task='delete')
    # Re-enroll as a student; 'active' skips the invitation step
    course.enroll_user(user_id, 'StudentEnrollment', enrollment_state='active')
This does wipe out the 'Last Activity' field in the People tab, so if that's important to you, make the change before the course is published. I made the change for a user, going from student to teacher and back with no loss of grade data, which was nice to see.
I've been using Tiny Tiny RSS (TTRSS) for several months now and I'm finally getting into some of the more advanced uses. It's more than just an RSS reader - it can be an RSS curator which makes it so much more powerful.
Here's what I mean.
TTRSS can collect and categorize feeds like any other reader out there. I have mine grouped by topics I'm interested in. Each installation also has something called a Generated Feed which allows me, the consumer, to republish my own curated feed with any article I want to share.
A unique URL is generated for my installation. Each article has a publish option that adds it to my public feed.
Here's the generated feed, if you're curious. You can subscribe to this if you want to know what I think is worth reading. I can also hook this into IFTTT to auto-tweet new items, etc.
That's one layer. What if I want to share curated articles based on a topic? I can only have one published feed for my account.
That's where labels come in.
If I create a label - a custom tag, essentially - in TTRSS, it also has a feed I can publish out.
Woah.
Even more, there are filters in TTRSS, kind of like Gmail, which can automatically add a label to an incoming post. This is triple powerful because I don't have to manually mark articles I want to share out.
Here's a full example of how I'm taking advantage of this:
Next week, I'm kicking off a standards-based grading cohort with ~20 teachers from across my district. I want a way to easily curate and share articles with them. Instead of emailing everything, I'm going to use TTRSS filters and labels along with Diigo to collect SBG reading and share it all out in one, continuously updated place.
First, I set up the label in TTRSS. That created this RSS feed pushing back out.
Any post in my incoming feeds can be labelled with SBG which publishes it back out to the world.
Next, I set up a tag in Diigo for SBG-related stuff. This is anything I come across on the Internet that isn't from a blog feed. It can be YouTube videos, PDFs, newspaper clippings...whatever I want. Diigo gives a good RSS feed of tags and labels, so I ingest that with TTRSS and use a filter to automatically apply my SBG label, which then updates my outgoing feed.
TTRSS is becoming less of a pipeline into me and more of a packaging complex which takes information in and lets me publish it back out to serve a purpose.
RSS isn't dead.
Featured image: Pipes flickr photo by derekbruff shared under a Creative Commons (BY-NC) license
Update
After posting this and tagging WDBM on Twitter, they sent the following:
Hey Brian! We upload our playlists on Impact89fm's Spotify every week! Thank you for the support, happy listening!
—Pity Party (@pityparty_wdbm) June 5, 2019
If you want to listen, just search next time ¯\_(ツ)_/¯
Python has been my programming language of choice lately. Today, I gave myself a little challenge to create a Spotify playlist from a tracklist posted to a website.
I'm a big fan of WDBM out of Michigan State University. They have a great college station that reminds me of the music scene back in Rochester, NY (which was awesome). Every week, they have a live show called Pity Party that highlights alternative/emo/rock goodness. I try to catch the show if I can, but I often miss it because I'm not always near my computer to stream.
They post their playlist each week on their website. I fired up a Python project with BeautifulSoup and requests to get the web page data and a new (to me) library called Spotipy which gave me API access.
This happens in a couple steps. The first thing to do was scrape the web page, which is super easy with BeautifulSoup and Requests. The website uses the same format for their playlist each week:
<span class="storycontent">
<p>Track 1 - Artist 1</p>
<p>Track 2 - Artist 2</p>
</span>
BeautifulSoup lets me set up a quick loop to grab each of the <p> tags in a list that I can loop over.
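Here's a minimal sketch of that loop, run against the sample markup above. (The real script fetches the live page with Requests first; I'm inlining the HTML here so the snippet stands alone.)

```python
from bs4 import BeautifulSoup

# Sample markup matching the playlist structure shown above
html = """<span class="storycontent">
<p>Track 1 - Artist 1</p>
<p>Track 2 - Artist 2</p>
</span>"""

soup = BeautifulSoup(html, 'html.parser')

tracks = []
for p in soup.select('span.storycontent p'):
    # Each <p> holds "Track - Artist"; split on the first separator only
    track, artist = [part.strip() for part in p.get_text().split(' - ', 1)]
    tracks.append((track, artist))

print(tracks)
```

Splitting on the first `' - '` only matters more than you'd think, since plenty of song titles contain hyphens of their own.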
The Spotify API allows you to search by artist and track name. If a single result is returned on the search, its ID is added to a list to post in bulk to the playlist. This is more efficient than looping each one individually.
Any track that isn't found for whatever reason is added to an errors list that is shown to the user when everything is done. That way, they can go back and check them manually. It may be that the track doesn't exist or there was some weird punctuation or something.
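A sketch of that search-and-collect step. The function name is my own shorthand, and it assumes a spotipy client (`sp`) authenticated elsewhere:

```python
# Assumes a spotipy client created elsewhere, e.g.:
#   import spotipy
#   from spotipy.oauth2 import SpotifyOAuth
#   sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope='playlist-modify-public'))

def find_track_ids(sp, tracks):
    """Search Spotify for each (track, artist) pair.

    Collects the IDs of found tracks plus a list of misses the
    user can check manually afterward.
    """
    track_ids, errors = [], []
    for track, artist in tracks:
        result = sp.search(q=f'track:{track} artist:{artist}',
                           type='track', limit=1)
        items = result['tracks']['items']
        if items:
            track_ids.append(items[0]['id'])
        else:
            errors.append((track, artist))
    return track_ids, errors

# One bulk call to add everything beats one request per song:
#   sp.playlist_add_items(playlist_id, track_ids)
```

The bulk add at the end is why the whole run finishes in seconds instead of minutes.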
Instead of taking 20 minutes to search and add each song manually, this runs in less than 10 seconds.
Rock on.
Here's the full script if you're interested in checking it out. The entire WDBM specialty show catalog uses the same format, so you can try it with other pages over there.
Featured image is lightning flickr photo by Tom Gill. shared under a Creative Commons (BY-NC-ND) license.
I'm prepping a full-day workshop on standards based grading for about 20 teachers in a couple weeks. One major part of the day will be centered on converting a SBG report to a 100-point scale letter grade, mostly because we just have to.
Here are some of the methods I've come across, which have all (in one way or another) informed my own method, which is last in this post.
Equalized Weighting
I saw this calculation method first from Frank Noschese on his KISSBG blog post. He glances over it in the post body, but the comments below get into some of the details. Here's the formula:
50 + 50 * (earned/total)
At first, the additional 50 points look like a bonus, which feels weird. In reality, this wipes out the 0-50 F range. Now, each letter grade roughly corresponds to a 10-point spread:
- F: 50 - 60
- D: 61 - 70
- C: 71 - 80
- B: 81 - 90
- A: 91 - 100
It's an equalizer, not a bonus.
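As a quick sketch in Python (the function name is mine):

```python
def equalized_grade(earned, total):
    """50 + 50 * (earned/total): floors the scale at 50."""
    return 50 + 50 * (earned / total)

print(equalized_grade(0, 10))   # 50.0: an empty gradebook still floors at 50
print(equalized_grade(10, 10))  # 100.0
```

Even a student who earned nothing sits at the bottom of the F band instead of a hole-digging zero, which is the whole point of the conversion.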
Reflective Grading
Shifting away from assigning arbitrary points is a big piece of standards-based grading. Laura Gibbs, Kathryn Byars and Ken Bauer are the three names that jumped out in this region. Feedback is the main driver. Work is given feedback and only feedback. The focus between teacher and student is on demonstration, not on points or numbers.
For assessment, students reflect on and provide evidence of proficiency on each standard. Laura, Kathryn, and Ken all did this differently, but the main flavor is the same. Take a look at Kathryn's helpful Google slides, Laura's deep-dive book chapter and Ken's various blog posts. This is by far the most flexible, fuzzy, and subjective method of reporting.
Standard Purism
The most "pure" method of standards-based grading removes all items from the gradebook except for the standards. The methods of grading these vary. Some use a straight average of binary items (pass/fail). Others put each standard on some kind of rubric scale and give an average.
The main benefit of this structure is that practice work (homework, classwork, etc) is excluded. If a student forgets or decides not to do an assignment, their grade is not affected because it is practice.
On the other hand, this opens the door for assignments to be completely optional. This is a detriment, in my opinion, because students may not have the self-awareness or diligence to do independent work otherwise. Additionally, if a student skips a test or quiz because it doesn't go in the gradebook, it can set up an awkward situation where a student is racing to prove standards at the end of the year.
Some kind of blend
I ended up blending several of these ideas into a system I like. I used components of KISSBG (binary yes/no for standards) with a weighted course average to calculate the final grade.
| Category | Weight |
| --- | --- |
| Classwork | 20% |
| Standards | 80% |
In my gradebook, any classwork/practice was lumped together into one category. Homework, tests, quizzes, etc, all contributed to 20% of the total course grade.
Standards were individual assignments worth one point. They were assessed over time on a four-point rubric:
| Description | Score |
| --- | --- |
| Exceeds Expectations | 4 |
| Meets Expectations | 3 |
| Approaches Expectations | 2 |
| Does Not Meet Expectations | 1 |
| No evidence | 0 |
The cutoff for toggling a 1/1 in the gradebook was a 3. This meant they demonstrated proficiency in the concept in that situation. A 4 was given if the student could connect different related ideas...showing the relationships between standards.
Rubrics were used on every assignment and that aggregate score was used to determine the gradebook 1 or 0. Over time, patterns emerged and students were able to track their growth/decline in Canvas (more on that another time). I rarely graded Classwork assignments in depth...if it was turned in, I often gave full credit just for having it done. The rubric feedback was the important piece and I tried to put the focus on learning from those pieces.
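As a sketch, the final calculation looks something like this (the function name and shape are mine, not any gradebook's actual code):

```python
def final_grade(classwork_avg, standards_passed, standards_total):
    """Blend a 0-100 classwork average (20%) with binary standards (80%).

    Hypothetical sketch of the weighting described above.
    """
    standards_avg = 100 * standards_passed / standards_total
    return 0.2 * classwork_avg + 0.8 * standards_avg

# A student with a 90 classwork average who passed 8 of 10 standards
print(final_grade(90, 8, 10))
```

Because standards carry 80% of the weight, a student who does every worksheet but proves few standards still can't coast to a high grade on compliance alone.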
Is there a best method? I don't think so. It really depends on your group of students and situational context. In 2012, I used a more reflective approach. In 2016, I was using more of the 80/20 split with some reflection thrown in. Both were equally valid and I felt good about the grades I ended up reporting.
What others would you suggest? Leave a comment below.
Here's another little script I hammered out for Canvas today.
With the new gradebook, you can set assignment statuses like "late" and "missing." This is helpful in the gradebook for on paper assignments (digital assignments are automatically flagged) but you can only change the status in the gradebook grid.
This is a hacked together script to add the same buttons to the SpeedGrader controls.
The easiest way to add this is with a browser extension called Tampermonkey. This essentially allows you to run code on websites you don't have access to edit.
After installing the extension, click here to install the script.
Last step: click on the Tampermonkey Icon, choose Dashboard, and then click on SpeedGrader Status. In the editor, update line 14 with your Canvas URL.
I'm trying to make standards-based grading more approachable for my teachers. When I was teaching full time, I held to Frank Noschese's Keep It Simple philosophy. Single standards correlate to single assignments that are scored as pass/fail. Now, I averaged these out on a weighted scale to calculate a 0-100 grade, but that's for another post.
Using Canvas, I was able to set up a functional reassessment strategy to aggregate demonstrations of proficiency.
The Learning Mastery Gradebook in Canvas does not translate anything into the traditional gradebook. This means that every week or so, I would have to open the Mastery report alongside the traditional gradebook and update scores line by line. This was tedious and prone to error.
Using the Canvas API and a simple relational database, I put together a Python web app to do that work for me. The idea is that a single outcome in a Canvas course is linked with a single assignment to be scored as a 1 or 0 (pass/fail) when a mastery threshold is reached.
The app
Users are logged in via their existing Canvas account. There they are shown a list of active courses along with the number of students and how many Essential Standards are currently being assessed (ie, linked to an assignment).
In the Course view, users select which grading category will be used for the standards. Outcomes are pulled in from the course and stored via their ID number. Assignments from the selected group are imported and added to the dropdown menu for each Outcome.
Users align Outcomes to the Assignment they want to be updated in Canvas when the scores are reconciled. This pulls live from Canvas, so the Outcomes and Assignments must exist prior to importing.
As Assignments are aligned, they're added to the score report table.
Right now, it defaults to a 1 or 0 (pass/fail) if the Outcome score is greater than or equal to 3 (out of 4). All of the grade data is pulled at runtime - no student information is ever stored in the database. The Outcome/Assignment relationship that was created tells the app which assignment to update for which Outcome.
When scores are updated, the entire table is looped. If an Outcome has risen to a 3 or above, the associated Assignment is toggled to a 1. The same is true for the inverse: if an Outcome falls below a 3, the Assignment is toggled back to a 0.
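The decision logic reduces to something like this sketch (the names and data shapes are my own; the real app pulls scores live from Canvas at runtime):

```python
MASTERY_THRESHOLD = 3  # rubric score out of 4

def reconcile_scores(outcome_scores, alignments, threshold=MASTERY_THRESHOLD):
    """Turn Outcome rollup scores into pass/fail Assignment updates.

    outcome_scores: {outcome_id: latest rollup score from Canvas}
    alignments: {outcome_id: assignment_id} pairs stored by the app
    Returns {assignment_id: 1 or 0} ready to push back via the API.
    """
    updates = {}
    for outcome_id, assignment_id in alignments.items():
        score = outcome_scores.get(outcome_id)
        if score is None:
            continue  # no evidence yet; leave the assignment untouched
        updates[assignment_id] = 1 if score >= threshold else 0
    return updates
```

Keeping this as a pure mapping step means no student data ever needs to be stored; the app only remembers which Outcome feeds which Assignment.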
I have mixed feelings about dropping a score, but the purpose of this little experiment is to make grade calculations and reconciliation between Outcomes and Assignments much more smooth for the teacher. It requires a user to run (no automatic updates) so grades can always be updated manually by the teacher in Canvas. Associations can also be removed at any time.
As always, the source for the project is on GitHub.
Today is my 10th anniversary.
tl;dr: I have a hacky proof-of-concept method for getting an Instagram account as an RSS feed. It uses Python and you can grab the source files here.
I'm not on Instagram because I got tired of only seeing three people's photos interwoven with ads. The problem is I still have friends who post frequently there and I feel like I should still be able to see those photos.
The wonderful thing about the Internet is that you can do things that weren't really meant to be done. Instagram (like most other companies) doesn't provide RSS feeds anymore in order to force you onto their platform. That's silly. I've been teaching myself Python and this seemed like a good way to flex some of my new powers.
Get that feed
Inspired by Andy Barefoot, who did some magic on his personal site with PHP, I decided to do the same using Python. What resulted was a command line program which can fetch any public Instagram account and create an XML document I could subscribe to.
I'm going to use Alan Levine as my guinea ~~pig~~ dog for this post. To create the feed, run:
python subscribe.py cogdog
where the argument is the username of the account. I'm using a handy library called PyRSS2Gen by Andrew Dalke to create the properly formatted feed. I ran the script, threw the output on my server, and subscribed, just to see what would happen.
evil cackle
Update that feed
Instagram only shows 12 photos at a time. If I ran this script over and over, it would drop a photo from the feed each time it updated. That's no good.
I wrote up a second (notably more hacky) companion which takes almost the same form in order to update the feed rather than create one from scratch:
python update.py cogdog
This little guy looks for the existing XML doc and then fetches the user's Instagram page yet again. Instead of writing everything, it only writes things with timestamps newer than the most recent feed item. It's a little brute force, but hey, every tool can be a hammer if you swing hard enough.
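The filtering step boils down to a timestamp comparison, sketched here with a hypothetical post shape:

```python
import datetime

def newer_posts(scraped, most_recent):
    """Keep only posts published after the newest item already in the feed.

    `scraped` is a list of dicts with a 'time' datetime (a shape I made up
    for this sketch); `most_recent` is the pubDate of the newest feed item.
    """
    return [post for post in scraped if post['time'] > most_recent]
```

Anything that survives the filter gets appended to the existing XML, so nothing scrolls off the bottom when Instagram only shows the latest 12.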
Improvements
The subscribe script only loads those initial 12 photos. I may still go back and have it get the entire profile in the first go, but limiting it seems okay to me.
It's not general-purpose yet because you have to know how to install Python and several modules as well as have a web server to host the feeds on. I started finessing this into a small webapp which would do all the jobs, but my brain is stretched pretty far as it is.
If you want the source, it's in a GitHub gist and you can certainly tweak and improve. Let me know if you make changes or how I could do this better in the future.
This is worth watching. When we're describing principles to students - especially concrete thinkers - modeling and finding concrete examples of abstract ideas is critical to develop understanding.
Taking it one step better, scaffold your students to help them come up with the analogy. Or, better yet, challenge them to find the lengths themselves and then create the model.
Related video: modeling the speed of sound vs light with a metronome.
I am finding the right balance of scaffolding to provide the best learning environment for my students.
Source: A Critical Gradebook
The gradebook seems like the most frustrating and under-developed part of any LMS. We use Canvas and have had our own struggles with making the gradebook helpful, not hurtful. Laura Gibbs has more thoughts on that than I do.
The Learning Mastery component of the Canvas gradebook is immensely powerful if you take time to set it up correctly. It's a shift away from singleton points and gives students and teachers a more high-level view of what objectives/skills/standards a student has attained over time. This can be (but doesn't have to be) linked to the student's course grade. Again, my view is to stick with Frank Noschese's Keep It Simple SBG schema.
Translating that is a chore of its own, but I'm hacking away at a helper tool...more on that another time. I think this is where something like an LTI tool can help across multiple platforms, if the new gradebook (or commentbook) is flexible enough to focus on feedback rather than a specific assessment protocol.
The new law in Indiana doesn’t say that students have to pass the test, but it does require them to take it to be able to graduate.
Source: New law will require Indiana high schoolers to take US citizenship test – Can you pass it? | FOX59
In today's Completely Frivolous Testing Update.
I'm not against new ideas or exposing students to the rigor we ask of people working to naturalize, but requiring students to take a test, with no requirements set, is the definition of frivolous.
I wonder how many Indiana legislators could pass this.
My car stopped running suddenly on the Indiana toll road about 10 days ago. The timing belt decided to break, which makes the car not want to do anything correctly. To keep engines smaller, some are designed so that the valves sneak down into the piston chamber. (This also increases fuel efficiency.) These are known as "interference" engines and rely on precise timing to make sure the pistons don't hit the valves. When your timing belt breaks, well...things aren't timed so precisely anymore.
This should not be bent.
It's such a small piece, but it took me a day to get it out of the engine. We took the head (all the valves) off and inspected the pistons. When it hits a valve, the piston itself can be damaged, which would realistically mean ditching the car. I was super lucky because the pistons looked good. There was one small dent, but it wasn't catastrophic. The real danger is shearing the head off the valve and scratching the piston chamber, but that didn't happen either.
Instead of repairing the valves myself, I decided to take it to an engine repair shop because they can do in a few hours what would take me days to accomplish. I picked up the engine head within a week and got it all put back together.
This car is so great. It's at 230,000(ish) miles and with this repair, could probably go another 200k if I keep up with oil and belt changes. This year alone has seen a new engine head, the head gasket, timing belt and water pump, a new alternator, a new clutch and flywheel, and several new sensors and other bits.
Now, this would not have been such a big job had the belt not broken. In a couple years, I'll pre-emptively change the belt to avoid such a situation.
Turns out our minivan also has an interference engine so I'll be taking a day to change that belt before we run into the same problem.
More than that, the characteristics should be observable to anyone who walks into the room.
We work hard with our teachers to make sure they're changing instruction and not just flavoring old ideas with tech. The eight reflective questions in this article are a great outline (guide?) teachers can use as they're planning ahead with technology in general.
Beyond purposeful planning, if you can't see students engaging in some way, they probably aren't. Our indicators for engagement have to be updated as well. From earlier in the article:
...he looks for behavioral, emotional and cognitive engagement at play together.
Quiet seat work does not equal engagement.
Source: How To Ensure Students Are Actively Engaged and Not Just Compliant | MindShift | KQED News
Spring break finishes tonight, so this is the "what we did over spring break" post in case any of my old teachers are reading the blog these days.
The whole family got sick. Except me. So, I played doctor (with no small role being played by my parents, whose house we were in while infirmed). It was your bona-fide Influenza A for Mrs. Bennett and the three Bennett children. Not the stomach bug nastiness, the everything-hurts-why-do-I-still-have-a-fever nastiness.
One of the best parts was that even though I took my computer to Kentucky, I forgot my charger at home. There were no emergencies; the world did not end. I ended up reading a book and a half in between nursing kids and my wife, which was a great treat to myself.
I think I might start leaving my charger places so I have a hard-stop deadline for working.
I have a love/hate relationship with phones. Several years ago, the "shitphone" post on Medium caught my attention and made me start thinking more seriously about A) what I spend my money on, and B) why I did that. This year has been a year of disentangling myself from my phone. I started by deleting all social media. That was easy and didn't feel too painful; I wasn't constantly checking feeds anymore.
Next, I removed Gmail completely. I no longer check email on my phone. There are few instances in life where an email is so urgent it needs a reply while I'm walking somewhere. Those times were better solved with a phone call or text anyways. That was a little more painful because of the instant-reply expectation that comes with email.
The next step was adding an app called Action Dash which reported my usage time daily. I respond to data, so seeing hard numbers about my use helps me meet those goals. Now that I have data, I can start making some more difficult decisions.
After a week, I got my phone usage consistently under an hour. Even then, most of that was Hangouts, which I use through the day to keep in touch with my team while I move around different buildings.
I got thinking about how I use my phone and what I wanted to be using it for. Andy Crouch's The Tech-Wise Family is a big influence on how I think about technology in general and my phone use specifically. The premise is that a phone has a proper place, just like toys and books. The challenge is that we have to define that proper place in the face of manufacturers and developers trying to define it for us.
My proper place is to focus on communication. Calling and texting (through various apps) is my goal. The phone is a utility, not an entertainer. I entertained thoughts of moving back to a flip phone, but losing the calendar in my pocket would be a huge burden because my schedule is so variable. I can't realistically limit my phone to only communication, but I can make some other changes to define its role in my life.
I went on a deletion frenzy. I deleted YouTube and Netflix. I deleted Goodreads. I deleted non-family and non-work related chat apps. Games are gone. I deleted and disabled all of the browsers this week. I deleted everything I could that didn't directly relate to communication as a rule of thumb.
It felt great. It feels great.
My phone isn't completely locked down to communicating, but I'm getting closer to having a very specific and well-defined role for its place in my life. I still have my Kindle and Overdrive books, I still have a podcast manager and an RSS reader. I'm solidly in young-children mode, so my camera gets plenty of use. But each of those consolations has a specific purpose in specific situations.
My phone is here to stay, but now it's on my terms.
Pattern flickr photo by Jonas B shared under a Creative Commons (BY) license
We heard the first group long before we could see them. Almost as small as long-distance airliners, the Sandhill crane call is distinct, clear. The kids and I are craning our necks, looking for the group of birds heading north for the summer months.
This week, the girls asked why they were going to bed while it was still light out. The first time this year when it's been light enough to look at books in bed without a flashlight. We look forward to the nights where we can fall asleep and wake up to the light in the windows.
Spring teases us here. Glimpses of green grass and blue skies here and there. Sometimes they're swept under a late snow shower or heavy frost. But we know the sunlight is coming back.
"Cardinal."
"Chickadee."
"Woodpecker."
"When will the hummingbirds come back?"
We practice our bird calls outside. We're all rusty from a winter spent indoors, faintly hoping some winter holdovers will visit our bird feeder in the front yard from time to time. Even if we can see them, we can't hear their songs. Sometimes we practice with an app, but they know it isn't the same as listening outside, picking calls from among the noise.
The flock wheels around, calling to one another. This one is smaller...maybe 30 or 35 individuals. When they're gone, we go back to raking and tending the fire, listening for sounds of the next flock to float down.
Some initial thoughts on my action research design as I get ready to write up the study methods and timeline:
- Since I already have data to look through, I'm starting to focus in on a mixed method study, looking at past data and teacher feedback to plan out future sessions for comparison.
- Since we have data to start with, I'm planning on an exploratory mixed-method design.
- I think exploratory is more beneficial in the long run because I'm interested in mechanisms and structures which increase implementation of ideas by teachers, not just explaining why they do or don't implement.
- We're finishing workshops this year and already planning for summer work. If I can identify some patterns and structures and correlate the level of implementation, we'll have a good starting point for aligning all PD, not just my teams, to the new structures using data-backed conclusions.
- Given the timeframe, gathering consent forms right now is difficult, considering we're coming up on spring break and the testing windows. Doing aggregate, anonymized data analysis will allow us to draft a descriptive letter before the summer PD series begins and we can make informed consent a part of the workshop instead of a mass email.
I'm in the midst of an action research course and my topic is evaluating and reflecting on our systems of PD in the district. This post is the literature review I did as part of the research process. This is similar to some of the work I did last year on leadership development and PD and those links to related items are at the bottom of this post.
“Professional development” as a catch-all for staff training has a degree of uncertainty associated which clouds our ability to critically discuss and reflect on programming. As an instructional team, we have not taken time to critically assess and address our effectiveness in presentation or facilitation nor have we done any work to gauge the effectiveness of professional development in changing teacher practice.
In Elkhart, we have worked mainly with self-selected groups of teachers as technical coaches according to the definition provided by Hargreaves & Dawe (1990). Though our sessions contained collaborative elements, they were singularly focused on developing discrete skills to meet an immediate need. As a team, these have been effective in closing a significant digital teaching and learning skill gap present in the teaching staff. We have not, to date, considered specific models of professional development as a mechanism for planning or evaluating the effectiveness of workshops offered in a given school year.
According to Kennedy (2005), comparative research exploring models of professional development is lacking. Her analysis and resulting framework provides helpful questions when assessing and determining the type of offerings for staff. Reflective questions range from the type of accountability organizers want from teachers to determining whether the professional development will focus on transformative practice or serve as a method of skill transmission. It is tempting to always reach for models which support transformative practice, but there are considerations which need to be made for those structures to be truly transformative.
As a district, our efforts have centered on active processes with teachers, but this has been done without an objective measure of what those types of programs actually look like in practice. Darling-Hammond & McLaughlin (1995) summarize our working goal succinctly: “Effective professional development involves teachers both as learners and as teachers and allows them to struggle with the uncertainties that accompany each role,” (emphasis mine). Struggling with uncertainties requires some measure of collaboration, but collaboration alone does not necessarily lead toward transformative ends and can even drive top-down mandates to improve palatability (Hargreaves & Dawe, 1990).
To structure collaborative development opportunities, Darling-Hammond & McLaughlin (1995) make a case for policies which “allow [collaborative] structures and extra-school arrangements to come and go and change and evolve as necessary, rather than insist on permanent plans or promises.” This counters many district-driven professional development programs which require stated goals, minutes, and outcomes as “proof” of the event’s efficacy and resultant implementation. The problem with these expectations is that truly collaborative groups are constantly changing their goals or foci to meet changing conditions identified by the group (Burbank & Kauchak, 2003).
In response, a “Transformative Model” (Kennedy, 2005) attempts to move beyond a simple “collaboration” label and build a professional development regimen which pulls the best from skills-based training into truly collaborative pairs or small groups attempting to make changes in practice. She argues that transformative development must consist of a multi-faceted approach: training where training is needed, and open spaces when groups need time to discuss. All work falls under the umbrella of reflection on and evaluation of practice in the classroom. Burbank & Kauchak (2003) modeled a collaborative structure with pre-service and practicing teachers taking part in self-defined action research programs. At the end of the study, there were qualitative differences in the teachers’ responses to the particulars of the study, but most groups agreed that it was a beneficial process and that they would consider participating in a similar structure in the future. Hargreaves & Dawe (1990) alluded to the efficacy of truly collaborative research as a way to combat what they termed “contrived collegiality,” where outcomes were predetermined and presented through a “collaborative” session.
Collaboration alone will not change practice. Hargreaves and Dawe (1990) warn against contrived collegiality, which they characterize as collaborative environments whose scope is limited “to such a degree that true collaboration becomes impossible.” A group’s shared goal of transformative practice is undercut when the professional development structures disallow questioning of classroom, building, or district status quos. If collaborative professional development groups are allowed to “struggle with the uncertainties” (Darling-Hammond & McLaughlin, 1995) present in education both in and beyond the classroom, they will be more effective in reaching and implementing strategies to improve practice. This view subtly reinforces Hargreaves & Dawe’s (1990) perspective that collaboration must tackle the hard problems in order to have a lasting impact.
Several other factors contribute to the strength and efficacy of professional development: continuous, long-term commitments (Darling-Hammond & McLaughlin, 1995; Hargreaves & Dawe, 1990; Richardson, 1990), work that is immediately connected to classroom practice (Darling-Hammond & McLaughlin, 1995; Richardson, 1990; Burbank & Kauchak, 2003), and a group dynamic which recognizes the variety of perspectives that inform teaching habits across a wide spectrum of participants (Kennedy, 2005).
As an instructional coach, one of my core responsibilities is to help create a culture of learning that mitigates division or power dynamics based on experience (Darling-Hammond & McLaughlin, 1995; Burbank & Kauchak, 2003), dynamics which are particularly evident in mixed-experience groups. In addition to fostering a strong group dynamic, the instructional coaching role becomes facilitative rather than instructive to help teachers address problems of practice (Darling-Hammond & McLaughlin, 1995). It is easy to fall into a technical coaching position in collaborative groups, but such a role reduces the chances for transformative work to emerge as teachers become trainees rather than practitioners (Kennedy, 2005). This becomes more apparent as districts add instructional coaching positions but limit the scope of the role to training sessions under the guise of “encouraging teachers to collaborate more…when there is less for them to collaborate about” (Hargreaves & Dawe, 1990). Ultimately, the coaching role is most effective when it is used to support teachers through “personal, moral, and socio-political” choices (Hargreaves & Dawe, 1990) rather than technical skill and competence.
In order to fully reflect upon and evaluate our programming, Kennedy’s (2005) framework for professional development will serve as a spectrum on which to categorize our professional development workshops and courses. Hargreaves & Dawe (1990) also provide helpful reflective questions (e.g., are teachers equal partners in experimentation and problem solving?) to evaluate just how collaborative our “collaborative” groups are in practice. Once our working habits are mapped onto the framework, we can address shortcomings in order to build toward more effective coaching with the teachers in the district.
Resources
Burbank, M. D., & Kauchak, D. (2003). An alternative model for professional development: Investigations into effective collaboration. Teaching and Teacher Education, 19(5), 499-514. doi:10.1016/S0742-051X(03)00048-9
Darling-Hammond, L., & McLaughlin, M. W. (1995). Policies that support professional development in an era of reform. Phi Delta Kappan, 76(8), 597-604. http://link.galegroup.com.proxy.bsu.edu/apps/doc/A16834863/BIC?u=munc80314&sid=BIC&xid=abd8b6f2
Hargreaves, A., & Dawe, R. (1990). Paths of professional development: Contrived collegiality, collaborative culture, and the case of peer coaching. Teaching and Teacher Education, 6(3), 227-241.
Kennedy, A. (2005). Models of continuing professional development: A framework for analysis. Journal of In-Service Education, 31(2), 235-250.
Richardson, V. (1990). Significant and worthwhile change in teaching practice. Educational Researcher, 19(7), 10-18. doi:10.2307/1176411
Here's a presentation I did for a class about a year ago over similar themes, but with a leadership spin.
The featured image is by Jaromír Kavan on Unsplash.
From a post last week where I continued to refine my research question:
How does continuity of study (ie, a PD sequence rather than a one-off workshop) affect implementation?
Is there an ideal timing? How often (in a series) seems to be effective?
What does the interim look like in between workshops?
Are volunteers more likely to implement training? Or are groups just as likely, even if they're selected to come by leadership?
How does the group dynamic affect buy in or implementation after the fact? Would establishing norms at the outset remove stigma?
I thought I was going to use, "How can my role effect change through professional development?" which isn't a great question for research. It's good for reflection, but it's too specific to me and not great for sharing in a collaborative environment (my team, for example).*
*Based on some of my literature research, I'm going to broaden back out to generalizing PD structures as a practice rather than focusing on my own role within those structures. Right now, I'm thinking:
How will aligning our professional development programs to goal-oriented frameworks affect implementation by participants?
I'm feeling good about this question for a few reasons:
- Much of my day to day work is with individual teachers. They often have a larger focus and I spend my time helping those teachers find solutions or methods to reach those goals.
- I am involved in building-level discussions through departments or administrators. It isn't as frequent as one-on-one contact with teachers, but I do work with administrators to help their staff reach collective goals.
- My team is housed at the district level, not individual schools. My involvement at the highest level eventually trickles down to buildings and individual classrooms.
We've never done a full, research-based survey of the PD activities we offer in order to evaluate whether our work is effective in changing instruction at any given level. Using academic research as a guide, we can begin to evaluate and categorize our work in view of larger goals. Hopefully, we can identify patterns, strengths, and weaknesses as individuals and as a team as we begin planning next year's programs.
Hey, I just wanted to thank you for this blog post. I forked your code, fixed a bug, and added modification of the late days.
https://github.com/paulbui/canvas-tweaks/tree/master/speedgrader_status

Nice! I always worry a little about sharing these tweaks because of bugs, so I appreciate you sharing the updated code back.