Two
This little one turns two today.
If you're brand new to unit testing, start with the first post in this series to get caught up.
Up until now, our Calcs class has handled errors with simple true and false flags. That's not helpful to the user. At this point, we're ready to begin defining and testing custom errors in our functions. In this post, we're going to throw an error when the add() method receives an invalid input. To keep it simple, we're going to treat anything other than the number type as invalid.
You can see the completed source for this part on GitHub.
The `throws assertion <https://api.qunitjs.com/assert/throws>`__ is more complex than the ok, equal, and notEqual methods we've looked at already. throws will call a function and then compare whatever that function throws against one of four possible expected params.
With throws, we are able to define not only an error to test, but the kind of error that's returned and even the message received by the test. This is helpful for testing functions that can throw several different types of errors.
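Stripped of QUnit, the matching that each expected form performs can be sketched in plain JavaScript. This is an illustration only, not QUnit's internals; the capture helper and the 'bad input' message are invented for the example:

```javascript
// Illustrative sketch only — not QUnit internals.
// Run a function and hand back whatever it throws (or null if nothing is thrown).
function capture(fn) {
  try {
    fn();
    return null;
  } catch (err) {
    return err;
  }
}

const err = capture(function() { throw new TypeError('bad input'); });

// Expected as an Error constructor: match by type.
const byType = err instanceof TypeError;

// Expected as a message string: match the message text.
const byMessage = err.message === 'bad input';

// Expected as a callback: any custom validation that returns true.
const byCallback = (function(e) {
  return e instanceof TypeError && /bad/.test(e.message);
})(err);
```

Each of byType, byMessage, and byCallback comes out true here, which is exactly the kind of comparison throws makes for you behind the scenes.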
We'll start by using the built in Error and TypeError and finish by writing our own CustomError class that you can extend yourself.
To begin, add a new block of tests in tests.gs. Four of these will fail at first and our code will be written to pass each one.
QUnit.test('Checking errors', function() {
  throws(function() {
    throw "error"
  }, "throws with an error message only");

  throws(function() {
    throw new Error
  }, Error, 'The error was a generic Error');

  throws(function() {
    throw new CustomError()
  }, CustomError, 'Creates a new instance of CustomError');

  throws(function() {
    throw new CustomError("you can't do that!")
  }, "you can't do that!", "Throws with a specific message");

  throws(function() {
    throw new CustomError()
  }, function(err) {
    return err.toString() === "There was a problem."
  }, 'When no message is passed, the default message is returned.');

  throws(function() {
    throw new CustomError("You can't do that.")
  }, function(err) {
    return err.toString() === "You can't do that."
  }, 'Error.toString() matches the expected string.');
});
When writing your tests, the most common mistake is forgetting that the first parameter must be a function which throws your error, not the result of calling it. This is because QUnit invokes the function itself to capture the thrown value and compare it against the expected parameter.
When you run your tests by reloading the webapp, the first two assertions will pass because they rely on JavaScript's built-in errors. You'll get failures for anything calling CustomError because it doesn't exist yet.
We need to create an error called CustomError that does four things: accepts an optional message, falls back to a default message when none is given, returns that message from toString(), and can be matched by type in our throws assertions.
Create a new script file called CustomError and place the following code inside:
var CustomError = function(message) {
  this.message = message || "There was a problem.";
}

CustomError.prototype.toString = function() {
  return this.message;
}
This is scoped globally instead of namespaced (like the Calcs class) because it doesn't access any restricted services in the Apps Script environment. Any class or method can now access and raise this custom error.
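Because nothing here touches Apps Script services, the class behaves identically in plain JavaScript, so a quick sanity check (mirroring the assertions above) looks like this:

```javascript
// Same CustomError as above, repeated so the snippet stands alone.
var CustomError = function(message) {
  this.message = message || "There was a problem.";
}

CustomError.prototype.toString = function() {
  return this.message;
}

// With no message, the default is used.
var fallback = new CustomError().toString();

// With a message, toString() returns it verbatim.
var custom = new CustomError("You can't do that.").toString();
```

Here fallback is "There was a problem." and custom is "You can't do that.", matching the two callback assertions in the test block.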
If you re-run your test, all assertions should now pass. Now that it is available, we can go back and start using this error in our Calcs class.
Because the native Error object is always available, we can use it at any point. In calculations.gs, instead of just returning false from our function, let's throw a TypeError with a message. Our Calcs.add() test block needs to be modified. I'm going to delete a test that no longer applies because we're moving away from checking with equal. The old line is commented out:
QUnit.test('Checking the `add` method in Calcs', function() {
  ok(Calcs.add(1, 1), 'The method is available and received two variables.');
  equal(Calcs.add(2, 2), 4, 'When 2 and 2 are entered, the function should return 4.');
  // equal(Calcs.add('hello', 2), false, 'When a non-number is added, the function will return false.');
  throws(function() {
    Calcs.add('foo', 2)
  }, TypeError, 'When a non-number is passed in the first param, the function will throw a TypeError.');
  throws(function() {
    Calcs.add(2, 'bar')
  }, CustomError, 'When a non-number is passed in the second param, the function will throw a CustomError.');
});
To pass our tests, we want to update Calcs.add() to throw a TypeError if the first param is not a number and a CustomError if the second is not a number.
Here's a refactored version of the add() method which will pass the test we just wrote:
// ... rest of Calcs
const add = function(a, b) {
  if(!isNumber(a)) {
    throw new TypeError
  }
  if(!isNumber(b)) {
    throw new CustomError('This deserves a custom message.');
  }
  return a + b
}
// ...
This refactor checks a and b independently and throws the specific error to satisfy the assertion statements in the tests. If you run your tests, all assertions should now pass.
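Since none of this depends on Apps Script services, the whole path (helper, guards, and both error types) can also be exercised in plain JavaScript. A self-contained sketch, with isNumber and the error message copied from the code above:

```javascript
// Self-contained sketch of the refactored add() and its collaborators.
var CustomError = function(message) {
  this.message = message || "There was a problem.";
};
CustomError.prototype.toString = function() {
  return this.message;
};

var isNumber = function(val) {
  return typeof val === 'number';
};

var add = function(a, b) {
  if (!isNumber(a)) {
    throw new TypeError();
  }
  if (!isNumber(b)) {
    throw new CustomError('This deserves a custom message.');
  }
  return a + b;
};

// Valid input still adds normally.
var sum = add(2, 2);

// Each invalid param raises its specific error type.
var firstError, secondError;
try { add('foo', 2); } catch (e) { firstError = e; }
try { add(2, 'bar'); } catch (e) { secondError = e; }
```

sum is 4, firstError is a TypeError, and secondError is a CustomError carrying the custom message, which is exactly what the throws assertions check.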
The throws method is a powerful tool for testing your exception handling. At a minimum, using the browser's built-in errors can help you give useful information to your users when exceptions occur. throws helps you confidently address each error appropriately before your users run into problems.
If you're brand new to unit testing, start with the first post in this series to get caught up.
We've looked at how to install and configure QUnit and just finished writing some simple tests. In this post, we're going to write a new method, add(a, b), in Calcs which will add the two passed parameters. Then, we'll use testing to check that the params are numbers before returning either the sum or false. We could make this more complex and allow an object (or array), but we'll write another method for that later to compare.
Here is the completed source code for this post.
We know our expected output should be a + b, whatever that happens to be. Let's add a test to tests.gs which will help us write a working function:
function calcTests() {
  // ...
  QUnit.test('Checking the `add` method in Calcs', function() {
    ok(Calcs.add(1, 1), 'The add method should be available in the module.');
    equal(Calcs.add(2, 2), 4, 'When 2 and 2 are entered, the function should return 4.');
  });
}
We added a new QUnit.test() method in the calcTests() wrapper which defines two tests for our new function. ok checks that the function is available (and not accidentally privately scoped) and equal adds 2 and 2 expecting 4 as the result. Running the test now will produce a failure, which is what you would expect because we haven't written the method yet.
Open calculations.gs and add the add method. Don't forget to return it!
// ... rest of code ...
const add = function(a, b) {
  return a + b;
}

return {
  name: name,
  about: about,
  author: author,
  add: add,
}
})()
We've now written, and passed, two test conditions, both of which follow the expected use of the function. But what if a user enters something other than a number? We're going to add a helper function to Calcs which will check a value and return true if it is a number, false otherwise.
Our function will be called isNumber and here are the tests we'll use for this case:
function calcTests() {
  // ... rest of tests ...
  QUnit.test('Checking the `isNumber` method in Calcs', function() {
    equal(Calcs.isNumber(2), true, 'The value entered is a number.');
    equal(Calcs.isNumber('foo'), false, 'The value entered is NOT a number.');
    notEqual(Calcs.isNumber('foo'), true, 'The value entered is NOT a number.');
  });
}
In this block, we introduce the notEqual assertion, which passes when the actual value does not equal the expected value. We pass true as the expected value because Calcs.isNumber('foo') should return false; since false does not equal true, the assertion passes. (It's a little hard to wrap your head around at first.)
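Stripped of QUnit, the logic of those assertions is just equality checks. A plain-JavaScript sketch using the value isNumber should produce:

```javascript
// What Calcs.isNumber('foo') is expected to return:
var actual = typeof 'foo' === 'number';

// equal(actual, false) passes because actual matches the expected value:
var equalPasses = (actual === false);

// notEqual(actual, true) passes because actual differs from the expected value:
var notEqualPasses = (actual !== true);
```

actual is false here, so both equalPasses and notEqualPasses come out true.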
Writing tests first means they will fail whenever the web app is loaded. As you write the function to pass the test, you're keeping code concise and focusing on one (and only one) outcome, thereby improving maintainability and clarity of your codebase.
const Calcs = (function() {
  // ... rest of calcs
  const isNumber = function(val) {
    if(typeof(val) === 'number') {
      return true;
    }
    return false;
  }

  return {
    // ... rest of return
    isNumber: isNumber
  }
})();
When writing functions to pass tests, first focus on passing. This function could be restructured to use a ternary or some other method of boolean logic, but that doesn't matter right now. We're just focused on satisfying the test conditions. Then we can go back and refactor.
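For instance, once the tests pass, isNumber can be condensed without changing its behavior; the tests guard the refactor. A sketch comparing the two versions:

```javascript
// Original, test-passing version.
var isNumberVerbose = function(val) {
  if (typeof val === 'number') {
    return true;
  }
  return false;
};

// A later refactor: same behavior, less code.
var isNumber = function(val) {
  return typeof val === 'number';
};
```

Both versions agree for every input the tests cover, so swapping one for the other leaves the suite green.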
Running your tests should pass all assertions. If not, go back and look at the failures and debug your code.
In certain cases, not all methods need to be exposed in the global namespace. Our isNumber function could certainly be scoped privately because JavaScript already includes type checking (typeof(2) === 'number' // true) which can handle the work.
Testing private methods is tricky and reasons for why you should or shouldn't vary. In applications which compile code with a build process, there are methods for testing private methods. In Apps Script, there is no such build step, so testing private functions becomes more difficult. Here are some considerations:
In all, the design of your codebase is up to you. Let testing help you make these decisions. Refactoring is much easier because any change you make should still pass the tests you've already written. For clarity, we'll keep isNumber public for now.
We haven't updated the add() method yet, which is the ultimate goal. Remember, we want to make sure both parameters entered are numbers before trying to add. To start, let's make sure .add() returns false if a non-number is passed into it. Here's our test block:
QUnit.test('Checking the `add` method in Calcs', function() {
  // ... previous tests ...
  equal(Calcs.add('foo', 2), false, 'When a non-number is passed in the first param, the function will return false.');
  equal(Calcs.add(2, 'bar'), false, 'When a non-number is passed in the second param, the function will return false.');
  equal(Calcs.add('foo', 'bar'), false, 'When two non-numbers are passed, the function will return false.');
});
All of these tests may seem redundant, but we want to cover each scenario of a non-number entering our function. Again, writing tests first makes you focus on updating functions to pass. Let's make a change to the add() method which will satisfy those assertions. Here's our updated method:
const add = function(a, b) {
  if(isNumber(a) && isNumber(b)) {
    return a + b
  } else {
    return false
  }
}
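As a quick check outside QUnit, the guarded method behaves like this (a plain-JavaScript sketch, assuming the isNumber helper from earlier):

```javascript
// The helper and guarded method, repeated so the snippet stands alone.
var isNumber = function(val) {
  return typeof val === 'number';
};

var add = function(a, b) {
  if (isNumber(a) && isNumber(b)) {
    return a + b;
  } else {
    return false;
  }
};

var good = add(2, 2);            // numbers: returns the sum
var badFirst = add('foo', 2);    // non-number first param: false
var badSecond = add(2, 'bar');   // non-number second param: false
var badBoth = add('foo', 'bar'); // two non-numbers: false
```

good is 4 and the other three are false, covering the same cases as the equal assertions above.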
At this point, we have all of our tests passing and our application will function as intended. You can now go back and refactor knowing that your tests will fail if you break a function somewhere.
If you're brand new to unit testing, start with the first post in this series to get caught up.
From part one, unit tests are for single units of code. They test a specific function for a specific result. I found a helpful living guide on writing unit tests that included some very clear expectations:
Unit tests are isolated and independent of each other. Any given behaviour should be specified in one and only one test. The execution/order of execution of one test cannot affect the others.
Let's create a simple class with some properties and methods we can test. We'll use QUnit to write some tests for those methods. Once we've covered the basics, a future post will look at more complex application structures and tests.
The completed source for this part can be found here.
Let's start by defining a Calculations class using Bruce McPherson's recommended namespacing structure to keep everything neat. If you're following along, create a Script file named calculations.gs in your editor and add the following code.
const Calcs = (function() {
  const name = 'Calculation class';

  const about = function() {
    return 'A class of calculation methods';
  }

  return {
    name: name,
    about: about,
  }
})();
Following the testing guide, naming tests clearly is important because their messages will be your guides to problem solving. Each test is given a specific message parameter that follows an action...should...result format. A named action (calling a class parameter or method) should do something and end in a defined result.
In QUnit for GAS, the result is defined as the expected result in assertions that accept that parameter (keep reading below).
Now it's time to define some tests. The biggest change in my thinking came when I switched to writing tests first to define what I want the outcome to be before diving in and figuring out if my function is giving me the right output or not. Create a new script file called tests.gs and add the following:
function calcTests() {
  QUnit.test('Checking the Calcs class parameters', function() {
    ok(Calcs.name, 'The name parameter should be available in the namespace.');
    equal(Calcs.about(), 'A class of calculation methods', 'The about method should return the Calcs class description.');
  });
}
Breaking this block down:
Inside the test are the specific assertions we're making about the function:
Naming and writing good messaging takes practice and I'm still working on a system that works well for me. The great thing is that if a system isn't working well, just rename it or change the messaging!
The last step before we can run tests is to tell QUnit where to look for those tests in the config file we defined in part one. Open your config.gs file and make sure it looks like this (excluding comments):
QUnit.helpers( this );

// Define the tests to run. Each function is a collection of tests.
function tests() {
  console = Logger; // Match JS
  calcTests(); // Our new tests defined in tests.gs
}

// Runs inside a web app; results are displayed in HTML.
function doGet( e ) {
  QUnit.urlParams( e.parameter );
  QUnit.config({
    title: "QUnit for GAS" // Sets the title of the test page.
  });

  // Pass the tests() wrapper function with our defined
  // tests into QUnit for testing
  QUnit.load( tests );

  // Return the web app HTML
  return QUnit.getHtml();
};
What's happening:
QUnit is run as a web application through apps script. Go to Publish and choose Deploy as web app.... In the popup, set the new version and limit access to yourself.
You'll need to verify the application can have access to your account. Once that is done, you can open your web application link. If you've done your setup correctly, you should see your test results:
You just ran your first unit tests!
There are plenty of ways to write failing tests. They fail either because your code doesn't produce the expected value or because your test is expecting something that isn't happening. Let's make a small change to our Calcs class which will cause a test to fail.
In the class, change the .about method to:
const about = function() {
  return 'A class of calculation method';
}
Since our test is asserting that this function will return the string A class of calculation methods, we can expect this test to fail because the comparison will evaluate to false. Run your tests again by reloading the web app page. Sure enough, we have a failure:
There are a couple things to note from this result:
Since the .about() method fails its test, I know I need to go back and fix the bug. Adding an 's' to 'method' solves the bug. Reloading the page will confirm with a passed test.
Stack traces in QUnit for GAS are marginally helpful. This is because the testing happens on Google's servers, not your computer, so there are several steps in the tooling that add layers of trace data. Some ways to make this more readable are to add code references to your tests file or to use function-based naming so you can find what failed. For this example, we don't have to worry too much, but we'll look at more complex applications at a later point.
The whole point of unit testing is that you catch breaking changes before your code is released. Let's make a change to our Calcs class and write a test to make sure that nothing is broken. Start by writing a simple test to define what we want that function to do.
// tests.gs
QUnit.test('About Calcs test', function() {
  // ...
  ok(Calcs.author(), 'The author method is publicly available');
  // ...
})
...and then add the function to Calcs which will pass the test.
// calculations.gs
const Calcs = (function() {
  // ...
  const author = function() {
    return 'This ' + name + ' is authored by Brian.'
  }
  // ...
})();
Reload your web app page. What happens?
Your test should have failed (if you followed my code above) with the error, Cannot find function author in object [object Object]. But why?
Something is wrong...the test couldn't find the function author() even though I added it to my class. The explanation is that I never exported that function in the return statement! Since it wasn't exported, the test fails. A potential bug in my application has been caught early and is simple to diagnose and repair before it causes user errors later. Update the return statement in the calculations class to:
// calculations.gs
// ...
return {
  name: name,
  about: about,
  author: author,
}
// ...
...and run the tests again by reloading the web app to see that everything now passes.
This is the first glimpse into using QUnit inside an Apps Script project. Once the setup is complete, you can start writing tests for what you expect your code to do, which gives you clarity and insight into actually writing the function while knowing your test will catch bugs.
I'm not good at writing testable code. I'm more of a 'figure it out when it breaks' kind of hobby programmer. The problem with this is that I am constantly making my own bugs and not really finding them until a bad time.
Unit testing is the process of running automated tests against your code to make sure it's working correctly. Each test is for one unit of code - a single function, usually. It expects a value and will pass or fail based on the value received as part of the test.
To get better, I forced myself to write unit tests in Google Apps Script for two reasons:
The point of this series is to force myself to learn, and use, a unit testing method when writing code and to update the far outdated unit testing tutorials for Apps Script already published online. I've tried several testing libraries but will be using QUnit as the testing suite.
I'm following Miguel Grinberg's method of posting tutorial code as tagged versions of a GitHub project. Each post will link to a specific tag with the completed source code for that section.
Here's the source for this post
Now, for large projects, you could argue that using clasp and a traditional unit testing library like Mocha or Jasmine is preferable, and you might be right. But, for the purposes of learning, I wanted to keep everything as 'pure' as I could, so all files and tests are written and tested in the online Apps Script editor.
It's a testing framework developed and maintained by the jQuery Foundation. It is used in jQuery development to make sure things don't self destruct as the library expands.
QUnit is written for Javascript. Because GAS is based on Javascript, there is a handy library which can be installed in your apps script project.
When testing on your local computer, tests are run by your machine. With Apps Script, everything is run on Google's servers. The QUnit library exposes the framework through a web app that fetches the framework code and executes it when the web app loads.
You can install QUnit for apps script by going to Resources > Libraries in the editor and searching for MxL38OxqIK-B73jyDTvCe-OBao7QLBR4j in the key field. Select v4 and save. Now the QUnit object is available in your project.
The QUnit library needs some configuration to work with an apps script project. There are three parts to the setup: 1) Declaring QUnit at the global scope, 2) defining tests, and 3) configuring the web application to run the tests.
Once the library is loaded, it needs to be instantiated at the global level to run. Create a new script file called config.gs to hold all of your QUnit code.
The first line should be:
QUnit.helpers(this);
This exposes all assertion methods in the QUnit library (ok, notEqual, expect, etc.) instead of a pared-down object.
Tests are defined within wrapper functions that can be passed into QUnit. This tests function will simply hold a list of tests to run when the web application is loaded. We won't be writing any tests in this post, but go ahead and add a wrapper to populate later.
function tests() {
  console = Logger; // Match JS
  // Test definitions will be added here
}
The QUnit.config() object declares settings for the web app, so it gets wrapped in the doGet() function. URL params are used to pass information from the app to the testing library with QUnit.urlParams().
QUnit also has a config object which can set default behaviors. You can see a full config object in the project source. For this simple setup, all I'm going to declare is the web app title. Add this to your config.gs file:
// Updated Feb 2020 to account for V8 runtime
function doGet( e ) {
  QUnit.urlParams( e.parameter );
  QUnit.config({
    title: "QUnit for GAS" // Sets the title of the test page.
  });
  QUnit.load( tests );
  return QUnit.getHtml();
};
Now you're ready to write some code. Running QUnit right now won't do anything; that will come in part 2.
Our second daughter turns four today.
tl;dr I have a Google Apps Script project getting a major overhaul. If you want to look at the code and contribute, it's on GitHub.
A couple years back, I published a little addon which would scan a Google Doc for linked YouTube videos and allow you to watch them in a popup or sidebar. I called it DocuTube and published without much more thought.
Since writing that app, I've learned a ton more and decided to give it a major overhaul. It was mediocre on the web store with some valid complaints about a lack of clarity and functionality.
As I added functions, I took my time to figure out better ways to structure my code. I followed Bruce McPherson's wonderful advice to add namespacing (isolating functions from one another) to help keep everything tidy. It bent my brain into pretzels, but it was so good to wrestle through. I now have an application that is more manageable and extensible because separate parts are sequestered from one another.
This month, I published an update which adds search, video previews, automatic embedding, and cleans up video playback.
This was the biggest addition (and the most frequent request) in the app listing. This version of DocuTube includes YouTube searching, which allows authors to find - and insert - the content they want in the document without managing several hundred tabs.
This is used in schools - I've had several GSuite admins contact me asking about the permissions and user data storage (minimal permissions, nothing is stored on users) before they pushed it out to their domains. Since students are (potentially) searching for videos, all searches in DocuTube are forced to safe videos only.
I will not be making that optional.
The YouTube API is quota-limited, which means every search counts against your total allowance for the day. To manage this, I cache results heavily. Each search returns 50 videos and each is stored in the cache as you page back and forth. Embeds are pulled from the cached data and used to populate the preview portion and do the embedding work. That said, my quota is limited to 10,000 units per day. As use increases, I can see the need to apply for more search credits very soon.
Since searching was included, I wanted to provide a way to check that the clicked video was the one the user actually wanted to embed.
Clicking on a video gives a playable iframe for a preview. The user can then choose how to attach the video to the document: copy the link to the clipboard (manual paste), insert the thumbnail, or add some custom text. All of this is done with the cached resource so the API isn't hit again to get the link or thumbnail.
Google Docs doesn't include an "embed" in the traditional sense. When I use the term embed in the context of DocuTube, I mean it handles the link for you. Inserting a video as a thumbnail grabs the title image, throws it in the document, and then adds the link. You can certainly do the same thing manually with several tabs and clicks.
I think "embed" is an okay term because it leads into the other function of DocuTube: watching videos.
This hasn't changed a whole lot. The major update from v0.7 (the current version) to v1.0 is that video watching is inclusive. Prior to 1.0, you needed to choose where to pull links from: the document or the comments.
With the 1.0 update, all videos linked in the document, regardless of location, are added to the Watch sidebar. This also removes the option of watching videos in a popup because the whole point of including the video as an embedded item is not leaving the document. If videos are loaded in a popup that takes up the entire editor, I've essentially kicked you out of the document.
If you want to give it a try, you can search from the Docs Addons menu or install it from the GSuite Marketplace. Issues can be sent to brian@ohheybrian.com or posted as an issue on the code itself.
I maintain a website at school where teachers can register for professional development. They can see all offerings and save sessions to their profile for reminders, etc.
The bulk of the data comes through Javascript, which populates the main page in a fancily-styled form. Each course is a form element with a unique ID that is passed to the backend when they register. The database stores the user ID and the course ID in a couple different places to manage all the reminders.
Because each course is a JSON object, you cannot just send a link for a specific course, which is a problem when you sometimes have dozens on the screen. That means they're either having to remember to search (which matches titles) or scroll until they find the right session. Coordinating a group of people becomes difficult.
Since each of my form elements has a specific ID, I decided to use a URL query to automatically select the correct element. I can also build a specific URL for each session and add it to the element when it's built on the page. Double win. Here's how it works.
You can call window.location.search to pull any query parameters from a URL (anything after a ? character). The browser also has a handy URLSearchParams interface that lets you use methods like .has() and .get() to make them more accessible in the code. With some Javascript, I can pull the keys from the URL when the page loads and then kick off another action.
I want to pass a specific course ID to the querystring for the user. So, my URL becomes:
https://mysite.com/?course=abc123
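URLSearchParams works the same outside the page, so the lookup can be sketched in isolation (abc123 is just the placeholder value from the URL above; in the browser the constructor would receive window.location.search instead of a literal string):

```javascript
// Parse just the query portion of the URL.
const urlParams = new URLSearchParams('?course=abc123');

const hasCourse = urlParams.has('course'); // true when the key is present
const courseId = urlParams.get('course');  // the value, 'abc123'
const missing = urlParams.get('session');  // keys that aren't present return null
```

hasCourse is true, courseId is 'abc123', and missing is null, which is why the .has() guard below is enough to decide whether to pre-select anything.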
The normal page load scripts didn't need to change much. I added a quick conditional to check if the URL included a query string with the .has() method I mentioned above. I can specify which key to look for, which makes extending this function easier in the future.
if (urlParams.has('course')) {
  // If the query matches the course being built, select the input
  if (urlParams.get('course') === course.key) {
    document.querySelector(`input[name='course'][value='${course.key}']`).checked = true;

    // The page can get long, so I focus the window on the specific course.
    window.location.hash = `#${course.key}`;

    // set the submit badge quantity
    loadSubmitBadge();
  }
}
If a query is passed in the URL, that course gets a checkbox pre-selected and focused on the screen for the user. They're free to either select more courses or just hit submit and be on their way.
The last step was to add a link icon to each course that could be copied and sent in an email. The course IDs are random and not easy to remember, so they needed to be provided to the users.
This was one line in the constructor:
div.querySelector('.course-share-link').href = `https://mysite.com/?course=${course.key}`;
I didn't want users jumping to the link when they clicked the button, so I needed one more little function to catch that click, prevent the redirect, and then copy the URL to the clipboard for easy pasting:
async function copyToClipboard(e) {
  e.preventDefault();
  try {
    await navigator.clipboard.writeText(e.target.parentNode.getAttribute('href'));
    alert('Workshop URL copied to clipboard');
  } catch (err) {
    alert('Failed to copy: ' + err);
  }
}
navigator.clipboard.writeText() is a newer asynchronous browser method, which is why I await it inside the try...catch block. The catch keeps raw browser errors from showing to the user.
This method should work for any HTML form as long as your form elements have unique IDs (or names) that can be targeted.
A few times a week, we sing hymns with the kids before bed. I get the guitar out and we practice their favorites. The lyrics are Bible-based truth and my kids have always done great with song as a memory tool. They can probably sing more of a hymn from memory than I can at this point, to be honest. (If you're interested, here's a great list to start with from The Gospel Coalition.)
Part of that time is "silly time" where they make up a situation and I think up a song on the spot. Most situations have to do with an unfortunate encounter between two incompatible animal species (the alligator climbed the tree and ate the raccoon, for instance). Other times, it's about the dog (Jo's Song) or the baby (I get back to my metal days and we all scream).
But there's also The Monkey Took His Banana to the Water, which is a regular in the rotation. It's got a Johnny Karate vibe and will stick in your ear for days.
195 BPM
Verse
Chorus
Repeat V1
Instant classic.
The featured image is my own from 2013. I only had one daughter at that time and my wife's cousins would come over to play guitar together. This living room has had a lot of music in it since we moved in.
Keynote has image masking built in. Masking allows you to more or less shape a picture in a frame. A simple example would be showing a portion of a photo in a circle rather than as a square.
This is easy to do and can help make a presentation look a little more polished.
A more advanced version is masking an image with text. Here's a great example of this technique:
(Fun side note: Disney was hit with a copyright suit for this string of promo posters.)
You can't do this in Keynote on iOS, though. It's not part of the text formatting settings you would need.
You can't mask text natively in Keynote. But, you can use an image of text whipped up in Pages (or similar) to create the same effect. Here's the final result:
First, make some big, bold text. I did this in Pages because the font choices are easier to use. When you have your word, take a screen shot and crop it down.
Add your base image and the text to the Keynote slide with the text on top. Then, select the text screenshot and use Instant Alpha in the format menu to remove the inside of the letters.
After removing your text, you should be able to see your image in the empty space. Crop the image down (double-tap) so it's the same size as the text layer.
Finally, it's a good idea to lock the text layer and the image together in a Group so they can be positioned as a single object. Tap your text and while you're holding, tap the image in the transparent area. This will select both objects and bring up a menu. Select Group to lock them together.
Hey presto, you now have a masked image.
Just don't land in hot water like Disney.
The original image is "Boy Singing on Microphone" by Jason Rosewell on Unsplash.
My Google account is managing a lot of document creation at school. I set up a lot of autoCrat projects to create docs alongside several utility scripts that pump out Docs on a regular basis.
The problem is, I don't want to be the owner of all of these documents. Using some Apps Script, I can set a trigger to automatically set a new document owner so the files are out of my account.
We use autoCrat for document creation, which has a consistent spreadsheet structure. When setting up the form to create documents, make sure you do the following:
There's a check in the loop that makes sure the emailCol actually gives an email address. If it's not a valid email address, the row will be skipped. This shouldn't cause some rows to complete and others to fail because the entire column is checked.
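The post doesn't show the check itself, so here's one hedged way it could look: a simple shape test on the value pulled from emailCol. The helper name and the regex are assumptions for illustration, not the actual script's pattern:

```javascript
// Hypothetical helper: true only for strings shaped like an email address.
// Rows whose emailCol value fails this check would be skipped in the loop.
function isValidEmail(value) {
  return typeof value === 'string' && /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value.trim());
}
```

With this in place, isValidEmail('teacher@school.org') is true while an empty cell or a stray non-email string is rejected before any ownership transfer is attempted.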
You can run the script manually from the script editor and it will loop through the sheet, setting the doc owner to the person who submitted the form. I set it to run daily with a trigger so I don't have to manage these long-running tasks.
This doesn't have to be used with autoCrat, either. All it needs is the ID of a document and the email address of the person to set as the owner. As long as you have that information, you can set this to run and help keep your Drive from becoming everyone's filing cabinet.
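The script itself lives in Apps Script, but the row-handling logic is easy to sketch. Here's the same idea in Python for illustration only; the row shape and the `set_owner` helper are stand-ins for the actual spreadsheet loop and DriveApp call:

```python
import re

# Loose sanity check, not a full RFC 5322 validator
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def transfer_owners(rows, set_owner):
    """Walk (doc_id, email) rows; skip rows whose email column doesn't
    hold a valid address, transfer ownership for the rest."""
    transferred, skipped = [], []
    for doc_id, email in rows:
        if not EMAIL_RE.match(email or ""):
            skipped.append(doc_id)  # invalid email: skip the row, don't fail the run
            continue
        set_owner(doc_id, email)    # hypothetical helper (a DriveApp call in Apps Script)
        transferred.append(doc_id)
    return transferred, skipped
```

The key design point is the `continue`: a bad row is recorded and passed over instead of throwing, so a single malformed address never aborts the daily run.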
It's begun.
This summer, YouTube deactivated brand accounts for GSuite Education domains. A brand account is essentially a shared account for teams. There's no single Google account associated and it can be used for branded content. We set up a branded page when our team first started so we could each upload to the channel without having a shared Google account.
Well, those days are gone. While I understand the reasoning (and I actually agree with the reasoning), Google really borked the process by not providing a migration strategy. There is no way to take videos associated with a brand channel and automatically associate them with another. We were able to get our channel activated, but we cannot easily move videos to a new, shared Google account for our team.
This is where youtube-dl comes into play. It's a command line utility that downloads YouTube videos based on specific video IDs, playlist IDs, or channel URLs. It's awesome.
I'm not the only one who has looked for easy ways to download entire channels. Ask Ubuntu has a great answer for how to download videos for an entire channel. It's a one-liner that just runs in the background, writing files to a folder. Adding the --write-description flag to the command also automatically creates a file for the video description to make the copy/paste easier later.
Downloading is easy, though no thanks to Google for that solution. Uploading is much harder.
YouTube does have an API that would allow me to write a little loop to upload videos in the background. But, it is tied to a quota for the user. The standard is 10,000 units/day, which sounds like a lot, until you look at the cost for each operation.
According to the quota calculator provided, each upload costs ~1600 units. That's an upload and description written via the API, which is bare minimum. We have 102 videos, which means it would take me 17 days to automate the uploads.
17 days.
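The back-of-the-envelope math, using the numbers from the quota calculator above:

```python
import math

QUOTA_PER_DAY = 10_000   # default YouTube Data API quota
COST_PER_UPLOAD = 1_600  # approximate units per upload + description write
VIDEOS = 102

uploads_per_day = QUOTA_PER_DAY // COST_PER_UPLOAD  # 6 uploads fit in a day
days_needed = math.ceil(VIDEOS / uploads_per_day)
print(days_needed)  # -> 17
```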
The alternative is to manually sit and upload each video to the new channel. It isn't 17 days, but it's a wasted day, for sure.
Here's where we gripe about Google killing things without providing viable alternatives for users.
But remember, we're not users to Google. We're products.
Products don't get a say in how we're used by the corporation.
See you in 17 days.
Some quick notes of my progress on updating DocuTube in the Docs Addon store:
With this update, you'll be able to determine what kind of embed you'd like to make. If you choose text, it defaults to the title of the video, but you can also type in custom text ("Watch this video") and it will be linked automatically.
One of the major problems I'm going to face is the YouTube Data API quota. Applications are restricted, by default, to 10,000 'units' of data used per day from the API. Each call to the API has a cost that needs to be managed.
Right now, each pagination step is using the YouTube API to pull data down, which uses some of the allotted data. To make sure the app doesn't run out of resources, I'm probably going to implement some kind of simple caching, either in the script itself or in the browser's sessionStorage or localStorage cache.
This came up because I hit my quota limit while testing. Now, I'm calling a lot of videos to make sure it's all working - way more than a normal user would during normal use (presumably). But it's still a concern because the quota can get used up very quickly if I don't include some kind of caching mechanism.
The featured image is a screenshot from DocuTube, featuring Paul Andersen of Bozemanscience on YouTube.
I have an aging MacBook Pro. It's older than my children and starting to show its age. Before I remove macOS and replace it with Linux, I'm trying to squeeze a couple more years out of her.
Insert comment about planned sunsets and hardware longevity.
Unfortunately, my model is one where the AMD GPU was poorly designed and will eventually fail. Luckily, the Mac has two GPUs on board, and you can disable the failing AMD chip and run on the included Intel chip instead. This is fine for me because I'm not doing any heavy lifting with that computer anymore.
There's already a site detailing the steps below, but as with all things on the Internet, I'm creating my own backup in case that site goes down.
Boot into single user mode with:
Command + S after the first boot chime.
mount root partition writeable
/sbin/mount -uw /
make a kext-backup directory
mkdir -p /System/Library/Extensions-off
only move ONE offending kext out of the way:
mv /System/Library/Extensions/AMDRadeonX3000.kext /System/Library/Extensions-off/
let the system update its kextcache:
touch /System/Library/Extensions/
sudo reboot
The system doesn't know how to manage power for the failed AMD chip. For that, you can either manually load the kext after boot:
sudo kextload /System/Library/Extensions-off/AMDRadeonX3000.kext
Or automate this with the following LoginHook:
sudo mkdir -p /Library/LoginHook
sudo nano /Library/LoginHook/LoadX3000.sh
with the following content:
#!/bin/bash
kextload /System/Library/Extensions-off/AMDRadeonX3000.kext
exit 0
then make it executable and active:
sudo chmod a+x /Library/LoginHook/LoadX3000.sh
sudo defaults write com.apple.loginwindow LoginHook /Library/LoginHook/LoadX3000.sh
In the Terminal (or in single-user mode if you can't boot to the desktop in Safe or Recovery modes), create a boot script:
sudo nano /force-iGPU-boot.sh
Enter the following:
#!/bin/sh
sudo nvram boot-args="-v"
sudo nvram fa4ce28d-b62f-4c99-9cc3-6815686e30f9:gpu-power-prefs=%01%00%00%00
exit 0
Now make that executable:
sudo chmod a+x /force-iGPU-boot.sh
The boot loop (or a boot hang) can be solved with a PRAM reset (Opt + Cmd + P + R, wait for a double chime), but this wipes the GPU modifications from memory. If you can't boot to the desktop, boot to single user mode:
<Cmd>+<s> after the first chime
After mounting your boot volume read-write, execute:
sh /force-iGPU-boot.sh
This will reset the GPU settings to ignore the failing AMD chip. Reboot with sudo reboot in the command line.
The featured image is Spiral stair (21st century) by alasdair massie, licensed under CC BY-NC-SA.
The more I use Tiny Tiny RSS (TTRSS) for keeping up with blogs, the more I find it can do.
For instance, I can publish specific feeds of curated items using tags, which is sweet. I actually used that method to gather standards-based grading articles for teachers at school to read as they have time.
But what about others? Well, TTRSS also allows you to re-publish articles to a general feed. They can come from anywhere, not just tagged or labeled items. Hooking this up to IFTTT, I can now reshare articles to Twitter - linked to the original source - without so much clicking.
If you want to subscribe directly to the posts I find extra interesting, that's your choice; you can do whatever you want.
We're using Canvas LMS in our district. There are things I appreciate and things I appreciate...less. One of the latter is when a main feature doesn't work.
Luckily, the Canvas API gives me some leeway to fix things that Instructure doesn't seem to know are broken.
I'm supporting teachers using Outcomes in their courses. Assignments can be tagged with relevant standards and recorded in a nice color-coded chart over time. I came across a big problem where students couldn't see their aggregated scores. There's no point in giving rubric feedback if students can't see those scores.
I thought it might be a fluke, so I tried reaching the student results through the API. It works as a teacher, but not as a student. There's a pipe broken somewhere on the Internet.
[File large bug report, crack knuckles.]
The easy fix would have been to have students hit the Canvas API directly, but student accounts came back with no data. So, I needed an intermediate, server-side account to do the work.
Earlier this semester, I set up an internal web app to manage reconciling grades. It was built in Flask, so it was easy to add an endpoint as an API. I've been really digging the canvasapi Python package from UCF because it's versatile and makes working with Canvas objects easy.
So, I ended up with:
Here's the commented code for a simple implementation using Flask and some Javascript.
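That code isn't reproduced here, but the server-side piece can be sketched with the standard library alone (my actual implementation uses Flask and canvasapi). The domain, token, and the exact response handling below are placeholder assumptions; the endpoint called is Canvas's outcome_results REST route:

```python
import json
import urllib.request

CANVAS_URL = "https://example.instructure.com"  # hypothetical Canvas domain
API_TOKEN = "server-side-account-token"         # token for the intermediate account

def fetch_outcome_results(course_id, user_id):
    """Call the Canvas outcome_results endpoint as the server-side account,
    scoped to a single student."""
    url = (f"{CANVAS_URL}/api/v1/courses/{course_id}/outcome_results"
           f"?user_ids[]={user_id}")
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {API_TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["outcome_results"]

def average_by_outcome(results):
    """Collapse raw rubric results into one average score per outcome id."""
    totals = {}
    for r in results:
        oid = r["links"]["learning_outcome"]
        totals.setdefault(oid, []).append(r["score"])
    return {oid: round(sum(s) / len(s), 2) for oid, s in totals.items()}
```

In the real app, a Flask route validates that the logged-in student is only requesting their own ID before calling these two functions and returning the aggregate as JSON.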
I ran the following example with a small group of teachers evaluating standards-based grading in their PLC. The teachers are on board (and all are attaching standards to work for feedback and assessment) but they needed to see a tangible example of why standards-based grades can help all students.
I'm going to steal Dan Meyer's Three Act structure with no shame. Each screenshot uses the same four (fictional) students.
Questions for discussion:
Questions for discussion:
Questions for discussion:
We're using weighted categories for our students. Looking only at classwork (Act 1) doesn't show teachers gaps in the student learning. You can certainly target students for intervention, but it is based only on task completion, not necessarily the content.
In Act 2, we have a little more to go on because each assignment is aligned to a specific standard or skill. The big takeaway is that the student with the lowest assignment score (row 3) is actually learning all of the standards. If we don't take standards into account, the learning gaps are hidden for the "responsible" student who turns their work in.
Act 3 brings it home for teachers. Where do the 1's and 0's come from? From information aggregated over time. In this view, the color coding (which we set up through Canvas) is a quick gauge of class comprehension of each standard. I can use this information to plan more effectively to help all students reach learning goals.
In the end, teachers wanted to know the student's calculated score. This table shows what would be on the student report card:
| Student | Classwork (20%) | Standards (80%) | Final score (%) |
|---|---|---|---|
| 1 | 85.2 | 33.3 | 43.7 |
| 2 | 70.4 | 50 | 54.1 |
| 3 | 62.7 | 100 | 92.5 |
| 4 | 81.5 | 66.7 | 69.7 |
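The final scores above come straight from the category weights. A quick sketch of the calculation (the table values are rounded, so recomputed results can differ by a tenth):

```python
WEIGHTS = {"classwork": 0.20, "standards": 0.80}

def final_score(classwork, standards):
    """Weighted final grade: 20% classwork, 80% standards."""
    return round(WEIGHTS["classwork"] * classwork
                 + WEIGHTS["standards"] * standards, 1)

print(final_score(62.7, 100))  # student 3 -> 92.5
```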
Standards-based grading can help root out lack of learning by moving the focus away from compliance. Assessing learning goals and making them the focus of feedback and reporting helps make that change a reality.
This post is specifically for browsers that use the Chromium engine (Chrome [duh], but also anything on this list) but most browsers have a similar feature, you'll just need to dig for it.
Setting a custom search keystroke is helpful because you can target your results without any hassle. For instance, when I type "dr" into my search bar before my terms, my browser only returns items in my Google Drive. It makes my searching faster because I don't have to open a site first.
In Chrome, go to your Settings. (A quick way to get there is to type chrome:settings into the address bar.)
In Settings, find Manage Search Engines. This is where you can specify some custom places to look.
Each search engine has to be configured with two pieces: the search url and the search term. Depending on the site, this can be obvious or really unclear. The best method I've found is to go to the site, do a search, and then do some copy/pasting.
Let's say you want a quick search for YouTube. Open YouTube and do a search for something. We'll use dogs because why not. When you search, you get this URL:
https://www.youtube.com/results?search_query=dogs
We're going to replace dogs with a special placeholder: %s. This tells the browser to substitute that term with what you typed.
https://www.youtube.com/results?search_query=%s
Click on Manage Search Engines and then click on Add. In the pop up, type in the name of your search engine and then a custom key (or keys) to trigger that search. Then, paste in your search address (see above).
Save the search term with Add. If you open a new tab and type yt, you'll see a prompt to search YouTube by pressing Tab.
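Under the hood, the browser just URL-encodes your query and swaps it into the %s placeholder. A quick sketch of that substitution (the function name is mine):

```python
from urllib.parse import quote_plus

def fill_template(template, query):
    """Mimic the browser's keyword-search substitution: URL-encode the
    query (spaces become '+') and drop it into the %s placeholder."""
    return template.replace("%s", quote_plus(query))

print(fill_template("https://www.youtube.com/results?search_query=%s",
                    "corgi puppies"))
# -> https://www.youtube.com/results?search_query=corgi+puppies
```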
Here are some of my most used searches so you don't have to go and make your own:
| Site | Trigger | Search URL |
|---|---|---|
| YouTube | yt | https://www.youtube.com/results?search_query=%s |
| Google Drive | dr | https://drive.google.com/drive/search?q=%s |
| Amazon | ama | https://www.amazon.com/s?k=%s |
| Unsplash | un | https://unsplash.com/search/%s |
| Flickr | fl | https://www.flickr.com/search/?text=%s |
I'm finishing up a website for a private project that lets people register and create posts as members. The client wanted a specific onboarding flow that moved users from account creation to a terms page before finishing on a brief site overview.
I started by hooking into registration_redirect and pushing the user to a simple terms page.
add_filter( 'registration_redirect', 'new_user_redirect' );
function new_user_redirect() {
return home_url('/complete-registration/');
}
This didn't work because the user was redirected to the terms page immediately after registration instead of on their first login. That meant my terms form (built with Gravity Forms) couldn't collect the user_id field because the user didn't actually exist yet.
To fix this, I hooked into the user_register action to capture the new user. There are several validation steps on the front and backend that I won't detail, but in the end, the user is written to the database and emailed a secure link to set their password. I created a new user meta key called is_first_login and set it to true. This was added when the user was written to the database.
update_user_meta( $user_id, 'is_first_login', true);
Now, I can check that key and send the user to the correct page when they log in.
add_action( 'login_redirect', 'redirect_user_on_login', 10, 3);
function redirect_user_on_login( $redirect, $request, $user ) {
if(get_user_meta($user->ID, 'is_first_login', true)) {
// Since it's the first login, set this to false
update_user_meta($user->ID, 'is_first_login', false);
// push to the terms page
return home_url('/complete-registration');
} else {
// Check the user role and redirect appropriately
return (is_array($user->roles) && in_array('administrator', $user->roles)) ? admin_url() : site_url();
}
}
If it is a new user, they are shown the terms page. Accepting the terms allows for accountability. Gravity Forms handles the next redirect when the user submits the terms form.
Every year, I'm building more support for standards-based grading within our district. Though Canvas isn't really set up for SBG, there is a way to make it work, and it works well. In short, every assignment, whether it's a test/quiz, classwork, or even a conversation with students, can be linked to a standard which is aggregated and reported over time.
Current status of Instructure's outcomes roadmap.
With some of the recent Canvas updates, this system is eroding and Instructure is missing out on a huge opportunity to truly change the way we approach grading. This post is a breakdown of the changes and what could be done to fix them.
When Outcomes are attached to assignments, reporting is...okay. The Learning Mastery Gradebook view (toggled in the Course settings under Feature Options) is a helpful color-coded chart of any assessed outcome. Hovering over the Outcome column title shows a context card with class averages and the outcome detail. But that's it.
If you switch to the Individual View in the gradebook, you can click a student name and then use the Learning Mastery tab to see all aligned Outcomes and their scores. This view gets closer to being helpful because it shows scores on assessments over time, which allows you to track progress (growth vs decline) in a chart.
To see reports organized by Outcome, you can go to Outcomes and then click on the title of the individual item. This shows the assignments it was assessed on and a list of students who were assessed. This list is not sortable and can be many, many pages long.
This is just to demonstrate that there is no consistency in where to find the information, which makes Outcomes less compelling to use.
Canvas is building out a new Quiz engine called Quizzes.Next (though it will soon become 'Quizzes' and the old style will be 'Classic Quizzing'). While there are certainly some functional improvements, Outcomes are being left behind.
One benefit of Quizzes.Next is that Outcomes can be added to individual questions. This could be done in Classic Quizzing by using Question Banks, but the banks were never really exposed to teachers and alignment is a bit of a chore to find. I have more detail on our instructional tech blog if you want to see the process.
There are three main issues:
Removing the step of creating course-level question banks and moving alignment into quiz creation arguably makes outcome alignment more accessible. But the fact that those alignments do not report back erodes the value of aligning outcomes at all.
While I'm not going to hold my breath, there are a few ways Canvas could make Outcomes more usable as they move forward with some of their development strategies.
If Canvas takes reporting seriously, they'll understand that it goes beyond assignment averages and test scores. Outcomes and their attachment to items in Canvas can really open doors to more accurate and equitable reporting. Unfortunately, most of the decisions coming from leadership are making this harder instead of easier.