My Reassessment Queue

We're almost at the end of the third quarter over here. Here's the current plot of the number of reassessments over time for this semester:

I'm energized though that the students have bought into the system, and that my improved workflow from last semester is making the process manageable. My pile of reassessment papers grows faster than I'd like, but I've also improved the physical process of managing the paperwork.

While I'm battling performance issues on the site now that there's a lot of data moving around on there, the thing I'm more interested in is improving participation. Who are the students that aren't reassessing? How do I get them involved? Why aren't they doing so?

There are lots of issues at play here. I'm loving how I've been experimenting a lot lately with new ways of assessing, structuring classes, rethinking the grade book, and just plain trying new activities out on students. I'll do a better job of sharing out in the weeks to come.

SBG and Leveling Up, Part 3: The Machine Thinks!

Read the first two posts in this series here:

SBG and Leveling Up, Part 1
SBG and Leveling Up, Part 2: Machine Learning

...or you can read this quick review of where I've been going with this:

  • When a student asks to be reassessed on a learning standard, the most important inputs that contribute to the student's new achievement level are the student's previously assessed level, the difficulty of a given reassessment question, and the nature of any errors made during the reassessment.
  • Machine learning offers a convenient way to find patterns in my grading that I might not otherwise notice.

Rather than design a flow chart that arbitrarily figures out the new grade given these inputs, my idea was to simply take different combinations of these inputs and use my experience to decide what new grade I would assign. Any patterns that exist there would then be found by the machine learning algorithm.

I trained the neural network methodically. These were the general parameters (a rough sketch of what the training code might look like follows the list):

  • I only did ten or twenty grades at any given time to avoid the effects of fatigue.
  • I graded at different times of day: in the morning, before and after lunch, in the afternoon, and some at night.
  • I spread this out over a few days to minimize the effects of any one particular day on the training.
  • When I noticed there weren't many grades at the upper end of the scale, I changed the program to generate instances of just those grades.
  • The permutation-fanatics among you might be interested in the fact that there are 5*3*2*2*2 = 120 possible numerical combinations. I ended up grading just over 200. Why not just grade every single possibility? Simple - I don't pretend that I'm really consistent when I'm doing this. That's part of the problem. I want the algorithm to figure out what, on average, I tend to do in a number of different situations.
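
To give a concrete sense of what this training might look like, here's a minimal sketch in JavaScript. It assumes a library like brain.js, since the actual library isn't named here, and it scales every input and output to the 0-1 range such a network expects; the field names and numbers are illustrative only.

var net = new brain.NeuralNetwork(); // assumes brain.js has been loaded on the page

// Each training example is one graded scenario: the inputs I considered and the
// new level I decided to assign, everything divided by 10 to land between 0 and 1.
var trainingData = [
  { input: { previous: 0.8, difficulty: 0.5, conceptual: 0, algebraic: 0, arithmetic: 0 },
    output: { newLevel: 0.9 } },
  { input: { previous: 0.9, difficulty: 0.5, conceptual: 1, algebraic: 0, arithmetic: 0 },
    output: { newLevel: 0.7 } }
  // ...just over 200 graded combinations in all...
];

net.train(trainingData);

// Ask the trained network what it thinks I would do with a scenario it hasn't seen.
var prediction = net.run({ previous: 0.6, difficulty: 0.3, conceptual: 0, algebraic: 0, arithmetic: 1 });
console.log(Math.round(prediction.newLevel * 10)); // a suggested new level on the 1-10 scale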

After training for a while, I was ready to have the network make some predictions. I made a little visualizer to help me see the results:

You can also see this in action by going to the CodePen, clicking on the 'Load Trained Data' button, and playing around with it yourself. There's no limit to the values in the form, so some crazy results can occur.

The thing that makes me happiest is that there's nothing surprising about the results.

  • Conceptual errors are the most important factor limiting students from moving from one level to the next. This makes sense: once a student has made a conceptual error, I generally don't let that student increase their proficiency level.
  • Students with low scores that ask for the highest difficulty problems probably shouldn't.
  • Students that have an 8 can get to a 9 by doing a middle difficulty level problem, but can't get to a 10 in one reassessment without doing the highest difficulty level problem. On the other hand, a student that is a 9 and makes a conceptual error on a middle difficulty problem is brought back to a 7.

When I shared this with students, the thing they seemed most interested in was using it to decide what sort of problem to ask for in a given reassessment. Some students with a 6 have come in asking for the simplest level question so they can be guaranteed a rise to a 7 if they answer correctly. A lot of level 8 students want to become a 10 in one go, but often make a conceptual error along the way and are limited to a 9. I clearly have the freedom to classify these different types of errors as I see fit when a student comes to meet with me. When I ask students what they think about having this tool available to them, the response is usually that it's a good way to be fair. I'm pretty happy about that.

I'll continue playing with this. It was an interesting way to analyze my thinking around something that I consider to still be pretty fuzzy, even this long after getting involved with SBG in my classes.

SBG and Leveling Up - Part 2: Machine Learning

In my 100-point scale series last June, I wrote about how our system does a pretty cruddy job of classifying students based on raw point percentages. In a later post in that series, I proposed that machine learning might serve as a way to make sense of our intuition around student achievement levels and help provide insight into refining a rubric to better reflect a student's ability.

In my last post, I wrote about my desire to become more methodical about my process of deciding how a student moves from one standard level to the next. I typically know what I'm looking for when I see it. Observing students and their skill levels relative to a given set of tasks is often what it takes to identify a student's level. Defining the characteristics of different levels is crucial to communicating those levels to students and parents, and to staying consistent across different groups. This is precisely what we intend to do when we define a rubric or grading scale.

I need help relating my observations of different factors to a numerical scale. I want students to know clearly what they might expect to get in a given session. I want them to understand my expectations of what is necessary to go from a level 6 to a level 8. I don't believe I have the ability to design a simple grid rubric that describes all of this to them though. I could try, sure, but why not use some computational thinking to do the pattern finding for me?

In my last post, I detailed some elements that I typically consider in assigning a level to a student: previously recorded level, question difficulty, number of conceptual errors, and numbers of algebraic and arithmetic errors. I had the goal of creating a system that lets me go through the following process (a rough sketch of the loop follows the list):

  • I am presented with a series of scenarios with different initial scores, arithmetic errors, conceptual errors, and so on.
  • I decide what new numerical level I think is appropriate given this information. I enter that into the system.
  • The system uses these examples to make predictions for what score it thinks I will give a different set of parameters. I can choose to agree, or assign a different level.
  • With sufficient training, the computer should be able to agree with my assessment a majority of the time.
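
To make the first step concrete, here's a rough sketch of the scenario-generating side of that loop. This is not the prototype's actual code; the field names and the particular sets of values are my own choices for illustration.

function pick(options) {
  return options[Math.floor(Math.random() * options.length)];
}

// Produce one random combination of the factors for me to grade.
function randomScenario() {
  return {
    previousLevel: pick([6, 7, 8, 9, 10]),  // previously recorded level
    difficulty: pick([1, 2, 3]),            // question difficulty
    conceptualErrors: pick([0, 1]),
    algebraicErrors: pick([0, 1]),
    arithmeticErrors: pick([0, 1])
  };
}

// Each time I enter the level I would assign, the pair becomes a training example.
var trainingExamples = [];
var scenario = randomScenario();
// ...after I type a new level into the interface...
// trainingExamples.push({ input: scenario, output: { newLevel: myDecision } });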

After a lot of trial and error, more learning about React, and figuring out how to use a different machine learning library than I used previously, I was able to piece together a working prototype.

You can play with my implementation yourself by visiting the CodePen that I used to write this. The first ten suggested scores are generated by increasing the input score by one, but the next ten use the neural network to generate the suggested scores.

In my next post in this series, I'll discuss the methodology I followed for training this neural network and how I've been sharing the results with my students.

Exploring Dan Meyer's Boat Dock with PearDeck

In PreCalculus, I tend to be application heavy whenever possible. This unit, which has focused on analytic trigonometry, has been pretty high on the abstraction ladder. I try to emphasize right triangle trigonometry in nearly everything we do so that students have a way in, but that's still pretty abstract. I decided it was time to do something more on the application side.

Enter Dan Meyer's Boat Dock, a makeover concept he put together a year ago on his blog.

I decided to put some of it into Pear Deck to allow for efficient collection of student responses. The start of my activity was the same as what Dan suggested in his blog post:

After collecting the data, I asked students to clarify what they meant by 'best' and 'worst'. Student comments were focused on safety, cost, and limiting the movement of the ramp.

I shared that the maximum safe angle for the ramp was 18˚, and then called upon PearDeck to use one of its best features to see what the class was thinking visually. I asked students to draw the best ramp.

After having them draw it, I had them calculate the length of the best ramp. This is where some of the best conflict arose. Not everyone responded, for a number of reasons, but the spread was pretty awesome in terms of stoking conversation. Check it out:

The source of some of the conflict was this commonly drawn triangle, which prompted lots of productive discussion.

When students built their safest ramp using the Boat Dock simulator, it prompted the modelling cycle to return to the start, which is always great to be able to do.

I then asked students to create a tool, using a spreadsheet, a program, or an algorithm by hand, for finding the safest ramp of least cost for each random length generated by the simulator. This open-ended request led to a lot of students nodding their heads about concepts learned in their programming classes being applied in a new context. It also led to a lot of confusion, but productive confusion.
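
For the curious, here's a minimal sketch of the kind of tool a student might build. It's my own illustration, not any student's actual work, and it assumes the quantity that varies is the vertical drop from the dock down to the boat; it returns the shortest (and therefore cheapest) ramp that stays at or under the 18˚ safe angle.

function shortestSafeRamp(verticalDrop) {
  var maxSafeAngle = 18 * Math.PI / 180;        // 18 degrees, converted to radians
  // Right triangle trig: drop = rampLength * sin(angle), so the shortest safe
  // ramp is the one that uses the maximum safe angle exactly.
  return verticalDrop / Math.sin(maxSafeAngle);
}

console.log(shortestSafeRamp(3).toFixed(2)); // a drop of 3 units needs a ramp about 9.71 units long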

This was a lot of fun - I need to do this more often. I say that a lot about things like this though, so I also hope I follow my own advice.

Standards Based Grading and Leveling Up

I've been really happy since joining the SBG fan club a few years ago.

As I've gained experience, I've been able to hone my definitions of what it means to be a six, eight, or ten. Much of what happens when students sign up to do a reassessment is based on applying my experience to evaluating individual students against these definitions. I give a student a problem or two, ask him or her to talk to me about it, and based on the overall interaction, I decide where that student is on the scale.

And yet, with all of that experience, I still sometimes fear that I might not be as consistent as I think I am. I've wondered whether my mood, my fatigue level, or the time of day affects my assessment of that level. From a more cynical perspective, I also really, really hope that past experiences with a given student, gender, nationality, and other characteristics don't enter into the process. I don't know how I would measure all of these effects to confirm they aren't significant, if they exist at all. I don't think I fully trust myself to be truly unbiased, however well intentioned I might try to be or think I am.

Before the winter break, I came up with a new way to look at the problem. If I can define what demonstrated characteristics should matter for assessing a student's level, and test myself to decide how I would respond to different arrangements of those characteristics, I might have a way to better define this for myself, and more importantly, communicate those to my students.

I determined the following to be the parameters I use to decide where a student is on my scale based on a given reassessment session (a small sketch of how one session might be recorded follows the list):

  1. A student's previously assessed level. This is an indicator of past performance. With measurement error and a whole host of other factors affecting the connection between this level and where a student actually is at any given time, I don't think this is necessarily the most important. It is, in reality, information that I use to decide what type of question to give a student, and as such, is usually my starting point.
  2. The difficulty of the question(s). A student that really struggled on the first assessment is not going to get a high level synthesis question. A student at the upper end of the scale is going to get a question that requires transfer and understanding. I think this is probably the most obvious out of the factors I'm listing here.
  3. Conceptual errors made by the student during the reassessment. In the context of the previous two, this is key in whether a student should (or should not) advance. Is a conceptual error in the context of basic skills the same as one of application of those skills? These apply differently at a level six versus a level eight. I know this effect when I see it and feel pretty confident in my ability to identify one or more of these errors.
  4. Arithmetic/Sign errors and Algebraic errors. I consider these separately when I look at a student's work. Using a calculator appropriately to check arithmetic is something students should be able to do. Deciding to do this when calculations don't make sense is a sign of a more skilled student in comparison to one that does not. I routinely identify these errors as a barrier to advancement, but not necessarily as a reason to decrease a student's level.
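
As a small illustration of how one reassessment session could be recorded using just these factors, here's a sketch; the field names and values are mine, not part of any actual system.

var reassessmentSession = {
  previousLevel: 8,       // 1. previously assessed level
  questionDifficulty: 2,  // 2. difficulty of the question(s), e.g. 1 = basic, 3 = synthesis/transfer
  conceptualErrors: 1,    // 3. conceptual errors observed during the session
  algebraicErrors: 0,     // 4. algebraic errors, considered separately from...
  arithmeticErrors: 1     //    ...arithmetic/sign errors
};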

There are, of course, other factors to consider. I decided to settle on the ones mentioned above for the next steps of my winter break project.

I'll share how I moved forward on this in my next post in the series.

After Individualized Learning, What Comes Next?

This was my classroom in the latter part of the last block of the day.

I should point out that I usually have students seated closer together in groups. Conversation happens more organically in that configuration. I gave a quiz where I didn't want to set hard time limits. As each student finished, I nudged them to work on their own on a PearDeck assignment.

This is what it looks like when everyone is working at their own pace. Each student with a single screen, each solving problems and answering questions.

I like that I can drift from student to student and either ask or answer questions when the time seems right. I can see each student's answers on the online teacher dashboard. I can decide which conversations I need to have. Students can also decide if they need to have conversations with me. I can involve myself in student learning with surgical precision.

Some claim this is the future of learning in schools.

For me, the silence in the room today was unsatisfying. No sharing of ideas. No excitement shared between friends. Nothing that might compel a student to contemplate the other living, breathing beings in the room.

I don't do this every day, so I know this isn't how it will always be. Whenever I do this type of lesson, I know that the students are better off when they get what they need. I know it is good for them. My thinking always goes to the next step. What will we do when we are back together in a big group, or at least in groups larger than one?

I asked the students the following question:

We have all worked independently today. What is the best way to use our time when we are back together?

Their answers gave me the direction I needed to think about the next steps:

  • Either going over answers so people who haven't put the answers in will know what to do and what the correct answer is.
  • Maybe review the questions people were confused with or got wrong a lot. This would help to review what we did.
  • Go over some of the answers together or answer some of the difficult problems.
  • I don't know.
  • To check where the majority of us got stuck and had trouble, and discuss how to figure out those problems. That, and to possibly discuss new concepts that we have yet to master.
  • Going through the questions that could be tricky and solve challenging problems together
  • I think it is good to have a mini lesson to quickly teach what we are learning and to go over, but also save time for students to work independently.
  • Give us a few examples at the beginning of class so we don't forget what we learned, and then learn new things.
  • To quickly go through and review all the topics we've learnt about polynomials.
  • Learn something new

Knowing what to go over is certainly where the online tools are helpful - they make incorrect answers or misconceptions stand out.

I know that the students prefer the social aspects of the classroom. Whatever our next step is, it should involve coming together and acknowledging and appreciating we are in a room of people learning together.

We need to make sure that we are social when being social is productive to learning.

We need to make sure students have time to learn and think on their own.

We need to make sure students can also learn what they need to know in the hands of an experienced guide.

All of these are crucial. Any one channel we use loses its effectiveness to learning when it becomes routine. I think this is especially the case when that routine involves staring at an entity that can't talk or laugh back.

Unit Circle Practice (#TeachersCoding)

I've always wanted a simple interface to help my students practice the unit circle. I've found Quizlet sites that help with this, as well as the occasional Khan Academy exercise that approaches what I want. The big issue I find with most of these is that the interface and the questions ask much more than what I'm looking for. I want a simple flashcard-like situation with no bells and whistles that gets my students the repetition and opportunity to think through the functions with feedback.

Over the winter break, I decided I needed to build the resource I had in mind. Here's the result:

The live site can be accessed here: http://codepen.io/emwdx/full/bgEJYK/

This is essentially a digital version of a set of flash cards, but they never stop. The angles rotate around the unit circle and the trigonometric function used is randomized. Since I am holding my PreCalculus students responsible for the reciprocal functions, but my IB students don't need them, I added the ability to flip those on and off.
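
Here's a rough sketch of how a card like this might be generated; it isn't the actual CodePen code, and the names are my own. The angle steps around the unit circle while the function is chosen at random, with the reciprocal functions switched on or off.

var standardAngles = [0, 30, 45, 60, 90, 120, 135, 150, 180,
                      210, 225, 240, 270, 300, 315, 330]; // degrees
var basicFunctions = ["sin", "cos", "tan"];
var reciprocalFunctions = ["csc", "sec", "cot"];

function nextCard(angleIndex, includeReciprocals) {
  var functions = includeReciprocals
    ? basicFunctions.concat(reciprocalFunctions)
    : basicFunctions;
  return {
    angle: standardAngles[angleIndex % standardAngles.length],
    func: functions[Math.floor(Math.random() * functions.length)]
  };
}

console.log(nextCard(3, true)); // e.g. { angle: 60, func: "sec" }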

I decided to do this on CodePen in case you want to look under the hood to see how it works. The editor view that contains my code is here. Let me know if you use it for something useful.

Notes to the Future with Google Scripts & PearDeck

I wrote previously about my use of PearDeck in an end of semester activity. One of the slides in this deck asked students to write themselves a note containing the things they would want to remember three weeks later at the beginning of semester two. With vacation now over, that day is now. I wrote a Google Apps Script that automatically sends each student the note they wrote. This allowed me to send these out without reading the notes in detail. I saw some of them, but made an effort not to see who was writing what.

The PearDeck output spreadsheet for this deck looks like this:

Column 3 of the spreadsheet contains the students' email addresses, which made it easy to match each address with the corresponding note to the future in column 4. By selecting 'Script Editor' from the Tools menu, you can create a script that processes this data.

You can delete the code that is there, and then paste in the code below to create the email script.

You'll need to save the script, and from the run menu, select 'sendEmails'. You'll need to give permission for this script to read the spreadsheet for this to proceed. The emails will all be sent from your Google email.

Code:

function sendEmails() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var startRow = 2;  // First row of data to process (row 1 holds the headers)
  var numRows = 12;  // Number of student rows to process

  // Get numRows rows and 5 columns of data, starting at row startRow, column 1.
  // This covers the students in this class and the columns I want to use for the email.
  var dataRange = sheet.getRange(startRow, 1, numRows, 5);
  var data = dataRange.getValues();  // Store the spreadsheet data in a 2D array

  for (var i = 0; i < data.length; i++) {  // For each row in the spreadsheet
    var row = data[i];

    var studentEmail = row[2];  // The student email is element 2, which is the third column

    var subject = "Note To The Future (AKA Now): " + studentEmail;  // Email subject

    // Greeting text, formatted with HTML tags, that appears before each student's note.
    var greeting = "Happy New Year!<br><br>" +
        "Before break, I asked you to write an email to yourself with things you would " +
        "want to remember at the beginning of the semester. Whatever you wrote in that " +
        "text box in Pear Deck is below for you to enjoy.<br><br>" +
        "I'm looking forward to seeing you Tuesday or Wednesday in class.<br><br>" +
        "Be well,<br>" +
        "EMW<br><br>";

    var message = greeting + row[3];  // Combine the greeting and the student's individual note (fourth column)

    // Send the email to the student. htmlBody means the email is formatted as HTML, not just text.
    MailApp.sendEmail({to: studentEmail, subject: subject, htmlBody: message});
  }
}

Holiday Travel and Exporting PearDeck Data to Desmos

One of the unique phenomena of international schools is the reality that, during a vacation, the school population disperses to locations across the world. I had students do an end of semester reflection through PearDeck, and one of the slides asked students to drag a dot to where they were going to spend the vacation.

PearDeck allowed me to see the individual classes and share these with the students one at a time. I wanted to create a composite of all of the classes together in Desmos to share upon our return to classes, which happens tomorrow. You can find the result of this effort below. This is the combined data for draggable slides from five different sessions of the same deck.

The process of creating this image was a bit of work to figure out, but in the end wasn't too hard to pull off. Here's how I did it.

The export function of a completed PearDeck session, among other things, gives the coordinates of each student's dragged dot in a Draggable slide. I could not use these coordinates as is, since graphing them on top of the map image in Desmos did not put the dots in the correct locations. I guessed that these coordinates represent percentages of the width and height of the image used for the Draggable background, since the images people upload are likely all different sizes. I did a brief search of the documentation and couldn't find official confirmation, but I'm fairly sure this is the case. An additional complication is that the origin is at the upper left hand corner, which is typical when working with pixels on a screen, but not for a Cartesian system like the one in Desmos.

This means that an exported data point located at 40, 70 is at 40% of the width of the image, and 70% of the height of the image, measured from the top left corner.

Luckily, Desmos makes it pretty easy to apply a transformation to the data to make it graph correctly. I took all of the data from the PearDeck export, pasted it into a spreadsheet class by class, and then pasted the aggregate data into a Desmos table. Desmos appears to have a 50 point limitation for pasting data this way, which is why the Desmos link below has two separate tables.
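
Here's a small sketch of that transformation as a helper function; it's my own illustration, not part of PearDeck or Desmos, and it assumes the map image sits in Desmos with its lower-left corner at the origin and a known width and height.

function pearDeckToDesmos(xPercent, yPercent, imageWidth, imageHeight) {
  return {
    x: (xPercent / 100) * imageWidth,      // the left edge stays at x = 0
    y: (1 - yPercent / 100) * imageHeight  // flip vertically so the origin moves to the bottom left
  };
}

console.log(pearDeckToDesmos(40, 70, 10, 5)); // { x: 4, y: 1.5 }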

Click here to see the graph and data on Desmos

If there's an easier way to do this, I'd love to hear your suggestions in the comments.

Building Up or Breaking Down: Creating States-n-Plates

In piecing together States-n-Plates, I wanted to learn more about React, a web application library created by Facebook. In the process, I found myself finding parallels in how I go about learning anything.

Before I describe the details, I'll give a (hopefully brief) description of what React does.

An HTML page normally consists of HTML tags that tell the browser what to display, with other rules that also describe how the HTML should look. A page created through React reframes that page as a series of components that each serve a different function. The image components need to have the ability to be dragged onto the targets. The targets need to be able to accept a dragged image, and need to be able to indicate whether the image dragged onto the target corresponds with the correct plate. The scoreboard needs to know how many states have been correctly matched at any given time. The components also have the ability to respond when a user clicks, types, or drags other components into them.

In a well designed React application, each component uses information from the component that contains it in order to behave (or, um, react) as the application is used. The pathways for how this information flows from one component to another are deliberately designed so that each component can act independently of the others.
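
As a toy illustration of that flow (this is not the actual States-n-Plates code, and every name here is hypothetical), a parent component can pass the current score down to a scoreboard as a prop, while a drop target reports each attempted match back up through a callback:

function Scoreboard(props) {
  // Knows only the number passed down from its parent.
  return <div className="scoreboard">Matched: {props.matchedCount} / 50</div>;
}

function PlateTarget(props) {
  // Accepts a dragged state name and reports the attempt to its parent.
  return (
    <div
      className="plate-target"
      onDragOver={function (e) { e.preventDefault(); }}
      onDrop={function (e) {
        var stateName = e.dataTransfer.getData("text/plain");
        props.onStateDropped(stateName, props.plateName);
      }}
    >
      {props.plateName}
    </div>
  );
}

class Game extends React.Component {
  constructor(props) {
    super(props);
    this.state = { matchedCount: 0 };
    this.handleDrop = this.handleDrop.bind(this);
  }
  handleDrop(stateName, plateName) {
    // The container decides whether the match is correct and updates the score.
    if (stateName === plateName) {
      this.setState(function (prev) { return { matchedCount: prev.matchedCount + 1 }; });
    }
  }
  render() {
    return (
      <div>
        <Scoreboard matchedCount={this.state.matchedCount} />
        <PlateTarget plateName="Ohio" onStateDropped={this.handleDrop} />
      </div>
    );
  }
}

(The draggable state-image component, which would call e.dataTransfer.setData when a drag starts, is omitted to keep the sketch short.)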

When I first started working on creating States-n-Plates, I started with a fully formed webpage that looked much like the final product above. I followed the React documentation, which suggests breaking the page down into components, one by one. I did this without really understanding in detail what I was doing, but I was able to get each component to reproduce the appearance of the original web page, which felt like real progress. Eventually, my progress was halted when I reached the limits of what I understood. I needed help.

It was at this point that I picked up a book on React and started working through the basics. I began to understand the guiding philosophies of React better - the design decisions, the behaviors that one component has in response to another, and how to think through an application the React way. This was where it was helpful to read the perspectives of people much more experienced than me - I understood the vocabulary they used and could make the connections I needed to make progress.

With some of the basics figured out, I rethought the application from scratch. Rather than starting with the webpage as a whole, I started creating components and making sure each one worked as expected before moving on.

By the end, I felt comfortable thinking about my application both from a bird's eye view and at the level of an individual component. I needed the experience of breaking the idea down into individual pieces and seeing how they interacted with each other to produce the whole. I needed to take time seeing what rules guided the function of one component in order to understand the whole. If I had started by reading the documentation as step one, I would not have had the context that the big picture view of the application gave me when I eventually did read it. Both views were important, and neither view was sufficient on its own to lead to full understanding or transfer.

We need to give our students opportunities to have both views of the content we teach. Insisting that student mastery of the basics is a necessary gatekeeper to higher levels of thought misses opportunities to understand the context of that basic knowledge. Student exploration of concepts through Desmos or Geogebra or problem solving is a great way to engage with the standards of mathematical practice, but without discussion, review of underlying concepts, or (gasp!) direct instruction where needed, opportunities for growth might be limited.

Let's make sure, as a team, that we are attacking this problem from both ends.