Thursday, May 7, 2015

God Had It Easy

God had it easy compared to me.

In the movie "Oh, God!" John Denver could not believe the unprepossessing George Burns was God, but at least he believed God existed.

I do not have that edge: no one believes Algebra failure rates can be reduced from 60% to 10%.

God did not win John over by levitating a sugar bowl in the diner, but He got him later when they were driving in the car and He made it rain. Inside the car.

All I can do is quote letters like this (with Margaret's kind permission):
Dear Ken, 
Back in the '90s I used your Homework Tutor program successfully with my Algebra I students who were behind. I was excited about the efficient progress they made and how quickly they got to the same level as my better students after using the program.
Now I have some new teachers in my school who need help with their poor algebra students. Do you know where I could purchase your software? 
Of all of the software I have used in the evolution of school math and computers, your Algebra I Homework Tutor was the most elegant and practical. 
Sincerely,
Margaret Lekse
Roundup High School, MT
Nice, but no rain. Or so I thought. Maybe you missed it, too.

When I reached out to Margaret (I was already resurrecting the app) and asked her to say more about her experience, she made clearer what her letter said: a half-dozen kids had fallen so far behind in Algebra that they could no longer follow her class presentations. She put all the kids on my software so the strugglers would not be embarrassed, then each kid worked at their own level. In one month the strugglers got up to speed, and they put the software aside until new material proved challenging.

Wow. Same (excellent, btw) teacher, same content, same kids. They fell behind under the traditional chalk talk/homework approach, then overtook the class when left to their own devices on my software.

But still, no one believes me, even long enough to let me show them the software. In this case my audience has definite information that what I am saying is impossible: we already tried software -- AlgeBlaster, Algebra Solved!, Cognitive Tutor, MyMathLab, MyMathLabPlus, ALEKS, Khan Academy, Mathspace, LearnBop -- and the Algebra failure rate is unchanged.

So what makes my software different? We will look at the details next, but the bottom line is that it was designed for real, struggling, unhappy, unenthusiastic students with shaky basic skills -- for the kids I worked with as a math teacher and private Algebra tutor.

Let's break that down by looking at my personal check list when assessing competitors:

Does the software check intermediate steps on multi-step problems?

I like to say that students do not do problems, they do steps of problems. Even if they do them in their head, they solve 3x-2 = 13 in two or more steps. Likewise, they do not get stuck on problems, they get stuck on steps of problems. Or make careless errors or reveal weak prerequisite skill on steps of problems.

If software is not checking steps, it cannot deeply engage the student. Sure, having the answer checked immediately instead of the next day is a big win, but it is not enough to create an immersive environment in which students are free to interact with Algebra the way a beginning programmer can interact with certain computer languages.

The ed theory buzzword for this is "active learning". With highly granular (step-by-step) feedback and hints and even videos, we enter a zone where even an indifferent student can apply native intelligence to sorting out the Algebra beast. They can try this, try that -- Algebra is talking to them saying "Fine" or "Not so much" or "Watch me", in detail.

Checking intermediate steps is hard because kids can type anything, and correct work can come in many forms -- the only way to do it is to create an artificial intelligence expert system that can parse and understand Algebra -- so most software checks only answers, which explains why it comes up short in research.
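To make "parse and understand" concrete, here is a minimal sketch of step checking -- an illustration only, in Python using the sympy library, not the engine behind HwT or TA: each line the student types is parsed and tested for equivalence against the original equation.

```python
# Minimal sketch of step checking (illustrative; assumes sympy).
from sympy import Eq, simplify, sympify

def parse_equation(text):
    """Parse 'lhs = rhs' into a sympy Eq."""
    lhs, rhs = text.split('=')
    return Eq(sympify(lhs), sympify(rhs))

def step_is_equivalent(original, step):
    """Accept a step if it preserves the solution set: balance both
    equations to one side and see whether the residuals differ only
    by a nonzero constant factor."""
    r1 = original.lhs - original.rhs
    r2 = step.lhs - step.rhs
    ratio = simplify(r1 / r2)
    return ratio.is_constant() and ratio != 0

problem = parse_equation('3*x - 2 = 13')
for attempt in ['3*x = 15', 'x = 5', '3*x = 11']:
    ok = step_is_equivalent(problem, parse_equation(attempt))
    print(attempt, '->', 'OK' if ok else 'check that step')
```

A real engine has to do far more -- accept unsimplified forms, spot which step the error entered on, cope with ill-formed input -- but this is the kernel of the idea.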

I was a private tutor, and I worked problems in the classroom with the students. Step by step. I knew that students can have trouble with any step, but most often the first step is where the new topic has to be applied.

Looking back at 3x-2=13, that would come after students could handle 3x=15 and x-2 = 13. So the first step presents the new challenge: which of the two familiar balancing transformations do I apply first?

Where is the vast majority of Algebra tutorial software while the student is contemplating that new concern? Nowhere to be found: "Come see me when you have the answer." followed by "Sorry, wrong."

Note that this is not so bad for the strong student, who then gamely rechecks all their steps looking for their mistake and usually finds it without much trouble. Great, but those are not the students failing Algebra.

Does the student generate answers, or is it multiple-choice?

Multiple choice just does not work. My French teacher liked to say (in French) that there is all the difference in the world between recognition and recall. She said this after we all came up empty on a bit of vocabulary but as soon as she started the word we all finished it with her.

Think about it. Multiple choice always provides the answer! It tries to challenge us by providing three or four other choices, but it still provides the answer. As for the several other choices, well, students have twenty ways to go wrong in Algebra. Providing just three silently corrects seventeen of those mistakes.

By contrast, let students enter their own answers and ironically we help them learn by making success harder. That blank piece of paper, be it of pixels or tree, forces them to generate the mathematics. Yes, help is available, but there again they must do the heavy lifting of asking for and analyzing the help.

The current buzzword for this is "active learning", but I prefer Confucius's "I do and I understand".

Whatever the term, multiple-choice makes this impossible.

OK, the student supplies all work. How? By typing (x^2 -1)/(x+1)?

My target students are strugglers. They will not get far trying to type or read stuff like
(x^2 -1)/(x+1), known as "ASCII math". 

And if the software can render math nicely (as TeX would), does it include a slick math editor? Or does it have a graphical keypad with icons for different operations, forcing strugglers to build equations click by click?

A slick editor is hard, btw, which is one reason software does multiple choice.
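The rendering half, at least, is mechanical. Here is a tiny sketch (Python with sympy, purely illustrative, not a description of any product) of turning typed ASCII math into TeX for proper two-dimensional display; the hard part, a forgiving editor, is not shown:

```python
# Sketch: ASCII math in, TeX out (illustrative; assumes sympy).
from sympy import latex, sympify

typed = '(x^2 - 1)/(x + 1)'                 # what a student might type
expr = sympify(typed.replace('^', '**'))    # sympy wants ** for powers
print(latex(expr))                          # \frac{x^{2} - 1}{x + 1}
```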

Fail on either of these and the struggle with composing or reading math will prevent immersion in the math itself.

Friday, March 6, 2015

Algebra and Tom Sawyer's Fence

[Thanks to PBS for hosting this excerpt.]
We educators -- especially those math educators who more and more are apologizing for math and even suggesting we should not teach Algebra -- could learn something from Tom Sawyer. So could Mark Twain. Twain wrote the unforgettable passage in which Tom Sawyer coaxed other boys into paying him to do his fence-painting chore, then drew from it the wrong lesson on human motivation:
In order to make a man or a boy covet a thing, it is only necessary to make the thing difficult to attain.
True, the story begins with Tom trying to buy Jim's assistance to no avail. Later Tom refuses to let Ben paint the fence, and later still he has the whole neighborhood lined up to paint the fence, paying him for the privilege. So what makes me think Twain got it wrong?

Twain missed a critical link from his own tale, one germane to the importance of doubling down on Algebra even as we struggle to find a way for kids to master it: why did they want to paint the fence? Because Tom said, No? Um, no. Look closely at the seminal exchange and see if you can spot the true motivational engine at work. Beginning with Tom:
“What do you call work?” 
“Why, ain’t that work?” 
Tom resumed his whitewashing, and answered carelessly:
“Well, maybe it is, and maybe it ain’t. All I know, is, it suits Tom Sawyer.” 
“Oh come, now, you don’t mean to let on that you like it?” 
The brush continued to move. 
“Like it? Well, I don’t see why I oughtn’t to like it. Does a boy get a chance to whitewash a fence every day?” 
That put the thing in a new light. Ben stopped nibbling his apple. Tom swept his brush daintily back and forth – stepped back to note the effect – added a touch here and there – criticised the effect again – Ben watching every move and getting more and more interested, more and more absorbed. 
Presently he said:
“Say, Tom, let me whitewash a little.”
Tom has not said, No, but the bait has been taken. Now Tom will set the hook (and drive monetization) by saying No, but his success hung on first highlighting the intrinsic reward of work done well. Here is the key bit again:
Tom swept his brush daintily back and forth – stepped back to note the effect – added a touch here and there – criticised the effect again  – Ben watching every move and getting more and more interested, more and more absorbed. 
Twain's cynicism, I wager, would be lost on anyone in the painting craft.

"Of course," they would think. "There are a hundred details to get right, details one knows only after painting a thousand fences."

And Mark does seem to know that:
You see, Aunt Polly’s awful particular about this fence – right here on the street, you know – but if it was the back fence I wouldn’t mind and she wouldn’t. Yes, she’s awful particular about this fence; it’s got to be done very careful; I reckon there ain’t one boy in a thousand, maybe two thousand, that can do it the way it’s got to be done.
Thus Tom is saying No, but it is a reluctant No driven by concern for performance: this work is hard and it matters and I would love to say Yes but this has to be done right.

Tom did not turn fence-painting into a game, nor did he make fence-painting somehow relevant to the boys' larger lives. He did not reward them -- he charged them! -- and he did not say they would need fence-painting skills later in life. He simply made it challenging.

We all take pride in a job well done and thus are drawn to challenges at which we might succeed. Look at video games, the single-player kind. The worst thing you can say about a game is that it is too easy to beat. Graphics, music, sound effects, and all the gamification in the world cannot save a game that is too easy to beat.

Why is that? Because the key lure of single-player games is the endorphin rush of success after substantial struggle, and the more struggle the bigger the rush. (Research suggests the struggle also enhances retention.) There is nothing like mastering something hard, nothing like performing well, meeting an unbending standard. When I complete a level it is because I can perform at that level: one does not pass the Professional Driver's License exam on Gran Turismo by luck.

Tom understood how to make hard work attractive: magnify for your audience the satisfaction of excellence. But US educators today are apologizing for Algebra, the doorstep to one of our greatest cultural wins, mathematics. We strive to make this purest of sciences relevant, as if it were not. We create games that teach mathematics and from that kids learn that we do not think math is worth learning in its own right. We even argue that one should be allowed a college degree without passing Algebra.

That is a losing game. If we want kids to stop showing up for college without fundamental skills in the three Rs, we will need their help: their hard work, their satisfaction in mastering skills that do not need to justify their worth.

We are apologizing for learning being hard. Tom knew that that is half of learning's appeal.

Tuesday, January 27, 2015

What Made HomeworkTutor So Good?

The Background

The Tilton's Algebra web site (TA) is a reincarnation of a desktop application sold for Macs and PCs back in the 80s and 90s.

That application was called Algebra I HomeworkTutor. I am a programmer, not a marketer.

It was reasonably successful but I am no businessman, either. I tried to do it all without raising money, and this software is challenging (think years of development, not weeks). Periodically I went to work on other great projects which lasted years before I could hunker down for another push.

The most recent push has lasted fifteen months. (This app is intense.)

The Question

While working on the new version, several times former users of HwT tracked me down (with no little effort) to find out if the software was still available, or in one case to ask me to give a talk on the product. The message was consistent: there was a lot of Algebra software out there but nothing like HwT.

I was asked recently to document as best I could why HwT was so powerful, even though it had none of the new features of TA such as embedded video, an on-line forum, and a "levelling up" process to ensure mastery as well as draw students in.

The Answer

Here is what I have learned, for what it is worth, from anecdotal feedback, occasional published research, and a small amount of experience myself observing students using my software and other tutoring systems. In order of increasing importance, here is my understanding of why HwT succeeded in the 90s to the extent it did and why there is still nothing like it in 2015.

Step-by-step error detection and forced correction of mistakes. 

With HwT, students entered each step of their solution, not just the answer, then checked their work for correctness. When correct, strugglers got encouragement if they were in doubt, greatly reducing anxiety.

If their work was wrong, it just said so, and students could not proceed to the next step without correcting the errant one. This forced correction was transformative and universally popular with teachers. Without this, struggling, discouraged students just plowed through worksheets making mistakes interested only in producing something to turn in so they did not get a zero, the ball now in the teacher's court to correct all the papers and try to reteach the material.
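Here, as a toy illustration only (exact string matching standing in for the real equivalence check, which takes an expert system and accepts many correct forms), is the shape of that forced-correction loop:

```python
# Toy forced-correction loop (illustrative; string matching stands in
# for real step checking): no advancing past an errant step.
def drill(expected_steps):
    done = []
    while len(done) < len(expected_steps):
        attempt = input(f'Step {len(done) + 1}: ').replace(' ', '')
        if attempt == expected_steps[len(done)]:
            done.append(attempt)                  # correct: advance
        else:
            print('Incorrect. Fix this step to continue.')
    print('Solved:', ' -> '.join(done))

# e.g. solving 3x - 2 = 13 in two steps:
drill(['3x=15', 'x=5'])
```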

Student engagement.

When mistakes were made, it was up to the student to fix the mistake. Hints and solutions of examples were available, but they had to ask for them and understand them. Importantly, the hints were just that; with most software so-called "hints" actually tell the student the next step.

Furthermore, HwT simply waited until they fixed their mistake, allowing unlimited retries.
Most software stops the process after a certain number of failed efforts, presumably to avoid discouraging the student. HwT trusted the learner to decide for themselves when to ask the teacher for help, and anyway, research shows the struggle is important to the permanence of learning once the student has their "Aha!" moment.

Limited help--but not too little.

The software does offer subtle hints and solved examples for students to learn from, but these require the student to dig into their memory of the teacher's presentation for any benefit to be had. The student struggle is guided, but in the end they have to assemble the solution. I have recently seen research on the value of frequent quizzes: apparently the act of pulling content from memory strengthens the command of that content.

Student control.

I learned this one from the first student I observed using my software on-site at a school. She had the difficulty level set to "Easy" and was doing problem after problem successfully. I suggested she try an "Average" problem and she nicely let me know that she would do a few more easy ones before advancing.

I realized her comfort level was simply different than mine. For the many students traumatized by math it seems better for them to practice "until they themselves knew they were proficient", as one teacher put it.

There is much talk these days of data mining and adaptive learning and software customizing the learning experience automatically. I would be curious if the implicit loss of student agency reduces engagement, effort, and finally results.

Quantity of practice. 

While "time on task" (ToT) is being challenged as necessarily correlated with greater learning (Kohn being a good example), even Kohn acknowledges that if the student engagement is there then the learning does follow.

I am speculating here, but I suspect students using HwT did more problems, achieving a greater quantity of practice as well as the quality discussed above. For one thing, the problems just pop up ready to be worked -- no copying them out of the book onto paper.

And now with TA and its so-called "missions" (summative tests) generated at random, students can do problem after problem trying to pass a mission, much as they play video games for days trying to get past a tough level. The old line about "I do and I understand" is as true as it is old, so I think increased time "on task" is vital.

Speaking of which, the DragonBox Algebra people reported some fascinating numbers from their "challenges" in which thousands of students did tens of thousands of problems. They said something like 93% of the students completed the game, but some took ten times as long and did six times as many problems as the fastest.

Of course for that we need software generating problems and evaluating student work or the load on teachers would be untenable.
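To be concrete about "generating problems", here is a toy generator (hypothetical difficulty levels, not the TA scheme): pick the answer first, then build an equation around it, so the checker always knows the target.

```python
# Toy problem generator (illustrative; difficulty levels hypothetical).
import random

def make_problem(difficulty='easy'):
    x = random.randint(-9, 9)            # choose the answer first
    a = random.randint(2, 9)             # coefficient, never 0 or 1
    b = random.randint(-9, 9)
    if difficulty == 'easy':
        return f'{a}x = {a * x}', x      # e.g. 3x = 15
    sign = '+' if b >= 0 else '-'
    return f'{a}x {sign} {abs(b)} = {a * x + b}', x   # e.g. 3x - 2 = 13

print(make_problem('average'))           # ('7x + 4 = -17', -3), say
```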

Is the anxiety eliminated?

I was not a math major in college, so when I decided to move from elementary education to math I had to take a few courses at a local college. One course was differential equations, and I distinctly remember being halfway down a page on one problem, doing calculus as a shaky prerequisite skill, and feeling terribly uneasy about the whole thing.

It turned out each time that I was doing fine, but there it was: without any feedback on each step, and with a lack of confidence in my calculus, I experienced math anxiety for myself.

So I am curious how students will react to step-by-step correction: is knowing they have made a mistake OK, as long as they know? Or will they still report anxiety? Also, how much does it help to have the software say (in training mode) "OK so far."? As a private tutor, many a time I saw clients do a step correctly but then look at me in doubt. That was the anxiety I experienced doing differential equations.

I think some students will still be upset when they get mistakes, so the cure may not be perfect, but I will be curious to see if the reports are of anxiety or frustration. My hope is that getting the anxiety out of the way will draw out more perseverance and ameliorate even the frustration.

Two Sigma Problem?

Bloom (1984) identified a so-called Two Sigma Problem: how do we come up with an instructional method as effective as a combination of mastery-based progress and good individual tutoring, without hiring a tutor for every math student?

I have mentioned that I have done scant observation of my software in the field. This lack of field-testing was possible because I came at the design from the other direction and simply did my best to recreate in software the experience I provided as a private tutor.

One teacher reported that she put stragglers who had fallen off the pace on HwT, and after a number of weeks they actually caught up with and rejoined the mainstream. Large-scale tests of Cognitive Tutor and Khan Academy have failed to demonstrate much benefit at all; HwT, by contrast, helped failing students catch up.

It would be interesting to see if students report any sense of being tutored privately by the software.

Good tutoring

Van Lehn (2011) did not find the two sigma effect Bloom et al reported. Instead, he found effects of 0.76 for software tutors and 0.79 for human tutors. But Bloom explicitly claimed to have used "good human tutors", while Van Lehn documented a wide variation in the quality of the human tutors. For example, often the one-on-one sessions involved very little student contribution -- the tutoring was more of a one-on-one lecture.

My style even when lecturing is to constantly call on students for contributions to the work I am doing at the board, and my style as a tutor was to help students when they got stuck by asking them leading questions they actually had to answer, so that, if all went well, they themselves realized how they could get unstuck.

While a good human tutor will always be better than a good software tutor, embodying quality tutoring pedagogy in software makes it more reliable and more widely available. Over time it can be enhanced in the light of experience, capturing even better tutoring in the software asset.

Gaming, good and bad.

On one rare on-site visit I saw students also using Where In the World is Carmen Sandiego? Students had learned that if they asked for enough hints the software would just tell them the answer, so they did that until they ran out of hints. Then they would call out to the teacher to tell them the answer, so they could build their reservoir of hints back up.

One study I saw on Cognitive Tutor said students did not ask the software for help because CT deducted points for hints. Instead, students asked the teacher for help (who, again, provided it). CT also provides answers after three mistakes, while HwT just lets them flail.

One factor we see is that a teacher can inadvertently undercut the software. That happened as well in the infamous "Benny Incident", where the teacher neglected to keep an eye on a student's fascinating internal dialogue with a tutoring system (Erlwanger, 1973). The student was indeed bright, the questions were often multiple-choice, and the passing threshold was only 80%, so the student was able to sail through the automated instruction despite grievous deficits in understanding.

So that is the bad sense of gaming. On the good side, new for TA is a so-called "Mission Mode". Here there is no multiple-choice, the standard of success is higher than Benny encountered, the help non-existent, and the tolerance for error nil. But students can try a mission as often as they like, quit a mission going badly any time they like, and even when they fail they get a "Personal Best" certificate if they get that far. So like video game missions, the standard is unbending but the failures are simply stepping stones to eventual success, with progress drawing the student into more and more practice.
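For flavor, here is the shape of a mission as pseudocode made runnable (hypothetical names and shapes, not the TA implementation):

```python
# Sketch of Mission Mode as described above (hypothetical, not TA code):
# no help, no tolerance for error, unlimited fresh attempts, and a
# "Personal Best" even on a failed run.
import random

def run_mission(problems, solve_one, personal_best):
    cleared = 0
    for p in problems:
        if not solve_one(p):        # one mistake ends the mission...
            break
        cleared += 1
    if cleared == len(problems):
        return 'Mission passed!', cleared
    if cleared > personal_best:     # ...but failures are stepping stones
        return 'Personal Best certificate', cleared
    return 'Mission failed -- try again any time', cleared

# demo with a coin-flip "student":
print(run_mission(range(8), lambda p: random.random() < 0.85, personal_best=3))
```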

Mastery-based.

Last and, as advertised, most important: a lot of the above boils down to students proceeding at their own pace in what is commonly referred to as the mastery-based model. Bloom felt the mastery model was more important than the private tutoring, perhaps precisely because of the variability of tutor quality documented by Van Lehn. As the DragonBox Algebra results show, over 90% of students could master the game given enough time.

This in turn aligns with what most math educators believe: Algebra is not that hard. I would be interested in whether students who struggled with Algebra before using the software change their assessment of its difficulty afterward.

Summary

There is a lot of powerful Algebra software out there, but Algebra failure rates are as bad as ever. Two-year colleges are forced to offer multiple courses even in arithmetic, and the AMATYC has just recommended dropping the Algebra requirement for non-STEM majors, a step already taken by the California college system and elsewhere. So why is all that Algebra software not working, when HwT did back in the 90s?

One easy reason: the only other software I know of that checks intermediate steps is Mathspace. Without step-by-step feedback, most of the wins delineated above disappear.

Other than that, we have a good research question: which other elements of HwT were so instrumental in its success? Above are my guesses as to where the answer lies, but it is all anecdotal and seat-of-the-pants. More experience and data on how exactly students and teachers use the software are needed.

I suspect we will find the following:
  • Students report less anxiety.
  • Different students will use different kinds of help (trial and error, video, canned hints, solved examples, and the public forum).
  • Students will do many more problems.
  • Student performance will be better and more uniform, but with a normal distribution of how much practice is needed to achieve that performance.
  • Students will enjoy math in and of itself, as a puzzle.
  • Comparison with adaptive tools will show student agency is more important than precise, automatic throttling of content.

Saturday, November 1, 2014

Focus, people. Focus.

Just ran into Mike Antonucci blowing smoke over on EducationNext. I left a comment which I share here because I like the way I ended, emphasizing what goes wrong when analysis has a lousy focus.

It seems Mr. Antonucci was a navigator. One would hope he understood the value of hitting the target. Instead he seems to be no more than an education union gadfly. Education unions, that is the target? Has he looked at teacher salaries? Some power. How about how little support teachers get from management while trying to teach young people? That's the target, right?

I'd love to see a chart showing percentage word count specifically discussing learning in Mr. Antonucci's work.

Speaking of charts, if you want to know how Antonucci got on my bad side it was his chart on dropping teacher union membership. First, it pulled that classic "lying with statistics" move of having the base of the chart at 45%. Hey, I am a math teacher, I am counting that as two strikes. Second, the chart goes back to 1983, as does a chart I found on overall union membership. That chart shows about twice the drop as in teacher union membership. Strike three, have a seat.

But who cares? Tenure and teacher unions are probably on balance good ideas in small ways, but my interest is in improving schools because I love learning and hated all but a class here and a class there in school. And because so little cool learning goes on in schools. If the premise is that teacher job security is the problem -- see how dumb that sounds?

I guess Antonucci has plenty of company in Goals 2000, NCLB, CCSS, and Bill Gates in blaming teachers for kids graduating high school with lousy skills, but last I heard even Bill is close to giving up, having found something money cannot fix. Here's my comment:

Absent more information on how teacher unions are blocking reform (well, I should say "any information") this was a long read for nothing. 
Fun always to see charts with a baseline other than zero. Move that down from 45% then superimpose overall decline in union membership (much greater) and we discover the article has no point whatsoever.
Final note: unions "weathered the storm" of Goals 2000? But then along came the wonderful NCLB "reform"? Those are not reforms, those are government being government, thinking they can make something happen by passing a law mandating outcomes while offering nothing on the causation behind the existing outcomes. Oh, yeah, that will work. 
The presumption behind such laws is that teachers and schools are not really trying, so all we have to do is threaten them. These laws are not working because the teachers and schools are doing their very best every day to help kids learn.
Society, your schools are you.  Parents, your children's school performance is you. Children, what did you do today to learn something?
Bust all the unions you want, then go back to those three questions. They will be there when you get back.
Focus, people. Focus.

Thursday, October 30, 2014

Boot Camp Algebra?

Synopsis

Is Algebra a problem or an opportunity? 

The Algebra failure rates for high school and community college students are so high that many, including the pre-eminent two-year college math group AMATYC, are considering eliminating the requirement, with some substituting, e.g., the Carnegie Foundation's Statway/Quantway.

I raise the possibility that in fact Algebra presents not a problem but rather the perfect and even indispensable opportunity for us to turn American education around, a tightly defined battleground where we can decide once and for all whether American students can once again work hard, persevere, think rigorously, and work meticulously, not just in math, but in all intellectual endeavor academic or professional.

Continuing the unfortunate military analogy, Algebra can be an academic boot camp akin to the military's six to twelve week programs in which undisciplined, out of shape civilians quickly learn the discipline and basic skills required by the services, aided in their efforts by a well-constructed program delivered by dedicated professionals, and by a clear understanding up front that they are in for a challenging experience.

Where We Stand

A lot of folks want to drop the Algebra requirement for college. Why? Because so many cannot pass it. But is Algebra the canary in the coal mine? Algebra is not that hard. If students fail Algebra then we badly need to fix something, but that something is not Algebra. Note that I am worried about neither the canary nor Algebra: I am worried about the atmosphere being fatal to all learning.

Will dropping the Algebra requirement help? The skills required by Algebra -- learning a set of rules, knowing which rules can be applied when, then applying them accurately and meticulously -- are skills required in any job requiring a degree, STEM or not (see next), and many technical jobs not requiring a degree.

Why not train kids up on a generic meta-skill required for any job (or hobby) that is at all interesting?

(See Next)

My most recent computing gig in tall buildings was for a dental insurance provider. Claims adjudicators might have to cover three or more states, each with its own rules and regulations such as compliance windows (eg, claims might have to be settled within thirty days, twenty if delivered electronically). Within each state, different coverage plans were supported, each with different rules. A single plan would have one or two hundred rules covering different procedures and classes of patient and provider. A single rule might be expressed in a technical paragraph twenty to one hundred words long. Rules interacted, with one rule having precedence over another where both applied.

If someone cannot pass Algebra but we Oz-like hand them a degree, how well are they going to do at such a job?

Do not sell students short

Kids vastly prefer succeeding at academic challenges over getting free passes on them. What is hard for many is in fact succeeding, but if Algebra is easy (it is), why not use it as an opportunity to show them how to succeed with a logical set of rules before they tackle their intended profession? The profession itself will bring its own added complexity, such as the dental procedures above. Better to get the meta-skills of study, perseverance, and meticulousness out of the way in an isolated context whose content is already mastered -- arithmetic (I know; see next) -- so they can concentrate on the logical system. They can walk before they run.

Next. Yes, I know, one of the big problems is that they have not in fact mastered arithmetic. That too is easy and curable with a bit of work and concentration. See where this is going?

Algebra is perfect because it presents all our learning ills in a nutshell: lack of basic number skills, meta-lack of accepting ownership for learning, in turn reflected in the meta-lack of perseverance. Algebra is a subject challenging enough to require the perseverance and self-direction we wish to cultivate, and a subject which, when mastered, will give students pride and confidence to reach higher. Earlier I said it might be indispensable: what other subject presents such a pure training ground?

Who knows? Instead of failing Algebra, many may be encouraged to aim a little higher at more rewarding (financially and otherwise) STEM jobs. Dropping Algebra will inspire no one; it will merely reinforce for students that they just need to fail often enough and we educators will move the curve.

They do not get much accountability in high school, but they will respond well to accountability if we make clear to them that they are in college now and the rules are different. Isn't that prospect more exciting than continuing the accountability forgiveness by dropping Algebra?

The problem colleges and students have is not that they cannot teach/pass algebra. It is a meta-problem of not knowing how to work, of expecting passing grades to be handed out in return for a minimum of effort. Google "grade inflation" (OK, here it is and here is another) and look for the spike (in a chart on the first link) beginning in the mid-sixties. And here is an anecdotal but stunning example of how we got soft. The natural consequences of lowering the bar were well documented a decade later: A Nation at Risk.

The nice thing about trying against the apparent odds to succeed in teaching algebra: at least we will believe in what we are doing. No one will take satisfaction from abandoning Algebra, except those students who consider it an unreasonable imposition on them. But how are those people going to respond to the demands of their profession? Their resentment of reasonable demands is the natural outcome of decades of apologizing to students for asking them to work. We should jawbone them into understanding that the first thing a TYC or CC will do is disabuse them of attitudes that will make them workplace failures.

Random Musings

TYCs are in this mess because HSs lowered their standards. Dropping Algebra simply joins in on that party and passes the problem along to businesses. I understand the political pressure on TYC math departments to lower the bar, but will that achieve anything? Will TYC degrees become as meaningless as HS degrees, a mere certification of attendance?

If students sit back and think that just by showing up in class and doing assigned work (more or less well) they have done their part and deserve to pass Algebra -- well, that is a fatal flaw. No one can learn with that attitude. We need the student involved and active and owning the responsibility to meet an unbending standard.

The folks behind DragonBox Algebra came up with some great numbers with their huge “challenges”. Hundreds of thousands participated. 95% mastered the game, but some required six times more problems and ten times more time. Very encouraging, and one clear indicator of what we can fix about how we teach Algebra: employ mastery (aka competency) based learning.

Colleges have an opportunity -- one that high schools do not have -- to turn things around. They have:
  • a self-selected, more mature population;
  • who turned to college by choice to move up in the world;
  • who are paying good money to do so; and
  • who are no longer backed up by nagging affluent parents or the school boards beholden to those parents.

If we hold this promising audience accountable and make it possible for them to succeed with mastery-based learning facilitated by technology, students will be thrilled. Listen to the Statway/Quantway videos -- successful graduates have so much pride, and some even consider STEM fields! Imagine if they had succeeded with Algebra.

Kids are quite smart. They will admit they have gotten a free ride. Welcome them to college and say, yeah, college is the big leagues, are you ready to step up your game? Then provide support, and watch what happens. No one really enjoys a free ride, everyone enjoys honest achievement.

There is a bit of a win there in that employers expect to train new employees on their systems and processes anyway, so why not move the kids along and Oz-like give them the diplomas they need? Actually, as long as we cannot do any better, that is fine. The employers will work things out or let the employee go. They can even test as part of the hiring process to avoid the substantial cost of a failed hire.

But what if we can do better? What if we can take students who have been passed along by their high schools and teach them how to work, how to persevere, how to concentrate? How much better will they turn out, and how much better will we and they feel about the work we are doing?

An Exemplary Anecdote (and the martial theme continues)

I once took a crash course in kickboxing to get ready for a full-contact tournament. Why the crash course? I had been boxing exclusively. My boxing teacher was right, I would be fine just with my hands, but when I found out my bartender Carlos had had his own martial arts school back in Portugal I asked him to show me a few things.

Carlos turned out to be a wonderful teacher who threw himself into the task. Somewhere around our third session he had me hold a striking mitt head high while he demonstrated a spinning back kick, looking at me over his left shoulder before instantaneously spinning 360 degrees to the right and delivering a tremendous roundhouse thwack to the mitt with his back foot.

I pointed out to Carlos that the fight was in two weeks and I was never going to do something like that in the middle of a real fight, but he insisted we work on it a bit. I like to do what my teachers say, so I made a couple of awful attempts and that was it. For one, I did not have the flexibility to get my foot that high. For another, it had only been a few lessons, my basic kicking skills were not there. Sound familiar?

I was training quite a bit so had plenty of time to work on the kick, and I respected the challenge. I had to work out how to lean my body away so my leg could get that high, then I had to work out how to maintain my balance while making that contortion. To my delight and astonishment, after fifteen minutes of exploration I was actually cracking the heavy bag high and hard with single-motion spinning back kicks. It was a blast.

The next time we worked out I think I had forgotten about it, but sure enough Carlos took out the mitt and asked to see my spinning back kick. I nailed it pretty well and Carlos was impressed. Then it was he who pointed out that I would never throw that in my upcoming fight, asking if I knew why he had me learn it.

No, I said.

For your confidence, he explained. So you would know what you can do.

And he was right. I was now excited about the prospect of adding kicks to my repertoire. No, there was not time enough before the tournament to internalize the technique, but the future was wide open. By taking me beyond my comfort zone into a space I wrongly considered well beyond me, he had opened up a whole new frontier of martial skill for me.

All in the context of one solitary technique.

Summary

Can we do with Algebra what my martial arts teacher did with the spinning back kick? Note that I am not talking about making Algebra relevant, I am talking about Algebra as a pure exercise in which students will learn how to learn and discover that the seemingly impossible can with perseverance be performed more or less well.

When we do that, we will not need to “sell” Algebra as relevant. Students will enjoy doing Algebra and take pride in their newfound ability.

Everyone knows the importance of math. The problem is just that they cannot do it. If we can change that, students will feel pride and aim higher and we educators will be able to hold our heads high, having done our jobs. If we drop the Algebra requirement, do you have any idea how hard it will be to get it back in? Let’s just get to work on teaching Algebra better, including first making clear to students that learning Algebra, with our copious help, is their responsibility.

Is there another subject we could use for this intellectual boot camp? Perhaps. I gather Latin has quite an intricate grammar. But is some other subject also the doorstep to mathematics, science, technology and so many 21st century careers? Will acing this other boot camp also lift the aspirations of kids to more rewarding careers?

Again, students questioning Algebra do so because they are failing it. That is the real problem, and that is on what we should be working.

The Programme

Along the way above I touched on the elements of a successful TYC (or HS) Algebra programme in which the failure rates would be closer to 10% than 50%:
  • Attitude. An explicit discussion with students of their accountability, and of their responsibility for their own mastery. We will tell them what they need to do, and it is not all that hard, but they are the ones that have to do the things we recommend; beginning with...
  • ...number facts. Learn them cold.
  • Practice, practice, practice until mastery is achieved, and do not continue until mastery is achieved. Bloom identified this long ago as a big win for learning. The DragonBox experience makes clear different students will get there at different rates, but that a very high number will get there as long as they have an easy way to practice indefinitely with immediate feedback.
  • The prior element dictates that some form of automated practice and self-assessment be involved; a sketch of the idea follows this list.
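Here is that sketch: mastery-gated practice with a hypothetical criterion (advance only after five consecutive correct problems), purely to make the mechanism concrete, not a specification.

```python
# Sketch of mastery-gated practice (criterion hypothetical).
import random

def practice_until_mastery(make_problem, student_answers, needed=5):
    streak, worked = 0, 0
    while streak < needed:
        question, answer = make_problem()
        worked += 1
        streak = streak + 1 if student_answers(question) == answer else 0
    return worked    # how much practice this particular student needed

# demo: a fixed problem and a "student" who is right 80% of the time
count = practice_until_mastery(
    lambda: ('3x = 15', 5),
    lambda q: 5 if random.random() < 0.8 else None)
print('problems needed:', count)
```

Different students will return very different counts, which is exactly the DragonBox observation above.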

Apologia

  1. Is all the above self-serving because I am developing an algebra learning web site? Sure, but (a) algebra is not going anywhere, so I will do OK either way; and (b) I am not expecting anyone to accept me as an authority. Any credibility derives from the reader's own experience of the things I am merely reminding them of, if I am.
  2. Well, OK, here is my experience: I taught eighth grade science/math for four years, three in the inner city, one in a small working class town. I tutored Algebra privately. I sold a similar desktop app back in the 90s and learned a lot from the feedback. I have had TYC professors speak to me of the problems they face, including students who think they should pass because they attend class reliably. Another spoke wryly of dealing with discipline problems, and this was almost twenty-five years ago. 
  3. I have playfully teased folks on-line for considering dropping Algebra. That is just my style. I understand well (a) the tragedy of kids trying to better themselves by enrolling in a TYC only to come to grief in algebra class (with plenty of student loan debt to pay off) and (b) the dismay of college professors forced to teach eighth grade math. That’s what we need to fix.


Wednesday, October 1, 2014

One Half Is Not Two Quarters? Really, CCSS?

Clicked on a tweet hawking great lessons for the toughest parts of the math CCSS (as defined by some research or other).

There was nothing at my level of interest (Algebra) but I spotted a lesson on equivalent fractions and went for it because fraction skills matter big time in Algebra. OK, the lesson I spotted was for the third grade (remember that) but I guessed it would still get to the crux of fractions.

Not too far along I tripped over something. Looking down at the page, I thought I saw the assertion that 1/2 and 2/4 were not equivalent. In a lesson on equivalent fractions, it was quite exciting to discover 1/2 and 2/4 were not equivalent.

Ah, here it is; the section had a subtitle: "To find equivalent fractions, the size of the wholes must be the same." Note that this little paragraph pops out of nowhere in the middle of a lesson on 1/2 being equivalent to 2/4. If I got lost, a third grader would...?

Of course, the subtitle is wrong. The size of the whole has no bearing on the equivalence of the fractions. A fraction of a whole is not the fraction, it is the product of the fraction and the whole. Hey, let's use algebra:

Let us call two wholes x and y. If x > y, then (1/2)x > (2/4)y, but the fractions 1/2 and 2/4 are still equivalent. QED.
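Spelled out as a display (the same argument, with the middle step made explicit):

```latex
\frac{1}{2} = \frac{2}{4},
\qquad\text{and yet}\qquad
x > y \;\Longrightarrow\; \frac{1}{2}\,x > \frac{1}{2}\,y = \frac{2}{4}\,y .
```

The fractions are equal as numbers; the products differ because the wholes differ.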

Oh, wait, you want to call the compound product of a fraction and some whole "the fraction"? Let us convene a council of mathematicians to decide if... wait. We are springing this on third-graders right in the middle of a lesson on the equivalence of fractions?

One might want to follow the lesson on equivalence by having students contemplate why half of a dollar is less than 2/4 of a million dollars, just in case the equivalence lesson causes confusion by misapplication, but... is that a problem?

Every time I look at the Emperor's new CCSS clothes I see lessons more confusing than inspiring, and they are often downright incorrect if one wants to look at them rigorously.

So we are in for five years of refinement of these CCSS lessons to get them right. That is understandable -- it is hard to get anything new right on the first try -- but then should not CCSS be off somewhere in an incubator being refined and tested so it can eventually win on the merits?

CCSS principles may get an "A", but its implementation is still in the first grade and those mandating it now get an "F".

Tuesday, September 16, 2014

Bug Story

You want to know what a bug is like? Friends and family all marvel at how many years I have worked on my Algebra expert system, and I have heard the families of other developers marvel the same way about them. So here is the story of one bug report. Perhaps you will get a feel for what we do.

Notice how way leads on to way, the investigation starting with the obvious and then branching out as other things catch the eye. Apologies for the impressionistic stream-of-consciousness quality, but I could not both live this bug and eloquently record it. Free beers to whoever can name the two New England writers I just plagiarized.

The obvious start: Sarah, my ace QA and all-round general muse, reported Pivotal Tracker bug 78828414.

Her complaint is that the equation is obviously a contradiction, so the app is wrong when it says it is not (the red background and "Incorrect").


My first reaction was that the software should not have said "wrong", it should have said "you can do more work." But then I realized this was a mastery "Mission", and during Missions if one declares something to be the answer and it is mathematically consistent but not the final answer, it is marked wrong -- part of knowing math is knowing when one has reached the answer.

So Round #1 leads to Task #1: Instead of saying "Incorrect.", the app should say "That work is OK, but more work was needed to reach the final answer", or something. It can still mark the problem wrong, since it is a mission and we do not give second chances on missions, but it needs to be clearer that the work entered was not mathematically unsound.

I checked in the non-mission areas of the app and it did say "You can do more work."

Round #2 is between me and Sarah, with her assertion that -3t-60=-3t is obviously a contradiction. The problem I see is that -3(t+5)-45=-3t is "obviously a contradiction" to some people. What we need (and task #2 is to make explicit) is for the variable to be eliminated from the equation before pronouncing either contradiction or identity. Of course the variable remains if our result is "conditional".

Task #2 is to say not just "You can do more work", we have to explain about eliminating the variable.
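For the terminology, a hedged sketch (Python with sympy; my engine does not work this way, this just pins down the three classifications): balance the equation to one side; if the residual simplifies to zero it is an identity, if it simplifies to a nonzero constant it is a contradiction, and if the variable survives it is conditional.

```python
# Sketch of equation classification (illustrative; assumes sympy).
from sympy import simplify, sympify

def classify(eq_text):
    lhs, rhs = (sympify(s) for s in eq_text.split('='))
    residual = simplify(lhs - rhs)   # the elimination a student does by hand
    if residual == 0:
        return 'identity'            # true for every value of the variable
    if not residual.free_symbols:
        return 'contradiction'       # e.g. 10 = 0: true for no value
    return 'conditional'             # true only for particular values

for eq in ['-3*t - 60 = -3*t', '-3*(t + 5) - 45 = -3*t',
           '3*t = 15', '2*t = t + t']:
    print(eq, '->', classify(eq))
```

Note that sympy's simplify quietly performs the very variable elimination we want the student to do explicitly, which is the whole point of Task #2.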

So far so good, I just need to communicate better. Well, it is not "just". It matters. Over on Dan Meyer's blog, software that (allegedly) gives bad feedback is (un-)justifiably taking a beating. Precise feedback is deadly important, and I always fix these ambiguities as users make clear the ones I have missed.

Then the wheels came off.

Rather than mess with a mission, I just went to the freestyle section and typed in my own problem: 3t-10=3t-20. Next step 3t+10=3t was marked wrong. 10=0 was marked wrong. My software just cannot do this easier problem! (I love it when the impossible happens--it is actually a clue I use in the debugging.) So...

Task #3: Fix 3t-10=3t-20. [After fixing Task 12 I thought this would just work, but I found another problem: the "Contradiction" averral button generates its answer OK but with a structural excess that throws off the engine. After fixing that (Task 13), it just works.]

Task #13: have the contra and other classification buttons generate the right structure.

Plugging that problem into my batch tester used for debugging the maths engine, I used my text syntax to tell the engine that the problem answer should be "cntd:10=0" which means "the contradiction 10=0". Looking at the debug output, I see "SLVD:cntd:10=0". SLVD is used for straight equations, btw. So...

Task #4: What is with "SLVD:cntd:10=0"? Hopefully that is a feature, but even then it should be CNTD:SLVD:10=0. [Turns out: SLVD was a bug in how I specified the test; 10m to find.]

Anyway, back in the freestyle section when my engine rejected the correct steps I asked it to solve a similar problem. (The app could do their homework for them but will not.) But given a contradiction to emulate...it created a conditional instead. And the instructions showed it did not even try to create an exercise in conditionals, where sometimes a conditional is the result.

Task #5: If asked to solve a similar problem, do not vary the classification from the original. i.e., if asked for a problem similar to a contradiction, generate a contradiction. (Task 8 may fix this, but I doubt it.) [Right, new work was needed. Terribly hard-coded, but how many oddball problem types are there in the world? 20 minutes]

I had it solve the conditional anyway. In doing so, it did not use the available screen space; it started scrolling.

Task #6: Use available screen space in solved examples. [It has been a full day and dozens of lines of code. I'll make a PT story for this and the next.]

As it solved the problem and started scrolling, it did not automatically scroll down to show each new step. Not sure why, I have solved that before (pretty easy, actually).

Task #7: Make sure solved examples autoscroll to show each new step. [PT story]

When it got to the end it just said "Solved", it did not classify the solution. Perhaps this is because, looking back just now, I see it did not even create a problem with the instructions to "classify the result".

Task #8: when making a problem similar to an equation-classification problem, the new problem should have the same instructions. [Ah, it was not even trying, it was just trying to match the transformation at hand. That makes sense, but in this case was too myopic. Big overhaul, but just 20m to my surprise. What can I say? I write great code! Can I say that? NO! Sorry.]

Task #9: After Task #8, check that the tutor now classifies equations when solving similar examples. Looking at the solutions done by the engine in other sections, this should be OK. Task #8 may also fix Task #5, so we will do Task 8 first. [yep, it Just Worked(tm).]

So finally I let the test harness run on the broken problem and what do I see? The engine does not even come up with an answer. This can happen. If the problem is outside the engine's skill set it will come to a point where (a) it cannot think of anything more to do but (b) it knows it has not reached an answer -- the variable to solve for has not been isolated, for example. So it does not offer an answer at all. But then why is it telling anyone they are wrong?

Task #10: Why is the app saying wrong if it cannot solve the problem? (We should fix this first while it still cannot solve the problem so the conditions will be realistic.) [This was only because of the mistake I made setting up the test. In the actual case it was solving the problem. Pretty sure had it been unable to solve it would confess (I just worked on that code last week pursuant to the Meyer blog brouhaha).]

Are we done? I wish. I do not like testing (so thank God for Sarah) but I have enough experience to have some instinct for how things can go wrong.

On most problems there is just one way one can say one is done with a problem. One avers that an expression is in simplest form, factored, or solved. So for anti-click convenience I allow the user to hit the "End" key and then I treat that as the one averral possible.

In this case the student must choose from conditional, contradiction, or unconditional. Working on the problem the engine could handle, when I got to the answer and was about to click "contradiction"  I had a thought: What will happen if I hit the "end" key? Crash? Proper message?

Silence. So...

Task #11: Tell user to pick an option (click or tab/enter) if there is more than one. Do *not* silently ignore them.
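A sketch of the rule (hypothetical handler, not the TA code):

```python
# Sketch of Task #11 (hypothetical handler, not the TA code):
# the End key stands in for an averral only when exactly one is possible.
def on_end_key(averral_options):
    if len(averral_options) == 1:
        return averral_options[0]        # unambiguous: accept it
    # more than one possible ending: never silently ignore the keypress
    print('Please choose one of:', ', '.join(averral_options))
    return None

print(on_end_key(['solved']))
print(on_end_key(['conditional', 'contradiction', 'identity']))
```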

Ok, now let's see what else comes up, given the tendency of issues to exponentially explode. Everything that follows the line in the sand arose while wading through the above.

---------------------- line in the sand -----------------------------------

The test driver used the leading cntd to generate the contradiction but did not strip it off to generate the operand. Luckily an infinite loop did not arise. Fixing that, we see a new problem: cntd:-10=-20 is not recognized as equivalent to cntd:10=0.

Task #12: Why is cntd:-10=-20 not recognized as equivalent to cntd:10=0? [It was just allowing for left=left and right=right, or left=right and right=left. Made this smarter. 30m, after a nap.]
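The smarter check amounts to comparing the statements after balancing; a sketch (sympy again, purely illustrative of the idea, not the engine's tree matching):

```python
# Sketch of the smarter Task #12 check (illustrative; assumes sympy):
# two contradictions match if they balance to the same residual
# (a fuller check would also allow a nonzero constant factor).
from sympy import simplify, sympify

def residual(eq_text):
    lhs, rhs = (sympify(s) for s in eq_text.split('='))
    return simplify(lhs - rhs)

print(residual('-10 = -20'))   # 10
print(residual('10 = 0'))      # 10 -> the same contradiction
```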

Task #14: after entering the statement, answers cntd and idnt are available, but not cond. I suspect it is being too helpful by noting that the variable has not yet been isolated. ... [OK, kinda. It does not want them classifying the equation until it has been solved! Exactly what Sarah is trying to get away with, but with a contradiction!! So I will just always have the conditional choice enabled and then deal with premature classification -- the original report!! I love this game. 15 minutes.]

Task #15: Well it says do more work if we are at 3x-5=3x-5, but it does not if we get to 3x=3x. That is odd. Investigating.

Summary

The above is as much as I was able to document. The hand-to-hand fighting continued into the night and the next day and then with three known minor (they can wait) issues remaining I declared victory and deployed.

To the kids programming at home, here are your takeaways:
  • Never fix a bug. Understand how the situation arose and how things can be arranged such that they never happen again.
  • When the user reports a "bug" that turns out to be a feature you still get an RFE: do not confuse the user that way.
  • You know how programs fail. Outlier behavior. Hit the "End" key when three endings are possible. Afraid it might not work? I know. :)
  • Never shrug off a small misbehavior. Fix it. You might be surprised what you find.
  • Never leave the unexplained unexplained. You will almost certainly be surprised at the explanation.
  • Don't even tolerate misbehavior in diagnostic tools. Run them to ground. Run everything to ground.
If that sounds like work (a) it is and (b) go ahead, ignore me, then come back in five years and read this again when you have lived the hell of software developed outside those rules.

As for ed tech, hey, the above is why you do not have very much good ed tech. Ironically, the blogger attacked one of the best in the game. Even as he writes new attacks, they are addressing all his concerns.