Tuesday, June 21, 2011

4th TEDxSFED Talk: Dragons

Young makers presenting their inventions with poise and grace. A table-top habitat for small pets and an animatronic fire-breathing dragon... 'cause, you know, who DOESN'T want an animatronic fire-breathing dragon?!?!?

Friday, June 3, 2011

First Post from TEDxSFED: The Freeze!

Using Student Performance Data to Evaluate Teachers

Two hundred teachers sat in the hotel’s conference room in Oakland, California, drinking coffee and getting ready for a two-day discussion on Effective Teaching.  By the end of the summit, our hope was to have some agreement about what effective teaching looked like, an idea of how the schools and the district could measure it, and a plan for how to highlight the teachers who are most effective and help those of us who are struggling to be more effective.

As we greeted one another and talked about the coming days, the Jazz band from one of our middle schools set up their instruments and began to play.  I turned to my colleague seated next to me and said, “This is what ‘using student performance data’ to evaluate teachers should look like.”



Imagine for a moment that Jazz band.  There they sat, performing.  None of the students was concerned about getting an “A.”  They were concerned about being perfect.  In math, English, social studies, or science, a student may be trying to get 90% of the questions on a test right, but not the band.  If a member of the band missed only one note in ten, that wouldn’t be an “A-”; it would be a catastrophe.  Imagine the hours of “homework” each child put into this class, preparing for this test; I suspect it was far more than many of them put into their other classes.  One of the secrets to this level of dedication was that the “homework” was actually called “practice” and that the “test” was a performance, in front of a real audience.

I think of this story as I read the report published by the Denver group of the Center for Teaching Quality’s New Millennium Initiative (NMI).  In the interests of full disclosure, I am a member of the NMI group based in the San Francisco Bay Area. Over the last year, while the Denver group grappled with teacher evaluation models, our group discussed what a better teacher induction and career advancement model might look like. 

The Denver report identified four key areas of work that the group believes will lead to a better teacher evaluation system:

  • Developing meaningful measures of student growth (including in non-tested areas) to comprise 50 percent of a teacher’s evaluation, as required by state law
  • Defining qualifications and training for evaluators
  • Determining how to account for school conditions and student factors in a teacher’s evaluation
  • Designing an evaluation system that informs both employment decisions and professional growth and learning


This post will focus on the first key area, that evidence of student growth should comprise 50% of a teacher’s evaluation.

Frankly, I worry about so much of a teacher’s evaluation being tied to student performance.  I worry because I fear that “evidence of student growth” too often is code for the single score on a fill-in-the-bubble test.  Over the last ten years, I’ve seen deep and rich curriculum in California become shallow and bland as schools lost their focus on student learning and obsessed over test scores.  I’ve seen teachers push research projects, student portfolios, and service learning to late May and June, after the high-stakes tests were over and done.  I’ve heard principals in many schools change their tunes from “We’ll never let the test dictate what we teach” in the 1990s to “We already know what material is really important; the state has put that material on the tests.”

So how do we navigate these treacherous waters?  I, like many of my colleagues at the Center for Teaching Quality, want teaching to evolve into a results-oriented profession.  We’ve adopted a new version of the old adage, “It’s not what you say; it’s how you say it.”  Instead, we say, “It’s not what you say; it’s what they hear,” meaning that it is less important what a teacher teaches and more important what a child learns.  More and more teachers are getting on board with this message.

In the June 2nd edition of the Public Agenda Alert, “Teachers and the Tests,” the authors note: “The conventional wisdom among many education reformers has been that teachers resist all kinds of evaluation, but in fact they're open to a number of ideas, according to the research we conducted with Learning Point Associates. Nearly all teachers (92 percent) rated the level of student interest and engagement as an excellent or good indicator of teacher effectiveness. Teachers also gave excellent or good ratings to how much their own students learn compared with other students (72 percent) as well as feedback from principals and administrators (70 percent).”

State departments of education should take a cue from the Oakland middle school Jazz band.  We can evaluate teachers based on what children learn, but those assessments have to be composed of meaningful work.  Imagine an alternate scenario, where the jazz band read a textbook about music theory and instructions about how to play an instrument, then gathered in the hotel’s conference room to “wow” us by filling in the correct bubbles about music theory and instrument play.  How many hours do you imagine the children would spend studying for that test?  How much would a test like that tell us about what the children are learning and how effective the music teacher is?  It’s a ridiculous scenario, for sure.  However, it is exactly the way we have been evaluating schools for the past decade and how New York and Los Angeles are currently ranking their teachers of English and math.

So how do we navigate these treacherous waters?  I think the Denver NMI group is taking the first steps; they are getting the voices of teachers into the debate.  My hope is that the education policy makers in Colorado are ready to listen.  Teachers are in favor of having our jobs evaluated, at least partly, on how well our children are doing.  I can imagine the Oakland music teacher being willing to let his evaluators listen to a performance in October, then another in May, and then stand behind the progress his students have made.

Other teachers, in other subject areas, could take the same stand.  While I still say, “NO” to having myself, my school, or my children judged based on how many bubbles they filled in correctly, I stand behind their research essays.  “Read one of their essays in October,” I told my administrator last year.  “Read another one in May, then you’ll know how well I’m teaching my kids.”  If I were an English teacher, I would say the same thing about my kids’ essays or poetry.  If I were a science teacher, I would say the same thing about my kids’ experiments and lab write-ups.  If I were a math teacher, I would say the same thing about my kids’ projects.

We really can use student-learning data to see how well students are learning and how well teachers are teaching.  The trick is to be more concerned with the quality of that data than with the ease of gathering it.