Undue focus on test scores (and grades) can lead to practices that actually inhibit student learning. This is not a new problem, but in the age of high-stakes testing it has been exacerbated. (I’m talking about the age of compliance agreements under IASA and the implementation of NCLB, Race to the Top, and ESSA.) However, there are ways to shift the focus of various groups away from test scores and toward learning. The “simple” solution is for educators to just use best practices, but obviously that needs some elaboration, so here I go.
Student Motivation to Learn and Formative Assessment
People do (or try to do) what they are motivated to do – this applies to any human activity, including student learning. Teaching practices can have considerable influence on students’ motivation to learn. My over-simplified solution to the problem of students being more concerned about test scores and grades than about learning is for teachers to follow effective and proven formative assessment practices. Formative assessment is not a tool. It is a multi-step instructional process. Refer to Lesson 2 for an introduction to formative assessment. One can also Google the names of such formative assessment gurus as Dylan Wiliam, a British “educationalist,” Margaret Heritage at UCLA, or Caroline Wylie at ETS. And the person who has probably given the most attention to the relationship between formative assessment and student motivation to learn is Rick Stiggins. His concept of assessment for learning places considerable emphasis on this relationship.
My focus here is on the evidence-gathering step of formative assessment and on grading practices. It is my contention that teachers for years have programmed students to be more attentive to grades (on tests or for courses) than to learning. At a conference thirteen years ago, I mentioned that most student work gathered during the learning relative to specific learning targets should not be graded. That drew a response reflecting a commonly held opinion: “If I don’t grade everything I ask my students to do, they won’t take it seriously.” Right…therein lies the problem. First, formative assessment evidence gathering occurs during the learning – likely before the students have reached the level of proficiency relative to learning targets they will reach by the end of an instructional unit. It’s simply unfair to count such work toward the students’ grades. That work should lead to descriptive feedback and possibly adjustments to instruction to address learning gaps before summative measures are used that do count toward course grades.
| Formative Assessment | Summative Assessment |
| --- | --- |
| A chef tasting soup and deciding it needs more salt | A restaurant guest tasting the soup and saying, “Wow, this soup is good!” | 
| A student driver being quizzed by a friend on the contents of the driver’s manual and a driving instructor observing the student’s performance behind the wheel and providing feedback | A driver’s license candidate taking the written and performance components of the state’s driver’s test to demonstrate the knowledge and skills needed to earn a license |
My wife and I met with my daughter’s math teacher once and asked what she seemed to be having trouble with. The teacher then presented us with a printout listing every score she obtained on all kinds of things during the previous week. The list filled a full page. The teacher pointed out one of several multiple-choice quiz scores exemplifying her performance. It showed that she had gotten only 4 answers right while the class average was 7. I really couldn’t pin the problem down any further and wondered whether the teacher and my daughter could both have benefited from his reviewing actual samples of her work – rich evidence that could lead to effective feedback and corrective action. But I behaved and kept my mouth shut. By the way, as a former elementary and high school math teacher, I tried to help my daughter, but she didn’t want to listen to her father. (Sound familiar?) She didn’t want me to explain anything to help her understand the material; she just wanted to know how to get the answers. I failed as a tutor of my own child.
I have no doubts that pre-college grading practices have a lot to do with the college readiness of our high schoolers. My third son struggled badly in his freshman year of college. (This is where the comedian would ask his son if his problem was ignorance or apathy, and the son would respond, “I don’t know and I don’t care.”) I simply asked my son what the problem was and he responded, “It was those tests.” I offered, “You mean your whole grade in a course was based on two mid-term exams and a final or on a mid-term, a paper, and a final?” He said that was it exactly. He recognized that his pre-college experience was that everything he did was graded, and he could always counter a bad test score by getting better scores on other less demanding work or by getting extra credit, sometimes for non-academic activity. His teachers graded everything. He didn’t care what it was he didn’t understand on the test he blew. Of course, in college, there was no re-taking of a flunked exam or extra credit opportunity.
These stories are absolutely true. (Hey, if Piaget could base much of his theory on observations of his own children, an n of 3, then I don’t feel too bad about my n of 4.) In my daughter’s case, I didn’t see a teacher gathering good evidence or providing good feedback or adjusted instruction. My youngest son’s story was that he wasn’t prepared to take responsibility for his own learning, as is required in college, where he had to learn the material before taking an exam, with no hand-holding and no second chances. Students becoming responsible for their own learning is an important aspect of formative assessment. That takes reprogramming. The first time a teacher tells his or her students a test or work product will not be graded, the students may well “kiss it off.” But once they see the same concepts and skills on the subsequent test that does count, they might decide to better prepare themselves the next time. Or perhaps the teacher can be more up front in the reprogramming and let the students know that the material covered in the ungraded activity will be what they will see later on a graded test. The point is there should be no surprises, no pop quizzes that count, and no average-killing zeros for incomplete work. These are unfair and demoralizing. Students can be expected to take responsibility for their learning and feel better about themselves as a result. By the way, I’ve just introduced four practices that inhibit student motivation and learning: grading evidence of student learning gathered for formative purposes, giving surprise quizzes, assigning zeros for incomplete or missing work, and providing means of getting acceptable grades without achieving learning targets.
Another concern often raised regarding formative assessment – the process, not frequent testing – pertains to the teacher’s time. There’s a lot more to full implementation of true formative assessment than I’ve discussed here. Let me just say that today’s instructional reform in many cases involves changes in the ways students and teachers spend their time. There are a lot of approaches to accomplishing this – e.g., small group work, flipped classrooms, online individualized learning systems. Small group work is a way of activating students’ peers as resources for learning – this is another formative assessment strategy. Students can help each other learn, and while small group work is going on, teachers may have some time to work with a few students who need some extra adult help.
So to make a long story short, my solution for getting students to focus more on learning than on test scores is full implementation of the proven, research-based practices associated with the instructional process of formative assessment. I might also add that occasional projects can involve small group work and be a means of enhancing student engagement and motivation to learn, while at the same time including both formative and summative measures of learning. I’ll leave project-based learning and curriculum-embedded performance assessment for another time.
Teachers and Administrators
I’ve already described the teacher’s role in enhancing student motivation to learn through the practice of formative assessment. School administrators, as instructional leaders, have an important role, too. They can facilitate and promote effective formative assessment practices in two ways. First, grading practices are often not a matter of individual teacher choice. School policy sometimes dictates them. Thus, it is likely up to the principal to implement improved grading practices that do not inhibit student learning.
Second, as I’ve said on many occasions over the years, the key to promoting formative assessment is not found in test publishers’ catalogues, but rather through professional development. And effective professional development is ongoing, on-the-job, and collaborative. Teacher teams meeting for a couple of hours each week to share ideas about lessons and student activities and to evaluate and discuss student work obtained during them would be a worthy goal. Initially, it may be helpful to have instructional coaches lead teacher teams in these collaborative efforts. I’ve often heard how difficult it is to find time for such collaboration in the school schedule, yet I’ve also seen instances when it was accomplished through creative approaches to scheduling. Again, it’s school administrators who can spearhead creative scheduling, teacher professional development, and the engagement of instructional coaches. Incidentally, Michigan has an interesting program called Formative Assessment for Michigan Educators (FAME). It is a professional learning initiative sponsored by the Michigan Department of Education that promotes teacher collaboration and planning for effective formative assessment practice. A cadre of Michigan educators serves as coaches for site-based learning teams of teachers and administrators in Michigan schools.
Now what about the teachers’ and administrators’ own focus on student learning versus test scores? Under the Race to the Top program’s requirements, student achievement had to “weigh heavily” in teacher evaluations. On the surface, this seemed reasonable – a teacher should be able to show that his/her students have learned. However, “weigh heavily” translated to a specific weight assigned to student achievement growth in the computation of a teacher evaluation score – e.g., 50 percent. This meant the other factors contributing to the score had to be quantified as well. The quality of some of these measures was questionable. Furthermore, there are problems with the use of the student growth or “value-added” scores themselves. How much of the academic growth of a group of students is due to a particular teacher? Does a social studies course involve students in reading and writing, for example, and wouldn’t that mean student growth in social studies is, in part, related to the effectiveness of the language arts teacher? Also, there is the problem of the differing populations of students served by schools across a state. Value-added models applied statewide can statistically control for some of the effects of varying student populations, but they cannot take into account all the unique contexts and supports pertaining to a particular teacher and his or her assignment.
I do believe that student achievement, as an ultimate goal of teaching, should be considered in the evaluation of teaching effectiveness. However, I also believe teacher evaluations should be human judgments by immediate supervisors who consider student achievement data along with information on a whole host of other factors. It’s interesting that we have been so concerned about our students’ higher-order thinking skills, yet we tried to remove human judgment from the teacher evaluation process. The Race to the Top requirements no longer apply, meaning the added pressure on teachers tied to student test results has been lifted, provided state and district leadership have adopted more appropriate and defensible approaches to evaluating teacher effectiveness.
Parents
It’s understandable that parents would be concerned about the grades earned by their children on pieces of work and in their courses. Hopefully, they view these as indicators of learning providing reasons for praise and/or improvement efforts. Like teachers, however, parents have a role in enhancing students’ motivation to learn. I believe how they play that role is influenced in part by the teachers. Back to my personal case studies: as a parent of four, I recall attending quite a few teacher presentations to parents at the start of each school year. Out of a large number of these, I remember only one at which the teacher shared something about his particular approach to teaching. He was a history teacher, and what he described was actually the Socratic method, intended to encourage cooperative argumentative dialogue among students and stimulate critical thinking. As I followed my kids through school, I found that this teacher was true to his word.
Unfortunately, almost all the rest of the teachers’ presentations were dominated by descriptions of grading practices – specifically how much each of a variety of measures counted toward a student’s course grade – for example: homework 10 percent, quizzes 25 percent, tests 35 percent, paper/project 20 percent, and perhaps 10 percent for class attendance, participation, or behavior. Of course, it was the teachers’ hope that parents would make sure their kids met all the requirements by turning in all homework assignments and attending all classes so that no quizzes or tests were missed and no other assigned tasks were left undone. It was frustrating to go from one classroom to the next on parent night, following the students’ schedules, only to hear that same pitch about grading practices and little else. I often wondered if these teachers’ presentations to parents indirectly led to students’ even greater obsession with test scores and grades.
This is not to say that the teachers didn’t provide some information on the course content… they did. But it was extremely rare for parents to hear something like what I heard from that history teacher. Given the de-emphasis of grading with respect to student work for formative purposes as I described earlier, wouldn’t it have been nice to hear about that and other ways the teachers would address students’ motivation to learn and students taking responsibility for their own learning? Helping their children understand this instructional process, which may be very different from what the parents experienced, would be a wonderful way for parents to support the teachers.
So there you have it … my explanation of how to get folks more focused on student learning and less on test scores and grades – good, research-based instructional practice through the multi-step process of formative assessment; effective practice relative to teacher professional development and evaluation; and effective communication to parents. What are your thoughts?