Code Red: The Danger of Data-Driven Instruction

Susan B. Neuman

Our most vulnerable students are measured, examined, rubricked, labeled—and denied the meaningful instruction they need.

Start with a big helping of data-driven instruction, toss in a dash of close reading, and this is what you get: a reading curriculum that’s failing our most vulnerable children and sucking the life out of meaningful, content-rich education for young learners.

Don’t take my word for it. Just enter a 4th grade classroom in a high-poverty district in a large city. The first thing you’ll notice is an interactive white board prominently displaying a color-coded spreadsheet of the data collected throughout the year, with columns too numerous to count. Each row represents a student’s scores on benchmark tests, running records, state language arts tests, and an amalgam of other assessments. Trevor’s scores, like those of several other students in the classroom, show “1” out of a possible “4” in every column. The scores are coded in red. Needless to say, this does not mean “go.”

At Northern Heights Elementary School,1 a typical school in a highly diverse urban neighborhood, scores have been hovering around the 13th percentile in English language arts for years. The new principal has moved aggressively to data-driven instruction and has created spreadsheet after spreadsheet, tracking “just about everything.” She has encouraged the school’s teachers to share data with their students on a regular basis. “I think that looking at data, when they can compare themselves to the whole group, really motivates them,” this leader says. “They can see that some of them are at 30 percent, some of them aren’t even at 10 percent, and now they know where to go next.”

Somehow this logic escapes me. It’s even harder to parse when Trevor is asked by his teacher to explain to me why he’s always in the red. “I have a hard time reading,” he tells me, with a quiet voice of despair.

For two years, our research team at New York University2 has studied how data-driven instruction is enacted by 4th grade and 7th grade literacy teachers in nine New York City schools serving low-income students. Using case study analyses, we’ve interviewed principals, coaches, consultants, and teachers and observed classrooms and grade-level meetings in these schools. We’ve concluded that data-driven instruction can distort the way reading is taught, harming the students who need high-quality instruction the most.

Data-Driven Instruction in Theory

Data-driven instruction is based on a theory of action that goes like this: Data collection can lead to more deliberate and systematic analysis of student work, which in turn can lead to more differentiated approaches to instruction that highlight individual students’ strengths while working on their weaknesses, which can lead to greater student learning. This process is intended to create a carefully calibrated road map for instructional moves that will promote higher achievement.

But what instructional moves does data-driven instruction encourage? As any teacher will tell you, time is the scarcest resource in schools. Students who live in low-income neighborhoods may need more instructional time to acquire the background knowledge essential to comprehending complex materials—but that doesn’t mean more time spent doing mindless worksheets focused on basic skills. In fact, research suggests that this kind of instruction might widen achievement gaps by limiting students’ opportunity to learn (Schmidt & McKnight, 2012).

Instead of worksheets, these students need content-rich instruction that provides skill-building opportunities (Neuman, Pinkham, & Kaefer, 2016). Constant interruptions, over-reliance on seatwork, and uninteresting instructional practices can claim the time that would be better spent on content-rich learning. To illustrate, consider the following snapshots of the data-driven classrooms we observed.

Data-Driven Classrooms in Reality

Ms. Robb’s lesson today is on sequencing. The topic was recommended by the school’s data coach, who has analyzed every item on the state English language arts test. The lesson begins with the whole 4th grade class gathered in front of the white board, which shows a portion of text. “We’re going to work on the words finally, first, then, and last on pages 59–60. I’ll read the text aloud, and if you see one of the sequence words, underline it on your worksheet.”

After about 15 minutes, the students are sent back to their desks with a couple more worksheets. They are instructed to repeat the exercise and discuss their answers with the group at their table. As they work, Ms. Robb writes down notes on a clipboard, such as “used vocabulary” and “underlined sequence words.” She stops to ask, “What is a hypothesis?” (apparently from a previous lesson). One student answers, “An educated guess.”

“Very nice,” says Ms. Robb. “What steps did Sam follow to prove his hypothesis?” A student answers by putting his head on his desk. Ms. Robb writes something down on her clipboard.

Throughout the hour-long lesson, there are more worksheets, with a brief respite for a guided reading exercise in which students in small groups read aloud, or attempt to, with little conversation about the book or its content. Before students line up to go to lunch, they must turn in an exit slip answering the question, “How does analyzing a sequence of events help our understanding of the text?”

Every six weeks, there are more tests and more item analyses. The row of “reds” for Trevor and several of his peers grows longer. Yet Ms. Robb, her data coach, and her principal aren’t discouraged. When I ask why, Ms. Robb answers, “While a student who scores in the red remains at the lowest level in the class, this doesn’t mean that the student is not growing. He is learning, but his status does not change. He’s still in the lowest quartile throughout the entire year.”

If you think Ms. Robb is an extreme case, you’re wrong. Although some of the activities might differ, there are striking similarities in Ms. Franklin’s 4th grade classroom at Downtown Elementary. Here too, there is an alphabet soup of measures that include running records and degrees-of-reading assessments along with an array of math tests. But unlike Ms. Robb, Ms. Franklin has become an aficionado of rubrics, creating a set of criteria for virtually every assignment in addition to the spreadsheet of assessments. In all, she has created more than 10 data sets. She shows us with pride how she has learned to use Excel to carefully weight various assignments and enter them into her gradebook.
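The article doesn’t detail Ms. Franklin’s spreadsheet, but the weighted-grade arithmetic she has set up in Excel amounts to averaging scores within each assessment category and combining the averages by fractional weights. A minimal sketch of that calculation (the category names, weights, and scores here are all hypothetical illustrations, not her actual data):

```python
# A minimal sketch of a weighted gradebook like Ms. Franklin's Excel setup.
# The categories, weights, and scores below are invented for illustration.

def weighted_grade(scores, weights):
    """Combine per-category averages into a single grade using fractional weights."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    total = 0.0
    for category, weight in weights.items():
        category_scores = scores[category]
        # Average the scores within the category, then apply its weight.
        total += weight * (sum(category_scores) / len(category_scores))
    return total

weights = {"running_records": 0.3, "benchmark_tests": 0.5, "rubrics": 0.2}
scores = {
    "running_records": [72, 78, 75],
    "benchmark_tests": [60, 65],
    "rubrics": [80, 85, 90],
}
print(round(weighted_grade(scores, weights), 2))  # one composite grade per student
```

The point of the sketch is how little the composite number says on its own: a single weighted figure compresses away which category pulled the grade down, which is exactly the information a teacher would need to act on.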

During a day’s lesson on close reading, she will add to this corpus by writing sticky notes to individual children about skills they need to work on, also noting the skills on her clipboard. “You can give them a little something to think about at that moment,” she explains. “It’s kind of like a quick hit-and-run with the lesson.”

Many of the otherwise highly capable teachers and principals we interviewed questioned the emphasis on testing, but they felt driven by external pressures. As one principal noted, “It’s sad to say, but for the students to survive the ELA exam, we need to build stamina. You’re talking about three days of straight reading for long periods. Passages that they’re not interested in. What adult would be willing to do that?”

In Mr. Hanson’s 4th grade classroom at East Side Elementary, students spend more than 45 minutes a day on independent reading to develop the stamina for the week-long set of tests. As I observe the class, one student sits quietly at his desk holding a book up, apparently not reading; the pages are never turned. A stack of other books sits next to him, equally untouched. The book he’s holding is filled with cartoons. For the entire 45 minutes of independent reading, he just stares into space.

Later in the day, I e-mail the teacher to ask about this student, hoping to learn how the data were being used to support his learning. The response from Mr. Hanson goes like this:

Traymar is the boy you were asking about. He is not on grade level, but he is generally well-behaved. He goes to special education classes. When he’s in my room, he is responsible for the same work that the rest of the class is. Sometimes the independent reading is a challenge for Traymar because his stamina doesn’t last long. We need to work on stamina. He’s a good boy who needs many reminders, but listens and tries.

As an afterword, he writes, “I hope this was clear and helpful to you.”

Unfortunately, it was. What became all too clear is that struggling readers like Traymar and Trevor are not receiving the instruction they need to become successful learners. They will be measured, examined, assessed, rubricked, and labeled. And through it all, they will remain in the red, consigned to an instructional regime that’s bereft of content and meaningful instruction.

Perhaps the saddest and most telling evidence is on the bulletin boards you see when you first enter the doors of Northern Heights, displaying this year’s English language arts scores. As the principal tells it, “When I saw the scores at the 13th percentile last year, before I took over, I thought we could only go up. Looks like I was wrong.” The scores are now at the 8th percentile.

Changing Course: Data-Driven Instruction 2.0

Many would agree that data-driven instruction has gone awry. Even our political leaders have begun to recognize its flaws (Emma, 2015). Those flaws lie not only in the extraordinary amount of time taken away from meaningful instruction, but also in what the data-driven regime conveys to students like Trevor and Traymar. It stamps them as failures, stuck in the red.

What is the solution? Arguably, the theory underlying data-driven instruction makes sense. Providing a road map through assessment should help teachers plan instruction to meet students’ needs, leading to better achievement. From my perspective, it would be wrong to simply abandon the policy. Instead, we should ask how we can make data-driven instruction work in a way that places students’ interests, goals, and achievements at center stage. Here are some recommendations.

Don’t Try to “Motivate” Students with Data

Standardized assessment data can be useful to teachers at the beginning of the school year, helping them establish an overview of students’ reading performance and plan instructional strategies. Test score data can provide a basic road map for determining areas that are relatively strong and others that need strengthening.

These data, however, are not particularly helpful to the students—nor are they effective motivators. Struggling readers know they’re struggling readers. They do not need to see this confirmed every day.

Don’t Teach to Test Items

Standardized assessment creators develop item pools to represent skills to be tested. For example, in a vocabulary assessment, we might find the following item: “V-1 contains the phrase a big garage. Circle the item that means the same, or nearly the same, as garage.” In this test item, the vocabulary word garage is only an indicator of the student’s vocabulary knowledge. The word has been selected to differentiate student responses on the assessment to establish norms on a distributional curve.

Nevertheless, many schools conduct exhaustive item analyses of standardized tests, examining students’ responses to individual test items so that this specific content can be taught directly. All of a sudden, we find garage on the weekly vocabulary list—not because it’s tied to any meaningful instruction, but because it’s on the test. The irony is, of course, that when students take an equivalent form of the test at the end of the year, garage might be replaced with the word bountiful.

Item analyses are useful to test designers for purposes like examining the quality of the items, eliminating ambiguous or misleading items, or identifying specific areas of course content that need greater emphasis. But to teach individual words or skills well, teachers need to focus on a much more comprehensive set of understandings, including developing background knowledge, applying it to text, and predicting what might come next. Students don’t develop deep comprehension skills through quick hit-and-runs. They learn these skills through carefully crafted, systematic instruction.
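For readers unfamiliar with what “examining the quality of the items” involves, two standard psychometric quantities are item difficulty (the proportion of students answering correctly) and a discrimination index (how much better high-scoring students do on an item than low-scoring students; a low or negative value flags an ambiguous or misleading item). A sketch with invented 0/1 response data, purely to illustrate the designer’s-eye view:

```python
# A minimal sketch of classical item analysis: item difficulty (p-value)
# and a simple high-group-minus-low-group discrimination index.
# The response matrix below is invented for illustration.

def item_analysis(responses):
    """responses: list of per-student lists of 0/1 item scores.
    Returns (difficulty, discrimination) per item."""
    n_items = len(responses[0])
    totals = [sum(r) for r in responses]
    # Rank students by total score; compare the top and bottom halves.
    ranked = [r for _, r in sorted(zip(totals, responses), key=lambda p: p[0])]
    half = len(responses) // 2
    low, high = ranked[:half], ranked[-half:]
    stats = []
    for i in range(n_items):
        difficulty = sum(r[i] for r in responses) / len(responses)
        discrimination = (sum(r[i] for r in high) - sum(r[i] for r in low)) / half
        stats.append((difficulty, discrimination))
    return stats

responses = [
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 0, 1],
]
for i, (p, d) in enumerate(item_analysis(responses), start=1):
    print(f"item {i}: difficulty={p:.2f}, discrimination={d:+.2f}")
```

These statistics serve the test designer’s goal of spreading students across a distribution; nothing in them tells a teacher how to build the background knowledge an item presumes, which is the article’s point.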

Be “Data-Informed,” Not “Data-Driven”

Schools should use student work to inform instruction, monitoring progress while maintaining a focus on teaching. In one of the schools we observed, for example, weekly grade-level meetings focus on examining students’ work. Gone is the examination of vast amounts of test score data. Rather, teacher teams use student work as data in action, asking themselves, “What are our key teaching points for the next week?” and then coming back the following week and asking, “Were we successful?” and if so, “How do we build on students’ learning?” Together, they talk about ways to support students who are struggling. These teachers describe what they do as data-informed teaching, recognizing that the purpose of monitoring student progress is to fine-tune instructional moves to enable all students to be successful.

Broaden the Definition of Data

Our research project began with a working definition of data as “recorded information on student learning,” with a focus on what could be written down or systematically collected to inform instruction. As we worked in schools, we found that this definition was consistent with schools’ conception of data.

But like others, we were too narrow in our definition. This working definition excludes the softer side of data—the looks on students’ faces, the tenor of a rich discussion, or the smiles and signs of joy when students are learning something new. For the highly capable teacher, these observations are data.

In fact, these observations may be the most valuable data for helping us understand what students—especially struggling readers—are telling us. Their slouching, staring into space, and sleeping are indicators of disengagement and discouragement. With our relentless testing and teaching to the test, we have ignored the data that lie right before us. Although they’re only in 4th grade, some of these children have already given up.

It’s Not Too Late

Despite more than a decade of data-driven instruction, scores in reading achievement for U.S. students have remained flat, and for struggling readers, have actually declined (National Assessment of Educational Progress, 2015). This paradox has become increasingly apparent to policymakers and district leaders, who are now calling for adjustments to reduce the burden of testing.

But when you observe students who are struggling to read, you realize that these adjustments cannot come soon enough. These students are being relentlessly measured and consigned to learning the lowest levels of decontextualized skills. They are receiving the unintended message that reading has no real meaning, no delight, and no purpose other than answering one or two questions that are duly recorded on a clipboard.

Students are also learning to self-identify as “winners” or “losers.” The winners may receive a curriculum rich in content with opportunities for critical reflection and analysis. The losers, in contrast, are likely to be at a double disadvantage, receiving neither a content-rich curriculum nor the appropriate skills to comprehend complex text. Missing out on meaningful content instruction will have major consequences for these students as they continue their schooling (Schmidt, Burroughs, Zoido, & Houang, 2015). In our efforts to ameliorate inequity, we may be contributing to a larger divide that will be more difficult to cross in the future.

It’s not too late to change course. As one principal noted, “Data, data, data, you can’t get around it. But I try as a leader to encourage the faculty to be mindful of themselves as teachers. Mindful of their kids, not just as learners, but as people. Give them a good moment. Give them a good day.”

Sounds like good advice to us all.

References

Emma, C. (2015, October 24). Education Department: Too much testing, partly our fault. Politico. Retrieved from www.politico.com/story/2015/10/education-department-too-much-testing-215131

National Assessment of Educational Progress. (2015). Mathematics and reading assessments: National overview. Retrieved from www.nationsreportcard.gov/reading_math_2015/#reading?grade=4

Neuman, S. B., Pinkham, A., & Kaefer, T. (2016). Improving low-income preschoolers’ word and world knowledge: The effects of content-rich instruction. Elementary School Journal, 116(4), 652–674.

Schmidt, W., Burroughs, N., Zoido, P., & Houang, R. (2015). The role of schooling in perpetuating educational inequality: An international perspective. Educational Researcher, 44(7), 371–386.

Schmidt, W., & McKnight, C. (2012). Inequality for all: The challenge of unequal opportunity in American schools. New York: Teachers College Press.

Endnotes

1. Names of schools and individuals are pseudonyms.

2. The Data in Use research project, funded by the Spencer Foundation, is conducted by Joseph McDonald (principal investigator), James Kemple, and Susan B. Neuman at the Steinhardt School of Culture, Education, and Human Development, New York University. More information is available at http://steinhardt.nyu.edu/research_alliance/research/projects/data_in_use

Susan B. Neuman is professor of childhood education and literacy development at New York University in New York City.