When used thoughtfully, data can help us make informed decisions that support student success. But data can also be incredibly harmful when they reinforce untrue and negative stereotypes about our students.
For Dr. Ivory A. Toldson, the solution to this BS—“bad stats”—is rethinking how we approach data. A professor of counseling psychology at Howard University and editor-in-chief of The Journal of Negro Education, Dr. Toldson is especially concerned with how bad data negatively impact Black male students. In his 2019 book No BS (Bad Stats): Black People Need People Who Believe in Black People Enough Not to Believe Every Bad Thing They Hear about Black People, Dr. Toldson dispels several myths about Black male students by reframing the data that supposedly inform them. He also presents a framework that educators, administrators, researchers, and policymakers can use to rethink their own approach to using data for educational equity.
Earlier this month, Education Northwest’s chief executive officer, Patty Wood, sat down with Dr. Toldson over Zoom to discuss how to filter out “bad stats,” where to find good data, and ways to center our most valuable data sources: our students. The interview has been edited for clarity and length.
Patty Wood: We have access to more data now than ever before, and yet many of our educators and stakeholders still hold what you describe as “bad stats” about Black students and communities. What are “bad stats”?
Dr. Ivory Toldson: Bad stats are data points that are dumbed down to a quick sound bite. They're meant to provoke a certain reaction and make a point. Bad stats are often very misleading. They are given without context, or only with context created by the one who's sharing the statistic.
In my book, I talk about the bad stat that there are more Black men in prison than in college—which is not true. Advocates for criminal justice reform will give that stat within the context that we need to do something about mass incarceration, but someone else may use the same stat to argue that Black males are more prone to criminal behaviors. They use the exact same stat but package it the way they want, to sell the story that they want to sell. It ignores the full picture of all the young Black men who have found themselves unfairly in the criminal justice system, as well as those who are college bound. All of these important issues get lost in the noise of people throwing out these bad stats. It can become hyperpartisan.
There's a term I thought of right before this call—the “memeification” of statistics. We're in this culture now because of social media. We want things quickly, and we have a lot of people trying to sell an agenda, as opposed to coming up with meaningful solutions. They take one data point, put it in a meme, and circulate it. Then it just repeats itself.
PW: Your book presents a framework for using data to achieve educational equity for Black students. The framework is based on W.E.B. Du Bois’ philosophy of “using people to represent numbers, rather than using numbers to represent people.” What are the pitfalls of using numbers to represent Black students?
IT: Numbers become a proxy so you don't have to deal with the real thing. You can insult numbers. You can criticize numbers and not feel like you're doing anything wrong. But when you see stories—when you see people—it forces you to think about more than just a number.
If you see a high dropout rate, you can talk badly about that statistic and hurl moral invectives, insults, and slights at it. But when you're talking to a child who dropped out for whatever reason—an unintended pregnancy, something going on at home, a learning disorder that went undetected and untreated for most of their academic career, homelessness—that child is represented in the dropout rate.
High dropout rates at a particular school only tell us there’s something that we need to fix. They don’t tell us why the rates are so high or how to fix them. The only way to know is to go to the school and start talking to people. When you get their stories, you have a better understanding of the solutions.
PW: In your book, you note that data that should be predictors of academic success often become determinants instead. That really resonated with me as a policymaker. How do we deal with that problem of using data as a determinant rather than a predictor?
IT: We've come to a reasonable conclusion that certain indicators—achievement tests, the ACT, GPAs—predict something. Some of the ways the predictions have been calculated have scientific merit, so we feel a certain degree of confidence that if we take a student’s GPA and ACT score, we can predict how well they will do in college.
But at some point, we have to step back and scrutinize these measures. We don't have enough people asking questions about the validity of achievement tests. Many are constructed to generate a normal distribution: a mean at the top of the bell curve, some students doing very well, and some students at the opposite end. Schools start using a newly developed test to separate students into different categories—but the test hasn't been in use long enough for us to draw any reasonable conclusions about its validity. Then we start teaching to the test, and when all students start to do well, it messes up the bell curve. At that point, instead of saying, “Okay, great…the students are doing better,” we redo the test to create that bell curve again.
Now, there’s a bit of absurdity to that. We’re saying that helping students do better on the test is not really the goal. Instead, the goal is to create a stratification in the system so that we can pick out the high achievers, middle achievers, and low achievers—so we can reward the high achievers and punish the low achievers. That's the system as it's designed right now.
PW: How can people look beyond standardized tests—beyond the normal operating procedures of the educational system—to refined, valid data that do tell the stories of students’ lives? What indicators can identify students who need extra help without labeling them—and without using those indicators as determinants?
IT: Part of the problem is that we don't do a good job of interpreting the data that we already have. If we look at attendance data through a biased lens—thinking that poor kids are not going to school because they lack motivation or structure at home, but that wealthy kids miss class because they are enriching themselves outside of school—then we don't serve the students well.
There are also lots of other data points. The National Center for Education Statistics (NCES) is where we get the National Assessment of Educational Progress (NAEP). Every school district is sensitive to NAEP scores. But NCES collects mountains of data…the NAEP is only one piece. The High School Longitudinal Study asks students all types of things about their experiences in high school: their relationships with their teachers, the types of organizations they belong to, whether their schools have mental health services and drug counseling. Every few years, NCES also conducts a parent engagement survey of about 50,000 parents. You can use those data to get an indicator of the types of interactions parents have with schools and how that relates to their children’s performance.
These are just two examples. There's lots and lots of data. And so the question becomes, why aren't we using all this data we’re collecting? We know that we have the capacity to use it because we use achievement data. So why aren't we using the holistic indicators and data points the way that we should?
PW: What about the main source of the data itself? Going back to the beginning of this conversation about using people to understand data, how do we incorporate student voice?
IT: We can think of the current system as a pyramid: The top makes decisions that permeate down and ultimately reach the students, who are at the bottom. Another way we could structure the system is as a circle with the students right in the middle. As you spread out, you have families, and then teachers, and then the leadership all around. With students at the center, their stories go to the people who talk to them the most—in most cases, teachers. Those teachers send data to school leaders, who then send it out to district leaders. Instead of leaders being charismatic figures at the top, they are data managers on the perimeter, taking in all this data coming out from the center and feeding it back into the system to benefit everyone. I think that’s the way the system needs to be reworked to amplify student voice.
PW: How can educators and policymakers use qualitative data—bringing that human element back into the work—without over-relying on anecdotal data?
IT: The first thing we have to do is destigmatize anecdote. If you have the numbers, you need anecdotes to understand what those numbers represent. We can do a lot more if we acknowledge the incompleteness of the data and that the only way to complete the picture is to get the proper story.
There have always been different schools of thought about how to advance students, as people, towards their highest level of achievement. The cognitive and behavioral perspective believes in punitive discipline, that people who are doing wrong need course correction to think and act differently. Then there's the humanistic approach, which believes that people grow based on relationships. Humanists philosophically believe that students need a certain type of relationship with their teachers, with other classmates, with school leadership. These people have always been in schools, but their agenda has not risen to the level of policy. These people do good in their immediate sphere of influence, but they aren't in a position to change the system.
I think that, right now, a lot of the more humanistic people are closer to students—they're the activist, socially conscious teachers. Sometimes they are school leaders and can transform a school, but they operate in a bubble, so their school does well while the district maintains the status quo. That's why I believe in an inside-out approach, as opposed to top-down. With more teachers in leadership positions, there's a better use of anecdotes and stories across the system. Data points work in concert with the stories to create this big picture. Overall, there's a philosophy of humanism. We want to know the ‘why’ behind the data, and we think about the relationships that we're building as a strategy towards getting students to be more successful.
PW: Can you share any easy guidelines for identifying and filtering out bad data?
IT: When you get data, you should ask questions, not draw conclusions. The smaller the data point, the more questions you should ask. If you read an article with a lot of data, you'll come up with some conclusions, but you still want to ask questions. If you get a quick sound bite, you should have nothing but questions—no conclusions at all.
Think about how you receive data on your own child. Imagine that your child is in the fifth grade, but an assessment says they are functioning at a fourth-grade level. But you know your child, and you care about your child, and what the assessment told you is inconsistent with your experience with your child. You're going to ask a lot of questions: What's the assessment? Is it valid? How long have they been using this assessment? And then you'll talk to your child: What was your mood when you took this test? Did you blow it off, or did you understand what the test was about? And then you might ask their teachers if they noticed anything that gave them pause. You're trying to pick apart the whole thing.
If you get a data point that says the majority of Black children are reading behind grade level, you should ask the same questions. What was the test? Under what conditions was it given? Who administered it? Just like with your own child, after you dig in, you may find that there is something that needs to be corrected. But what you find based on those questions will put you in the best position to genuinely help.