When Expertise Became the Problem

A few years ago, I had just finished presenting qualitative analysis from a recent project to a colleague, a smart, credentialed person who was nearly finished with her Ed.D. program and who told anyone who would listen that she absolutely loved qualitative research. When I was done, she had a question: “What is coding?” I explained my process: how I built an initial code list from an extensive review of the research literature, then revised it multiple times during data collection to reflect how our thinking shifted. But before I could go any further, she stopped me: “No, I mean, what does ‘coding’ mean?” Last I heard, she was a VP at an edtech firm.

I don't tell that story to embarrass her. I tell it because she is not an outlier; increasingly, she is the norm. And understanding how we got here matters more than most of us want to admit. I have been working in and around education for most of my adult life, as a teacher, program designer, evaluator, and consultant. The field I entered and the field I work in today share a vocabulary and a set of recurring problems, but in most other ways they are barely recognizable as the same place.

The shift I keep returning to is this: somewhere along the way, expertise stopped being the point. Passion became the credential and empathy became the qualification. Lived experience turned from a valuable input (which it unquestionably is) into an epistemological trump card that ended conversations rather than enriching them. And the people who had spent years developing deep knowledge of research, pedagogy, organizational change, and education policy found themselves repositioned as obstacles instead of resources.

This didn’t happen by accident. In fact, it reflects a pattern playing out well beyond education. Across medicine, journalism, public health, and government, trust in expertise has eroded steadily. Some of that erosion was earned; we have evidence of institutions failing people, research being manipulated, and credentialed voices serving interests other than the public good. Skepticism of expertise, in that context, is not irrational, but there is a meaningful difference between healthy skepticism and the wholesale rejection of the idea that knowing things deeply matters. Education, as it often does, absorbed this cultural shift and then institutionalized it.

How Expertise Gets Devalued

In education, the institutions that accelerated this shift had amazing marketing teams and a compelling pitch. Why spend years training teachers when you could take the brightest graduates from elite universities, run them through a summer program, and have them in classrooms by fall? The implicit argument (rarely stated openly) was that teaching didn't actually require specialized expertise, because intelligence and enthusiasm were sufficient substitutes for formation.

I call them the acronym groups: the organizations that “disrupted” educator preparation programs and the followers who saw a great market opportunity. They proliferated over the past three decades, and their influence spread well beyond teacher preparation. They produced a generation of education leaders who had been taught, by the very model that trained them, that deep expertise was optional and a few months of preparation was plenty. Holding a teaching credential suddenly mattered more than what the credential was supposed to represent (basic eligibility for entry into the field).

Their research, when they bothered to produce it, was telling. For years the standard defense was some version of, “Our candidates are no worse prepared than those from traditional programs.” Pause on that framing for a moment. It is not a claim about excellence, or even a claim about adequacy. It is a carefully worded argument for acceptable mediocrity, dressed up as evidence. Ironically, as the years passed, those same groups made two competing arguments: most testing required for entry to the teaching profession is pointless, and our candidates do no worse on those pointless measures than others. Once you start to pull at that thread, the whole philosophy behind the models starts to unravel.

States handed these programs millions of dollars, and they still do, because the headline writes itself. Persistent teacher shortages? Here is an organization that can put a warm body in a classroom in months. Nobody demanded outcome data, so nobody tracked what happened after the placement. Nobody asked whether the veteran teachers in those schools were spending half their energy absorbing underprepared colleagues instead of teaching students. The cost was real; it just didn't show up anywhere anyone was measuring. And the next year, when the shortages persist, rather than evaluating root causes we just hand more money to the acronym groups to keep filling seats.

What Gets Lost

The downstream effect on research literacy has been significant and is genuinely underappreciated. Education has a well-documented problem with research. Practitioners often distrust it or find it irrelevant, policymakers misuse it, and academics produce it in forms that serve tenure cases better than classrooms. None of that is new, but what is new is the specific shape of the illiteracy.

On the quantitative side, it has become almost socially acceptable for education professionals to announce that they hate statistics. Not complex multivariate modeling, but means, distributions, and the basic descriptive measures that any serious practitioner should be able to read and interpret. The announcement is treated as charming self-deprecation rather than a professional gap. It is said with a laugh and often a hint of pride. “I’m not associated with research, so I can speak to line workers” became the implied message. Research becomes demonized, and then rejected entirely.

The qualitative side is more insidious, because qualitative research is engaging and its conclusions often feel intuitive. People connect with stories and they trust findings that match their experience. So qualitative research gets embraced loudly and enthusiastically by people who have no idea what makes it rigorous (the “research” side of “qualitative research”). They do not know what coding is or why it is necessary. They could not tell you what data saturation means, why it matters, or what distinguishes a well-designed interview protocol from a conversation with a predetermined conclusion. But they love qualitative research the way some people claim to love jazz, as a vibe, not a discipline.

The person who taught my colleague that this level of ignorance was acceptable is almost certainly a product of the same shortcut pipeline. Credentialed and overly confident, but hollowed out at the center.

The Partnership Problem

I believe there is one incredibly well-intentioned assumption that allows all of this to continue: “Every little bit helps.” The problems in education are real and large: teacher shortages, achievement gaps, funding inequities. So the instinct is to welcome any partner willing to show up. More hands, bigger table, shared mission, right? Who has time to be selective when the needs are this urgent?

The problem with this mindset is that a bad intervention is not neutral. It consumes resources, displaces better alternatives, and produces no accountability because no one demanded any in the first place. And when it fails (quietly, without headlines), the interpretation is almost never that the intervention was wrong. Instead, we tend to conclude that the problem was harder than expected. So the program gets renewed and another check gets written.

Here is the hard truth: not all partners are equal. Some are actively doing harm, and the fact that their intentions may be good does not change that harm. The differentiator, in my observation, is not any single practice but an entire orientation toward the work. The organizations that genuinely improve schools enter with a clear theory of action, a specific, testable explanation for why their approach should produce the results they're claiming to pursue. They collect data, evaluate their own work honestly and regularly, care about implementation, and adjust when the evidence calls for it. They treat local voice as essential contextual input rather than either an obstacle to overcome or the only data point that matters. They build capacity inside the system, so that what they leave behind is not dependency on an outside partner but genuine local expertise that compounds over time. They also stay long enough to find out whether it worked.

What Different Looks Like

ASU's Next Education Workforce model has an explicit design principle: there is no one-size-fits-all approach, because each school's own context drives every decision. The team contracted an external evaluator and built a research agenda that tracks outcomes over time. Public Impact, a North Carolina-based organization, has spent two decades developing and refining a distributed leadership model that places excellent teachers in roles where they support their colleagues from within. The capacity doesn't leave when the consultants do; it stays, because it was never imported from outside to begin with.

These organizations are not perfect, because none are. But they share something meaningful with each other and with the best practitioners I have encountered across my career: they take root cause analysis seriously, they design solutions that are specific rather than universal, and they measure what happens both when they are there and after they leave.

I want to be honest about something. The argument I am making is not popular in all corners of this field, and I understand why. Arguing in favor of expertise can sound like arguing against community voice, against the knowledge that comes from lived experience, against the teachers and parents and students who know things that no research study will ever capture. But let’s not pretend this was a purely altruistic trend; it benefits specific groups, and unfortunately, students are rarely among them.

My argument is not for a return to irrelevant research that helps university tenure decisions more than school problems. Lived experience, local knowledge, and community voice all matter. The question is whether we have built a field that can hold all of those things alongside rigorous expertise or whether we have decided, quietly and without much public debate, that expertise is something we can afford to let go.

You know my answer: I think we let it go and are paying a price that we have not yet fully calculated. I think the organizations getting it right are the ones that never accepted that tradeoff in the first place. The problems in education are genuinely hard. They deserve partners who take them seriously enough to actually know things.
