Three Words We Keep Mixing Up
Influential. Popular. Impactful.
More and more, these three words are used interchangeably, as if they're just three different ways of saying "good." In reality, each describes a distinct relationship between a person, an idea, and an audience. Conflating them has real consequences for how we evaluate what's working in education and cultural institutions.
There's a classic move on high school student council campaign posters: “Free Beer! Now that I have your attention... Vote for Bill!”
The punchline is that the hook and the substance have nothing to do with each other; the hook is simply a grab for attention. A version of this is playing out right now in education and cultural institutions. We have confused three things that are genuinely, consequentially different: being popular, being influential, and being impactful.
Popular means people want it. Attendance is high, the waitlist is long, the post had a lot of impressions. Popularity is a measure of demand, and demand is shaped by many things, including novelty, comfort, timing, social proof, and (sometimes) quality. On its own, popularity tells you nothing about whether anything meaningful happened as a result.
Influential means it changed how people think. An influential idea shifts a mental model, reframes a problem, or introduces a concept that someone carries forward. Influence operates at the level of cognition and can happen quietly. Influence can take years to trace. It does not require a crowd.
Impactful means it changed what people do. By extension, it also changes what others experience. Impact is behavioral and exists downstream. It shows up in classrooms, gallery interactions, and organizational decisions. It is the hardest one to measure and the easiest one to fake.
The three can absolutely overlap. The best professional development, in fact, is all three: people want to be there, it shifts their thinking, and it changes their practice. That combination is much rarer than we pretend, though, and our failure to distinguish between them is not innocent.
The Edu-Fluencer Problem
There is a cottage industry built on popularity masquerading as influence and impact. You’ve seen it before: short-form content, high production value, zero citations, and lots of “common sense” opinions. This includes posts that tell teachers they already know best, experts are out of touch, and the answer was inside them all along.
It’s important to recognize that this content didn't appear in a vacuum. Teachers have been demoralized for decades: underpaid, over-scrutinized, ignored, and blamed for failures that are well above their pay grade. Into that environment steps the edu-fluencer, offering something institutions largely stopped providing: the experience of being seen.
In the context of modern teaching, and the increasingly difficult situations we place teachers in, feeling seen is undeniably important. But content optimized for affirmation has a very low ceiling. It won’t push you past the point where growth gets uncomfortable. It stays warm and validating. It presents individualized instruction as a revelation when it's been a fixture of educational research for decades. It takes a nuanced debate about grading and flattens it into a rallying cry. It treats burnout as a point of view rather than a symptom.
Heavy on popularity. Little to no influence. Certainly no impact.
The Institution Problem
The temptation now is to point our fingers at social media, but before doing that, it's worth turning the lens inward. How does your institution measure the success of professional development? If the answer involves waitlists, post-session satisfaction scores, or the phrase "people really seemed engaged," then you are measuring popularity. You are ignoring whether anyone learned anything, changed their behavior, or delivered a different experience to students or visitors. If popularity is all you care about, you can simply offer donuts at your next session, and you’ll probably get more people in the door.
For cultural institutions, a more complicated question is how you count attendance. If a networking event held in proximity to an exhibition counts toward that exhibition's numbers, you are not measuring engagement with ideas. You are counting bodies in a building and choosing to pretend it’s impact. This is the equivalent of counting happy hour attendance as session attendance at a conference.
How does your annual report characterize school group visits? If it lists numbers of students without any indication of what those students experienced, learned, or took with them, you are telling a story about popularity, or the demand for your programming, and dressing it up as evidence of impact. There are loads of ways to get kids in the door, but not all of them are valuable.
None of this is accidental. Popularity is easy to count, whereas impact is hard to measure and even harder to act on when the findings are inconvenient. Over time, institutions drift toward metrics that make them look good and are easy to collect, rather than metrics that might tell them something true.
Why This Matters, and What It Would Look Like to Do This Honestly
When we mistake popularity for impact, we risk defunding the things that actually work. The PD that challenges practice gets cut because it scored lower on satisfaction surveys than the one that validated everyone's existing instincts. The exhibition that demands something of visitors draws smaller numbers than the one that asks nothing, and the numbers are what the board sees.
Popularity without impact is often a grief response to institutional neglect. Mistaking it for influence is how the neglect becomes chronic.
We can change our focus with a few steps. PD can be evaluated both by who wanted to attend AND by what changed in practice six weeks later. Attendance figures can be accompanied by learning outcome data, even when that data is humbling. Institutions can ask hard questions about their own programming rather than inflating the numbers by quietly folding in events that are primarily social.
With these changes, the focus shifts to the people we serve (teachers, students, visitors, families) rather than to evidence of our own relevance.
Popularity is comfortable. Impact is accountable. Pick one.