“We already knew that.”
I frequently hear from readers who complain that the findings in scientific papers are common sense or obvious. And yes, it’s true: science often confirms what we’ve long suspected or seen in practice. By its nature, science is slow and methodical. It doesn’t chase novelty for novelty’s sake. It seeks to verify, quantify, and understand patterns, often in complex, real-world systems where intuition can be misleading.
But that doesn’t mean the research isn’t worth doing.
In fact, the apparent obviousness of a result doesn’t make the evidence any less important. In conservation science, especially, where interventions often affect both ecosystems and human communities, assumptions can lead to ineffective—or even harmful—strategies. Systematic evidence helps replace well-meaning guesswork with informed action.
That’s what we set out to explore with Mongabay’s Conservation Effectiveness series several years ago. We wanted to know: what does the science actually say about what works in conservation? To find out, our team undertook a deep dive into six widely used strategies—forest certification, payments for ecosystem services, community-based forest management, terrestrial protected areas, marine protected areas, and environmental advocacy.
These approaches are common tools in the global conservation toolbox. They’re often portrayed as proven solutions. But our investigation revealed a different reality: for many of these strategies, the evidence base was surprisingly thin. A large share of the studies we reviewed lacked the rigor needed to establish causation, that is, to show that the strategy itself, rather than some other factor, produced the observed environmental or social outcome. Many studies could only offer correlations. Some strategies had scarcely been studied at all.
That doesn’t mean these tools don’t work. It just means we don’t always know for sure how well they work, under what conditions, or why. And in a field where resources are scarce and the stakes are high, that uncertainty matters.
Of course, conservation doesn’t happen in a lab. Practitioners often rely on local knowledge, trial and error, or whatever strategies attract funding or political support. And that, too, is part of the evidence landscape: observations, anecdotes, and lived experience all have value. But scientists have long warned against relying solely on intuition. Just as we expect medical treatments to be backed by research, so too should we expect conservation strategies to be informed by the best available science.
Since we published the series, the field has moved forward. Initiatives like Conservation Evidence, a project based at the University of Cambridge, are helping to build a stronger, more accessible evidence base. But researchers still point to a persistent gap between science and practice. Too many conservation decisions are made without consulting the research, or without monitoring outcomes at all.
And while success stories make for compelling headlines and glossy reports, failures are too often ignored. Yet learning what doesn’t work is just as essential to improving outcomes. Without that learning, we risk repeating the same mistakes—or mistaking “common sense” for effectiveness.
In conservation, the obvious still deserves to be tested. Because lives, livelihoods, and entire ecosystems depend on getting it right.