In 2006, Paul Ferraro and Subhrendu Pattanayak issued an urgent warning: conservation lacked the causal evidence needed to know what actually works. This mattered because decades of conservation efforts were failing to stall the decline in biodiversity around the world, suggesting that scarce funding was being diverted to well-intentioned but ineffective efforts rather than toward approaches with demonstrable impact; hence the title of their paper, “Money for nothing?” The message was clear: conservationists needed to start examining whether their actions were actually causing the desired effects.

A classic study published two years later showed why this mattered. In 2008, Kwaw Andam and colleagues, including Ferraro, found that protected areas were less effective at reducing deforestation than earlier research had claimed. The problem was that the earlier studies hadn’t accounted for the fact that protected areas are often created far from roads and towns, places where deforestation is already less likely. Because those studies failed to account for this location bias, protected areas appeared more effective simply because they sat in places that were less likely to be deforested in the first place.

The protected area example illustrates the pitfalls of relying on correlation to infer impact. Most of us are familiar with the refrain that “correlation is not causation,” yet correlation remains seductive simply because it’s easier to observe that two things happen together than to prove that one caused the other. We may observe that forests inside protected boundaries remain standing while surrounding forests disappear. But without ruling…

This article was originally published on Mongabay


