Most survey work I do produces estimates that inform policy discussions. The data matters, but it matters indirectly. A cropping area estimate feeds into a planning document that eventually shapes a budget allocation. Verification design for results-based financing is different. The numbers directly determine whether money changes hands. That changes everything about how you approach the statistics.

I've been working with NIRAS and Nefco on the Modern Cooking Facility for Africa (MCFA), a programme that provides results-based payments to service providers distributing clean cooking solutions across multiple African countries. There are 26 service providers operating in diverse markets, each claiming results that trigger disbursements. My job is to implement the sampling and analysis framework that determines whether those claimed results are real. It's a form of statistical auditing, and the stakes are concrete. If the verification methodology is too lenient, you're disbursing funds for results that didn't happen. If it's too stringent or poorly designed, you're penalising providers who are actually delivering, and you undermine the entire incentive structure the programme depends on.

What makes this different from standard survey design is the adversarial dimension. In a typical agricultural survey, there's no financial incentive for a farmer to misreport their rice yield. In results-based financing, service providers have a direct financial motivation to overstate their numbers. The sampling design has to account for that. You need sufficient sample sizes to detect meaningful discrepancies within each provider, not just across the programme as a whole. You need to think about how providers might strategically present their strongest cases. And you need to build in procedures for handling contested results, because when payments depend on verification outcomes, every methodological choice will be scrutinised.
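To make the "sufficient sample sizes within each provider" point concrete, here is a minimal sketch of one standard piece of audit-sampling arithmetic. This is an illustration, not the MCFA methodology: the function name and parameters are hypothetical, and it uses the simplest possible criterion (observing at least one invalid claim under simple random sampling, with an infinite-population approximation).

```python
import math

def sample_size_to_detect(p_error, power=0.9):
    """Smallest simple random sample n such that, if the true share of
    invalid claims in a provider's reported results is p_error, the sample
    contains at least one invalid claim with probability >= power.

    Solves (1 - p_error)^n <= 1 - power for the smallest integer n.
    Hypothetical illustration; assumes sampling with replacement
    (a reasonable approximation when n is small relative to the claim list).
    """
    return math.ceil(math.log(1 - power) / math.log(1 - p_error))

# To catch a 5% overstatement rate with 90% probability within one provider:
n = sample_size_to_detect(0.05, power=0.9)  # → 45
```

The practical consequence is the one the paragraph describes: because detection has to work provider by provider, each of the 26 providers needs a sample of this order on its own, which is far more verification effort than a single programme-wide sample of similar precision would require.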

This work has reinforced something I keep coming back to: statistics is most useful when it has real consequences. In this case, the sampling frame isn't an academic exercise. It's the mechanism that keeps a multi-million-dollar programme honest.
