Like British sportspeople and the overuse of acronyms, evidence-based policy and systemic change are in fashion. For the last decade these trends have increasingly been the flavours of the month, but no one seems to have realised that, when put together, they taste a bit odd. There is, then, a cognitive dissonance within donor agencies: they have become a beast with two heads!

The need for development programmes to consider not just a problem but the causes of that problem, and the causes of those causes, is increasingly recognised by development agencies and NGOs. Call it systemic change, M4P, complexity theory or whatever en vogue term emerges next week: understanding poverty as the result of a number of interlinked constraints within interlinked systems or markets is increasingly seen as the most likely way to make the changes brought about by development programmes stick. In other words, to achieve sustainability and scale in poverty alleviation, you have to address the constraints that cause poverty within the broader system that produced it. This must be done in a way that results in a permanent change in the adaptive capacity of the system, not a one-off pot of cash for businesses, governments, associations or other local partners to address their own immediate needs.

In parallel, there is growing recognition of the value of measuring results. While there is dispute over how this should be achieved, development is no longer a sector where good money is thrown after bad because all charity is good charity. There is a demand to know what works in reducing poverty. However, the tightest measurement requires laboratory conditions, and the world is far from a sterile and controlled environment.

So if a development programme is to effect sustainable poverty alleviation for large numbers of people by embracing the complexity of markets and systems, and by implementing multiple complementary activities to do so, we need new ways to monitor and measure the success of the approach. Programmes need to be accountable to their donors, and an approach needs to demonstrate its value. However, the fact that measuring results is more difficult in such programmes should not be a reason to abandon the approach altogether. Don't throw the baby out with the bathwater: if current measurement techniques are inappropriate, let's pull the plug and get some new ones!

You can find out more about the issues raised in this blog post in a paper available here.
