The one that Australian aid forgot? Trying to put the R back in MERL

By Priya Chattier and Lavinia Tyrrel

We spent last week at the second of three workshops on how to better use research in aid and development[1]. Lots of good discussion, but it seemed we were all tip-toeing around the elephant in the room: does anyone else care about this (prioritising research and analysis in aid) other than us?

[Cartoon: no caption needed. Photo credit: Benita Epstein, here]

On one hand (and before the academics launch a call to arms), we know the answer is unequivocally ‘yes’. Many donors and governments put money and time into commissioning and communicating research and analysis – usually with the explicit aim of helping their aid achieve more impact. Think USAID’s Monitoring, Evaluation, Research and Learning Innovations (MERLIN) Program, DFAT’s Aid Evaluation Policy and Innovation Strategy, and DFID’s ‘Evidence into Action’ Team. There are also many individuals and institutes (see here and here) whose sole aim is to communicate research and positively influence Australian aid policy and practice.

But on the other hand, this rhetoric does not always carry through to practice. As Lisa Denney aptly summed up: the reality is, once project implementation begins and budgets are cut, the funding and time reserved for research and analysis are often the first to go[2]. This is especially so when donors, taxpayers and those higher up in the aid system require that whatever fledgling Monitoring, Evaluation, Research and Learning (MERL) budget remains be put towards generating unambiguous, clear, aggregable data about results and performance to meet accountability demands. [Lisa does a good job of outlining the reasons for this in her blog if you want to know more.]

Yet timely, relevant and practical evidence and research are core to making good decisions about where and why to spend aid funds, and to adapting as you go. And we are not the only ones thinking this. In their 2018 study, Research for Development Impact (RDI) (somewhat depressingly) found that most officials in Australian aid felt that support for research, and the degree to which it was being used, had declined over recent years – in part due to pressure for the aid program to be more reactive and responsive to whole-of-government priorities.

So how do we get the R back in MERL? The aid effectiveness argument is that research, defined as the production of new knowledge or the use of existing knowledge in a new way, improves how aid programs are designed, implemented and evaluated. The broad argument here is that research can help us understand complex problems, test different aid modalities or ‘solutions’, empower local views in the aid cycle, help projects adapt as they go based on what works, what doesn’t and why, and weigh up the costs and benefits of different investment choices.

But this argument might have become obsolete. Talking in terms of research and analysis for aid effectiveness (at least in Australian policy circles) is a bit like preaching to the converted. Those who want to debate what makes aid effective are generally not the ones needing convincing. The policy rhetoric has moved closer to foreign policy, national interest and whole-of-government imperatives. The argument for valuing research and analysis in Australian aid increasingly needs to be made on other grounds as well.

  • The case could be made on efficiency or economic grounds – valuing and using research to assess which modalities, financing mechanisms, sectors and so on achieve strong value for money and savings on taxpayer investment. While this is not a bad thing in and of itself, the risk is that the research agenda becomes peppered with cost-benefit and value-for-money analyses, with little room left for more qualitative methods which seek to understand political and social complexity.
  • Enabling the “greater responsiveness” rhetoric for Australian aid may be another angle: especially for research methods which allow a quick (weeks or months) turnaround – e.g. action research and applied research methods, rapid cycle evaluation, and case studies. The challenge here is ensuring that space and funding are still reserved for research which requires longer time frames for methodological design, data analysis and the contestation of results.
  • Research can also be a way of supporting local ownership of aid design and delivery (and an inclusion agenda in disguise) – for example, through co-design of research questions and problems, and the use of participatory data collection methods to work with local actors to identify ‘solutions’ (RDI’s latest paper has good practical ideas here). This approach may be especially relevant to programs (such as the forthcoming PNG Transition to Health (PATH) design) which want to see a transition of direct service delivery responsibilities from donors to the Government.
  • Lastly, there may be an argument in the new international development policy for research to help Australian aid understand and respond to long-term trends (not just the here and now). If Australia is serious about supporting the long-term drivers of “security, stability, prosperity and resilience in the region”, then country and regional investment choices and strategies need to be underpinned by sound analysis of which confluence of factors contributes to these goals at national and sub-national levels – as well as by research which seeks to understand new and emerging trends Australia and the region may not have foreseen.

If research continues to be put in the “nice to have” bucket, rather than being seen as core to aid and development decision making, the consequences will run far deeper than missing out on a few extra glossy publications here and there. Research is a prerequisite if aid and development programs want to tackle the most challenging development problems, adapt based on what’s working, what isn’t and why, and constantly search for new and more efficient ways to make use of public funds.

***

[1] This is all part of an action research project funded by the Research for Development Impact (RDI) Network and implemented by La Trobe University (LTU) and Praxis Consulting. In a nutshell, the project brings together 13 aid organisations (think DFAT through to Oxfam) to unpack what the barriers and opportunities are to using evidence and research in aid and development.

[2] This was the sentiment echoed by the majority of the over 80 program staff whom we surveyed as part of our contribution to the RDI Action Research project. Results and a full report will be released in February 2020.
