Beneficiary Feedback in Evaluation
Independent report
DFID – Department for International Development
15 May 2015 :: 62 pages
PDF, 1.08MB: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/427742/Beneficiary-Feedback-Feb15.pdf
The purpose of this paper is to analyse current practice of beneficiary feedback in evaluation and to stimulate further thinking and activity in this area. The paper builds on current UK commitments to increasing the voice and influence of beneficiaries in aid programmes. It has been commissioned by the Evaluation Department of the UK Department for International Development (DFID).
Evidence base
The paper builds on:
:: A review of over 130 documents (DFID and other development agencies), including policy and practice reports, evaluations and their Terms of Reference, web pages, blogs, journal articles and books;
:: Interviews with 36 key informants representing DFID, INGOs and evaluation consultants/consultancy firms, and a focus group with 13 members of the Beneficiary Feedback Learning Partnership;
:: Contributions from 33 practitioners via email and through a blog set up for the purpose of this research (https://beneficiaryfeedbackinevaluationandresearch.wordpress.com/); and
:: Analysis of 32 evaluations containing examples of different types of beneficiary feedback.
It is important to note that the research process revealed that the literature on beneficiary feedback in evaluation is scant. It also revealed, however, a strong appetite for developing a shared understanding and for building on existing, limited practice.
The report contains five key messages.
Key Message 1: Lack of definitional clarity has led to a situation where the term beneficiary feedback is subject to vastly differing interpretations and levels of ambition within evaluation.
It has been noted that there is a lack of uniform understanding as to the concept of beneficiary feedback within the international development sector generally (Jump 2013). This paper confirms that this is also true for evaluation specifically. While there is a growing interest in beneficiary feedback in programme implementation, no prior study of beneficiary feedback in evaluation was found.
Key Message 2: There is a shared, normative value that it is important to hear from those who are affected by an intervention about their experiences. In practice, however, this has been translated into the beneficiary as data provider, rather than as a participant with a role to play in design, data validation and analysis, and dissemination and communication.
This largely extractive process brings risks for rights-based working, learning, evaluation rigour and robustness, as well as for meeting the ethical standards that one might expect.
Key Message 3: It is possible to adopt a meaningful, appropriate and robust approach to beneficiary feedback at key stages of the evaluation process, if not in all of them.
The paper proposes a simple, practical framework for beneficiary feedback in evaluation that can be used to apply a structured and systematic approach cutting across all stages of evaluation, from design to dissemination. The framework takes the form of a matrix that evaluation commissioners and practitioners can use to map different types of beneficiary feedback onto each of the different stages of evaluation. This will support them in making choices as to which type of beneficiary feedback is most appropriate in the given evaluation context.
Key Message 4: It is recommended that a minimum standard be put in place. This minimum standard would require that evaluation commissioners and evaluators give due consideration to applying a beneficiary feedback approach at each of the four key stages of the evaluation process.
Where decisions are taken not to solicit beneficiary feedback at one or more stages, the evaluation design should justify this, making clear that the exclusion of beneficiaries from the evaluation process is a matter of design rather than of omission. Quality assurance processes should integrate this standard, and methodology papers should explain the rationale.
The framework fits in with existing evaluation principles, as well as within DFID’s systems and policies. It does not require a new set of principles. It does, however, require explicit consideration of these principles, particularly ethical principles. This will improve the chances of moving away from extractive data collection to ethical and meaningful feedback.
Key Message 5: A beneficiary feedback approach to evaluation does not in any way negate the need to give due consideration to the best combination of methods for collecting reliable data from beneficiaries and for drawing evidence from other sources.
As with any evaluation, consideration will need to be given to how to: avoid elite capture and bias; ensure diverse views, including those of women and men, are heard; develop a robust sampling protocol; and defend cost-effectiveness proposals and the generalisability of findings.
Concluding thoughts
It is time to move beyond the normative positioning around beneficiary feedback as “a good thing” towards explicit and systematic application of different types of beneficiary feedback throughout the evaluation process. The current approach to beneficiary as data provider raises important methodological and ethical questions for evaluators. The paper highlights these and shows that it is possible to adopt a meaningful, appropriate and robust approach to beneficiary feedback at key stages of the evaluation process, if not in all of them. It is suggested that the framework proposed is both reasonable and achievable and will be a useful tool for evaluation commissioners as well as practitioners.