The Learning Impact Fund’s remit is to identify, fund and evaluate programs that will raise the academic achievement of children in Australia, especially those from economically disadvantaged backgrounds. The fund aims to support the growth of the programs shown to work best at raising achievement. A Learning Impact Fund grant is intended to produce two outputs:
- A well-delivered intervention that has the potential to improve the academic achievement of children;
- A robust independent evaluation of the intervention, which includes an estimate of its impact on achievement, an estimate of its cost per student, and a rating of the strength of the evaluation.
Our approach to evaluation
Our approach builds on the successful approach of the Education Endowment Foundation, which has been funding programs and rigorous evaluations in England since 2011.
A central aim of Evidence for Learning is to improve knowledge and extend the evidence-base on what works and why to raise the achievement of Australian students. To achieve this, all Learning Impact Fund projects will be rigorously evaluated by independent experts in educational research. These evaluations will be funded by the Learning Impact Fund.
The impact of projects on achievement will be evaluated, where possible, using randomised controlled trials, with a linked process evaluation to understand the elements of successful delivery. Evaluations will be conducted by one of Evidence for Learning's independent panel of evaluators.
Evidence for Learning takes a cumulative approach to evaluation. The size of each evaluation, and therefore the number of schools or projects we require grantees to work with, will be determined by what we already know: whether a new approach needs to be piloted, or whether an intervention needs to demonstrate that it can work at scale.
Program scale in schools
The diagram above shows where in a program’s lifecycle the Learning Impact Fund intends to work. Two main features of a program will determine its place along this continuum:
- The degree to which it has been well-defined and codified; and
- The strength of the evidence that it is effective.
The Learning Impact Fund does not support early-stage programs that have not yet been well defined or delivered beyond a single school.
Types of trial
- Pilot trials: aim to support the final codification of a program through an independent developmental evaluation.
- Efficacy trials: support well-codified, promising programs to be delivered in more schools, and test whether they can deliver on their promise when implemented as intended, often with the direct support of the original developers.
- Effectiveness trials: support programs that have demonstrated efficacy to be delivered at an even larger scale, and test whether they can continue to deliver good results for students in a more scalable model, often without the direct support of the original developers.
Evidence for Learning believes that programs that have demonstrated impact at the effectiveness stage may be good candidates for system investment and support for further scale. However, the Learning Impact Fund does not intend to support programs in this scale-up phase.
We want to share our research about what works to raise student achievement. As projects progress, we will work with our partners to integrate the results of all evaluations into the summary of evidence for practitioners in the Teaching & Learning Toolkit.
As projects are completed, we will feature evaluation reports and examples of approaches that work, along with notes about how they work. Our approach to evaluation is rigorous and transparent. All the research we commission will be published on this website, regardless of the outcome.