Unlocking education’s implementation black box

The Learning Impact Fund trials give insight into which educational approaches do or don’t work.
Author: E4L

Over the past three years, Evidence for Learning has commissioned four research trials (three randomised controlled trials, or RCTs, and one pilot study) to evaluate the impact of programs and approaches on students’ educational outcomes. Associate Director Pauline Ho, who leads our Learning Impact Fund, shares her reflections and insights from managing the Learning Impact Fund projects and partnering with program developers, evaluators and systems across three states, 250 schools and 12 different organisations, including state departments.

Blog • 6 minutes

In aviation, ‘black boxes’ are used to collect critical information about each flight, including altitude, airspeed, vertical acceleration and fuel flow. The black box data is examined to determine what happened and how. In a similar way in education, we know which critical elements are needed to encourage learning (e.g. learning materials, instruction, content knowledge, assessment), but what exactly needs to happen to make a difference to student learning outcomes at scale is a field of study still in development.

The Learning Impact Fund trials, both the process and the results, gave us a unique opportunity to peer inside this education black box. Understanding which educational approaches work (or don’t work) sheds light on the complex link between pedagogy and student outcomes.1

The research trials set out to:

  1. Run robust evaluations of promising programs to provide an estimate of each program’s impact on achievement in months’ worth of learning, an estimate of its cost per student, and a rating of the strength of the evaluation.
  2. Detail and act on the evaluation steps and approach developed a priori (planned before an intervention starts), and justify any changes made along the way.
  3. Translate and communicate key findings and results in plain English reports for use by the profession.

The combined power of impact and process findings

Evidence for Learning reports the findings of both an impact and a process evaluation in all our trials. The impact evaluation tells us the difference a program made: did it have an additional impact above and beyond everyday classroom practice? We report the outcomes as effect sizes translated into months’ worth of learning, to make the results practical and accessible for educators.
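
To make that translation concrete, here is a minimal sketch of the kind of banded conversion involved, written in Python. The threshold values are hypothetical and included for illustration only; the months-of-progress figures in our reports come from the published conversion table, not these exact numbers.

    import bisect

    # Hypothetical effect-size thresholds (illustrative only, not the published
    # conversion table): an effect size at or above each threshold corresponds
    # to at least that many additional months of progress, capped at 12.
    THRESHOLDS = [0.02, 0.10, 0.18, 0.27, 0.36, 0.45, 0.53, 0.62, 0.70, 0.79, 0.88, 0.96]

    def months_of_progress(effect_size: float) -> int:
        """Translate an effect size into additional months of learning (0 to 12)."""
        return bisect.bisect_right(THRESHOLDS, effect_size)

    # Example: under these illustrative bands, an effect size of 0.20 would be
    # reported as roughly 3 additional months of progress.
    print(months_of_progress(0.20))

Reporting in months rather than raw effect sizes is a deliberate accessibility choice: a school can weigh ‘three additional months of progress’ against a program’s cost per student far more readily than a standardised effect size.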

Educational evaluations, however, need to go beyond impact results alone. The qualitative data from the process evaluation sheds light on the rich and complex picture of what happens when an intervention is delivered amid the day-to-day realities of classrooms and schools. Qualitative data may include observations of the intervention being implemented, interviews with students, teachers, school leaders and parents about possible barriers and successes, and assessments of the feasibility of implementing the intervention in schools. These findings were vital in explaining each program’s overall outcomes.

The implementation black box is worth considering if we want to know what exactly needs to happen to benefit student learning in different contexts. We share our reflections and learnings below.

Implementation matters – a lot!

By measuring whether an intervention has been implemented with fidelity, educators can gain a better understanding of how and why it works, and the extent to which outcomes can be improved.

How we implement a program, intervention or approach is important.

There are two things to consider:

  1. Program readiness: How is the program delivered in schools? What does the evidence tell us about the program and its implementation readiness (is there evidence to support the required dosage and time in the program, sufficient resourcing, and support for schools)?
  2. Implementation fidelity: Are schools able to integrate the intervention or program within their school context? Is there capacity to achieve the recommended level of fidelity (timetabling, resources, involvement and training)?

In relation to the first point on program readiness, the core question is not only what the program offers, but how we support schools to implement it. Even the most well-designed program, with evidence of clear benefits, can fail if we do not prepare well for implementation.2

The second point on implementation fidelity speaks to the mantra that faithful implementation is critical to understanding program effectiveness. Implementation fidelity refers to the degree to which an intervention or program is delivered as intended. When programs are implemented with fidelity, the average effect sizes can be two to three times higher than for programs not implemented with fidelity.3 In a study of a parent training program, parenting practices improved significantly when the program was implemented with high fidelity, but the effect was much smaller when fidelity was low.4

It is not possible to draw fair or valid conclusions about a program when it is not implemented as intended, for instance when only a small proportion of students received the required sessions, or when sessions were not carried out as intended. Small sample sizes also make it difficult to draw confident conclusions about a program or intervention. The core issue is that we need to be clearer on the ‘active ingredients’ that make a program work in schools: Are schools able to implement the approach and its recommended dosage? What level of compliance is needed before benefits to student learning occur?

Educators are more likely to improve their fidelity of implementation if they understand what elements are needed for a program to succeed, which active ingredients they have implemented well and which they need to improve.5 Importantly, if these active ingredients are untested, we should make a preliminary assessment of them before trialling or implementing the program more widely.

Evidence for Learning continues to build on these insights, working closely with program developers, evaluators and systems to identify potential challenges and successes before full implementation occurs.

Navigating a flight path to our destination

As a profession, we regularly discuss the need to scale and sustain good practices within and across a system. When effective strategies (that we know work!) are scaled up from one classroom or one school to become common standards of practice informing hundreds of thousands of students across regions and nationally, that is perhaps where real difference can happen.

Navigating the path from here, we should consider:

  • How do we get better at assessing implementation readiness, and the challenges and successes that might arise, before implementing and testing a program more widely?
  • How do we help programs achieve their intended outcomes and place them in context, so that teaching strategies successfully translate into student outcomes?
  • How do we support schools to achieve the level of fidelity needed for learning benefits to occur, and to address early signs of implementation gaps as soon as they arise?
  • How do research partners (program developers, evaluators, brokers, systems) make fair and effective evaluation decisions, particularly on the technical aspects of the design, to support a high-quality process?
  • What are the active ingredients of an intervention that are needed to see outcomes (and to inform future course corrections)?
  • How do we get better at harnessing schools as research partners, from acting as practice experts to assisting with recruitment and interpreting results?
  • How can schools, policymakers and government leaders use the results to collaboratively discuss practice and learning?

Just like air traffic controllers and pilots who need to ensure a clear flight path, schools and systems need to unlock the black box of programs and approaches and their successful implementation. These first independent research trials gave us not only insights into this complex process, but also a glimpse of a promising future in Australian education.

Find out more about the Learning Impact Fund research trials here.

References

1. Ho, P. (2018). Thinking Maths – what does the evidence say worked and what should schools consider? http://www.educationtoday.com.au/article/Thinking-Maths-1482

2. Vaughan, T., Borton, J., & Sharples, J. (2019). School improvement: Sowing the seeds of success. https://www.teachermagazine.com.au/articles/school-improvement-sowing-the-seeds-of-success

3. Durlak, J. A., & DuPre, E. P. (2008). Implementation Matters: A Review of Research on the Influence of Implementation on Program Outcomes and the Factors Affecting Implementation. American Journal of Community Psychology, 41, 327-350.

4. Thomas, R. E., Baker, P., & Lorenzetti, D. (2007). Family-based programmes for preventing smoking by children and adolescents. Cochrane Database of Systematic Reviews, (1), CD004493. http://dx.doi.org/10.1002/14651858.CD004493.pub2

5. Sharples, J., Albers, B., Fraser, S., Deeble, M., & Vaughan, T. (2019). Putting Evidence to Work: A School’s Guide to Implementation. https://evidenceforlearning.org.au/guidance-reports/putting-evidence-to-work-a-schools-guide-to-implementation/