To increase the educational return on investment, schools and systems should engage with evidence. We focus on the interaction of two processes in an evidence ecosystem to improve efficacy and efficiency:
- the Impact Evaluation Cycle within schools; and
- the wider evidence chain involving education researchers and policy makers.
The two cycles feed into and build on each other to improve overall evidence availability, accessibility, and use within schools over time – and ultimately, to improve educational outcomes in Australia.
'Creating evidence ecosystems … requires co-ordinated efforts from a wide range of stakeholders [but] it is imperative that professionals drive these developments. Yes, policymakers have a responsibility to ensure there is a coherent overall system, and indeed, researchers have a duty to produce high quality research, yet it is frontline professionals who … should be at the heart of evidence-informed practice.'
(Sharples, J. Evidence for the Frontline, 2013)
Impact Evaluation Cycle
Impact Evaluation is an evidence-informed cycle of selecting, implementing and evaluating an intervention to create an intentional improvement within the school.
An impact evaluation cycle engages with evidence and data in a process of continuous improvement.
We assert that, to improve educational outcomes, impact evaluation is needed both by schools and within them.
'School leaders need to be continually working with their staff to evaluate the impact of all on student progression. Leaders need to create a trusting environment where staff can debate the effect they have and use the information to devise future innovations… Schools need to become incubators of programs, evaluators of impact and experts at interpreting the effects of teachers and teaching on all students.'
(Hattie, J. What Works Best in Education: the Politics of Collaborative Expertise, 2015)
Wider evidence chain
Engagement with evidence involves a range of interactions, including accessing, interpreting and interrogating data and research, designing and running relevant trials, measuring and evaluating outcomes, creating and substantiating hypotheses, consulting stakeholders and communicating and sharing emerging evidence.
Our Program Logic is structured around this chain in order to monitor the impact of our projects.
Effective impact evaluation cycles within schools, however, cannot occur if the wider evidence chain – which governs the general availability and accessibility of evidence that schools can use – is weak.
We believe it is this model, and the way of thinking and working it embodies, that will drive an aggregate improvement in educational outcomes for students in Australia.