Why randomisation is not detrimental

When trying to determine what impact an intervention is having on students' learning, we need a point of comparison.

When education policymakers introduce a new initiative, it is assumed that it will benefit students. No one would spend resources on something that doesn't work, so it would be unfair not to give this new initiative to as many schools and students as possible, right?

This mindset comes from a good, optimistic place, but it often interferes with the ability to accurately evaluate new education interventions. There is a natural reaction against the idea of not all students having the same conditions for learning. Yet when trying to determine what impact an intervention is having on students' learning, it's important to have a comparison against business as usual. This means that some schools will not get the intervention (at least initially), and ideally these schools will be randomly chosen.

It's tough to select only some schools or students to trial a new education intervention, but it's important to remember that, with anything new, we don't actually know whether it works yet. Even if it's accepted that some schools or students will not receive the intervention initially, there is usually an impulse to make sure the intervention goes to the schools with the greatest presumed need. But unless schools are chosen randomly, it will be difficult to measure impact and make a valid case that the intervention has worked.

This is because a change doesn't take place in a vacuum: when something new is introduced into a school, it replaces or adds to a previous model, and its effectiveness is shaped by that particular school environment. For example, if students are taken out of class for a literacy or mathematics catch-up program, they will have a different day of learning than if the program ran outside of class time. Likewise, a new intervention to reform mathematics teaching will probably have a different effect in a school with experienced mathematics teachers than in a school where teachers are teaching out-of-field. Interventions may be beneficial for some areas of learning and not others, and may affect students of different ages in different ways. This information will not be identified unless the population is randomised.

There are three main benefits to using a randomised controlled methodology that we will discuss in this blog:

  • Remove selection bias
  • Simplified conclusions
  • Incorporation in meta-analysis (Hutchison & Styles, 2010).

Remove selection bias

Selection bias occurs when there are meaningful differences between the students receiving the changed conditions for learning and those who are not. These differences are hard to control for in other types of studies, like those outlined in Evidence for Learning's Hierarchy of Evidence (Deeble & Vaughan, 2018) in Figure 1. In a quasi-experimental design (one with a control group that is not randomised), for example, you will always have questions about whether the change in learning resulted from the intervention or from something that was inherently different between the two groups from the beginning. A school that is more likely to make a change or try a new way of learning, for instance, may have a culture that differs from a school that is less likely to adopt a change. The only way you can be sure that there are no systematic differences between the groups is to randomise them.


Figure 1: Hierarchy of Evidence (Deeble & Vaughan, 2018, p. 4)
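To make this concrete, here is a minimal sketch in Python, using entirely hypothetical numbers, of why a self-selected comparison group can mislead while a randomised one tends to balance out hidden differences between schools:

```python
import random
import statistics

random.seed(1)

# Hypothetical data: each school has a hidden "readiness for change" score
# that also tends to lift student outcomes regardless of any intervention.
readiness = [random.gauss(0, 1) for _ in range(2000)]

# Self-selected comparison: keener schools are more likely to opt in to the trial.
self_selected = [(r, r + random.gauss(0, 1) > 0) for r in readiness]
treated = [r for r, opted_in in self_selected if opted_in]
control = [r for r, opted_in in self_selected if not opted_in]
print(f"Self-selection: treated mean readiness {statistics.mean(treated):.2f}, "
      f"control {statistics.mean(control):.2f}")

# Randomised comparison: a coin flip decides which schools get the intervention first.
randomised = [(r, random.random() < 0.5) for r in readiness]
treated = [r for r, assigned in randomised if assigned]
control = [r for r, assigned in randomised if not assigned]
print(f"Randomisation: treated mean readiness {statistics.mean(treated):.2f}, "
      f"control {statistics.mean(control):.2f}")
```

Under self-selection, the schools that take up the intervention start out more 'ready for change' than the comparison schools, so any later difference in outcomes is partly baked in from the beginning; under random assignment, the two groups start out, on average, the same.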

Simplified conclusions

Impact from an education initiative doesn't just come from its immediate recipients; learning from the evaluation of what worked and what didn't can be even more influential. It is therefore important to communicate evaluation findings clearly. Educators have reported the usefulness of having a clear evidence base, like the Teaching & Learning Toolkit (Education Endowment Foundation, 2018), that they can easily draw on, with key measures such as months' worth of learning impact, cost and evidence security. Frances Roberts, Head of Curriculum at Bounty Boulevard State School, says:

Associate Professor Mark Rickinson (2016) wrote that practising educators:

Using a randomised controlled methodology lets us draw simple, clear conclusions that support evidence-informed decision making. These findings can then be incorporated into the wider evidence base through meta-analysis (combining studies that report effect sizes), so that educators can look at the evidence from around the world grouped together.

Incorporation in meta-analysis

If an evaluation uses a randomised controlled methodology, then it can be incorporated into a meta-analysis. This means it can be combined with other studies from around the world on the same subject matter, so we can draw a conclusion about whether that approach is helpful to learning. There are many examples of this, including Evidence for Learning's Teaching & Learning Toolkit (Education Endowment Foundation, 2018), John Hattie's work (Hattie, 2009, 2012) and the work of Marzano and colleagues (Marzano, Waters, & McNulty, 2005), to name just a few.
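As a rough illustration of what 'combining studies that report effect sizes' involves, here is a minimal sketch of a fixed-effect, inverse-variance weighted meta-analysis; the trial names, effect sizes and standard errors are made up for the example and are not drawn from any of the studies cited here:

```python
import math

# Hypothetical effect sizes and standard errors, not taken from real trials.
studies = [
    {"name": "Trial A", "effect_size": 0.25, "std_error": 0.10},
    {"name": "Trial B", "effect_size": 0.10, "std_error": 0.08},
    {"name": "Trial C", "effect_size": 0.40, "std_error": 0.15},
]

# Fixed-effect meta-analysis: weight each study by the inverse of its variance,
# so more precise studies contribute more to the pooled estimate.
weights = [1 / s["std_error"] ** 2 for s in studies]
pooled = sum(w * s["effect_size"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect size: {pooled:.2f} (standard error {pooled_se:.2f})")
```

More precise studies (those with smaller standard errors) get more weight, which is one reason well-run randomised trials are so valuable to syntheses like the Toolkit.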

We are not saying that this is the only type of evidence that matters. Evidence for Learning's Learning Impact Fund (Evidence for Learning, 2018) uses randomised controlled methodology to look at the impact of programs such as Thinking Maths (due to be released in September 2018), and MiniLit and QuickSmart Numeracy (due in late 2018). Alongside the team of statisticians for each of the independent evaluators (including ACER and the Melbourne Graduate School of Education), there is a team of qualitative researchers to determine how the changed learning conditions have impacted students' learning. Both Tanya and Katie, in their work as evaluators, have used qualitative and quantitative methodologies to answer different education research questions. The narrative of the change matters when thinking about what makes something work in different settings.

Conclusion

We hope that a growing number of policymakers and educators will see the usefulness of randomisation in evaluating the impact of a change in learning conditions. We are not advocating for this to be the only way that research is conducted, as a program or change in approach needs to be at a certain stage of development for this type of research to make sense. The benefit of randomisation is that it removes selection bias, allows clear and simple conclusions to be communicated, and produces results that can be incorporated into meta-analyses.

Dr. Tanya Vaughan is the Associate Director at Evidence for Learning. She is responsible for the product development, community leadership and strategy of the Teaching & Learning Toolkit.

Katie Roberts-Hull is the Director of Policy at Learning First. She works on how high-performing systems have reformed early school education, and on evaluating professional learning programs across systems.

References

Deeble, M., & Vaughan, T. (2018). An evidence broker for Australian schools. Centre for Strategic Education, Occasional Paper (155), 1–20. Retrieved from http://www.evidenceforlearning.org.au/index.php/evidence-informed-educators/an-evidence-broker-for-australian-schools/

Education Endowment Foundation. (2018). Evidence for Learning Teaching & Learning Toolkit. Education Endowment Foundation. Retrieved from http://evidenceforlearning.org.au/the-toolkit/

Evidence for Learning. (2018). The Learning Impact Fund. Retrieved from http://evidenceforlearning.org.au/lif/

Hattie, J. (2009). Visible Learning: A synthesis of over 800 meta-analyses relating to achievement. London: Routledge.

Hattie, J. (2012). Visible Learning for Teachers. New York and Canada: Routledge.

Hutchison, D., & Styles, B. (2010). A guide to running randomised controlled trials for educational researchers. Slough: NFER.

Marzano, R. J., Waters, T., & McNulty, B. A. (2005). School leadership that works: From research to results. ASCD.

Rickinson, M. (2016). Communicating research findings. In D. Wyse, N. Selwyn, E. Smith, & L. E. Suter (Eds.), The BERA/SAGE Handbook of Educational Research. London: Sage.