Responding to reservations about meta-analyses: Part 2

The second piece in our series addressing reservations about the validity and usefulness of meta-analyses in education.

In a post on this topic two weeks ago, I began my response to Deb Netolicky’s thoughtful reservations about the Teaching & Learning Toolkit (the Toolkit) and the methodology that underpins it. I promised further posts responding to some of the other reservations she mentioned, so I’d like to pick up on a second one here. I’m joined by my colleague, Associate Director Dr. Tanya Vaughan.

Over-simplification and losing context

Deb notes that the Toolkit has ‘the potential to over-synthesise limited quantitative data to the point of distorting original findings, and ignore the limitations, qualities and complexities of the synthesised studies.’ This concern about losing the complexity in original studies is echoed in some of the other objections Deb cites, too. She notes that Simpson (2017) points out that meta-analyses often do not compare studies with ‘the same comparisons, measures and ranges of participants.’ She also refers to Snook et al.’s (2009) argument that ‘when averages are sought or large numbers of disparate studies are amalgamated … the complexity of education and of classrooms can be overlooked.’

Value of highlighting patterns

These are all legitimate concerns, and we need to be constantly alert to the risk of oversimplification. However, the strength of meta-analysis as a research tool, and the purpose of the Toolkit more broadly, is to make the significant patterns within academic research accessible to teachers so that they can use them to make better decisions for the benefit of their students. To identify these patterns, some of the ‘qualities and complexities’ of the underlying studies must necessarily be lost from view temporarily.

An analogy may be helpful here. As a teenager in the American west, I was a keen rock climber. On one climb in the Grand Canyon, I ran to the base of a cliff and started to make my way up, without taking time to look for the best route from a distance. About two-thirds of the way up, I ran out of hand-holds and foot-holds and could make no further progress. I was, luckily, able to make my way back down, but taking a few minutes to look at the whole cliff before I started the ascent would have been immensely helpful in identifying an appropriate route.

The Toolkit tries to give that wide view of the whole cliff. You still need the acumen and skill to manage the individual hand-holds and foot-holds in each climb, but seeing the best possible routes to the top gives you a greater chance of success in getting there. There are always risks involved in simplifying complex evidence, but we believe those risks are outweighed by the significant value of that evidence in informing professional decision-making.

Simplifying without over-simplifying

We also believe that meta-analysis can simplify without over-simplifying, and the Toolkit methodology has been designed to avoid many of the common pitfalls.

Pitfall 1: Combining multiple outcomes

One common pitfall that can lead to over-simplification is combining studies with multiple student outcomes into one effect size. The studies included in the Teaching & Learning Toolkit do draw on different measures of student outcomes, so the Toolkit could be seen to succumb to this pitfall. However, in light of concerns about the narrowing of the curriculum that can follow from a focus on just one type of outcome (Caldwell and Vaughan, 2011), the weighted mean effect size usefully reflects the diversity of the impact of, for example, feedback.

Looking at the meta-analyses underpinning the Toolkit’s feedback strand (see the strand’s Technical Appendix), we can see why this diversity of outcome measures is useful rather than something to be feared. One meta-analysis included in the feedback strand measures the impact of feedback on motor-skill development, while another focuses on student writing outcomes. Both types of skill are important for students to develop, and providing details on studies measuring both means that teachers can see that feedback helps students develop a range of desirable skills. Where a Toolkit strand naturally applies to a narrower range of student outcomes, the included studies do focus on a narrower range of measures. For example, the Toolkit’s reading comprehension strategies strand only includes studies that measure reading outcomes.
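To make this concrete, here is a minimal illustrative sketch (the effect sizes are invented, and this is not the Toolkit’s data or code) of how studies measuring different outcomes can contribute to one headline average while the per-outcome detail remains visible:

```python
# Illustrative only: invented effect sizes, not Toolkit data.
# Shows how studies measuring different outcomes can feed one overall
# average while the per-domain detail remains available.
from collections import defaultdict

studies = [
    {"outcome": "writing", "effect_size": 0.45},
    {"outcome": "writing", "effect_size": 0.30},
    {"outcome": "motor skills", "effect_size": 0.55},
    {"outcome": "reading", "effect_size": 0.40},
]

# Headline figure: the simple mean effect size across all studies
overall = sum(s["effect_size"] for s in studies) / len(studies)
print(f"Overall mean effect size: {overall:.2f}")

# Detail preserved: the mean within each outcome domain
by_domain = defaultdict(list)
for s in studies:
    by_domain[s["outcome"]].append(s["effect_size"])
for domain, sizes in by_domain.items():
    print(f"  {domain}: {sum(sizes) / len(sizes):.2f} ({len(sizes)} studies)")
```

The headline figure and the per-domain breakdown coexist; reporting an average does not erase the detail beneath it.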

Pitfall 2: Valuing all studies equally

A second common pitfall that might lead to over-simplification is assuming all included meta-analyses should have the same influence on the final estimate of progress. The Toolkit avoids this by calculating, where possible, a weighted mean effect size. The weighted mean gives greater emphasis (i.e. more influence on the pooled estimate) to those studies that are more representative of the whole target population. Technically, this means giving more weight to studies with a smaller standard error, which serves as a proxy for the number of students in a study (generally, as the number of students increases, the standard error decreases). By taking this approach, the Toolkit gives greater weight to studies that provide more representative results.
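For readers who want to see the arithmetic, here is a minimal sketch of an inverse-variance weighted mean, a standard way of giving studies with smaller standard errors more influence. The effect sizes and standard errors are invented for illustration, and this is not the Toolkit’s actual calculation:

```python
# Illustrative only: invented effect sizes and standard errors.
# A fixed-effect, inverse-variance weighted mean: studies with smaller
# standard errors (typically larger samples) receive more weight.
studies = [
    {"name": "Small study",  "effect_size": 0.60, "se": 0.25},
    {"name": "Medium study", "effect_size": 0.40, "se": 0.10},
    {"name": "Large study",  "effect_size": 0.35, "se": 0.05},
]

weights = [1 / s["se"] ** 2 for s in studies]  # weight = 1 / SE^2
weighted_mean = sum(w * s["effect_size"]
                    for w, s in zip(weights, studies)) / sum(weights)

print(f"Weighted mean effect size: {weighted_mean:.2f}")
```

In this toy example the pooled estimate (about 0.37) sits much closer to the large, precise study’s result than a simple unweighted average (0.45) would.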

These features of the methodology underpinning the Toolkit mean that its high-level conclusions do not over-simplify the complexities of the underlying research. They give an accurate wide-angle view of the cliff, so to speak.

To continue with the analogy, taking a wider look at the cliff can give you a sense of where other climbers have had success before. Similarly, the Toolkit can only tell us what has been successful, on average, with other pupils, in other schools, and in a range of different contexts. It conveys ‘what has worked’, rather than what will work somewhere else. It draws on the experiences of many hundreds of thousands of students and teachers around the world to give school leaders and teachers a sense of the ‘best bets’ for impact. Where the Toolkit indicates something is a good bet, and schools identify a good match with their priorities, there will still be a need to adopt or implement a given intervention in ways that are sensitive to local context and increase the chances of success. It is this combination of rigorous external research evidence with professional judgement – knowledge of students, context and challenges – that really makes the difference.

Other evidence resources give direction

When considering context, it is also important to note that the Toolkit is only one type of resource that distils insights from the existing evidence base. In her post, Deb quotes an objection from Dylan Wiliam: ‘Meta-analysis is simply incapable of yielding meaningful findings that leaders can use to direct the activities of the teachers they lead.’ As I indicated in my last post, the Toolkit is not intended to dictate or direct professional decisions in schools. Other resources more like what Wiliam calls for – distillations that can give direction – are often required to complement the Toolkit. As one example, our colleagues at the EEF and at the US Department of Education’s Institute of Education Sciences produce practice guides on a variety of topics. These practice guides go a step beyond the Toolkit: they don’t just indicate the average effect of, say, Teaching Assistants, but explain, based on current, reliable evidence, what to do in a school to get the most positive impact from them. More detailed resources, based on rigorous research from the field of implementation science, support school leaders through the hard work of assessing whether the recommendations are likely to work for them and, if so, adapting the recommendations to their local context. To return to the climbing analogy, practice guides and their accompanying materials are like ‘guides’ for what kind of hand-holds to look for and how to know whether you’re ready to ascend a particular kind of route.

As the availability of these types of resources increases (yes, we’re working on developing Australian practice guides), we hope the evidence presented in the Toolkit will take on even more practical value for teachers across Australia. In developing these resources, the input and insight of practitioners are essential to ensure that schools don’t fall into the trap Deb warns us about of ‘not taking the time to consider what might work where, for whom, and under what conditions.’

The Toolkit is one of a suite of resources that, when combined with professional knowledge and skill, can help schools use evidence to get great outcomes for their students. As new, better and more accessible evidence becomes available, we hope this suite of supporting materials will help all teachers and students climb toward their goals.

We’ll return with one final post to address two further reservations about meta-analysis.

In the meantime, you can read Responding to reservations about meta-analyses: Part 1.

John Bush is the Associate Director of Education at Social Ventures Australia and part of the leadership team of Evidence for Learning. In this role, he manages the Learning Impact Fund, a new fund building rigorous evidence about Australian educational programs.

Dr. Tanya Vaughan is an Associate Director at Evidence for Learning. She is responsible for the product development, community leadership and strategy of the Toolkit.