The growth of the evidence-based movement is the best thing to happen to education in decades and has led to genuine change for the better in our schools. The most exciting part of this is that it has been a bottom-up movement, led by the profession. Many people and organisations have helped pave the way (ResearchEd deserves a special mention), and in England one of the key initiatives has been the setting up of the Education Endowment Foundation.
The EEF of course contributes in a number of ways. One is running large-scale evaluations of educational interventions in a way, and at a scale, unprecedented in English education. The Research Schools Network is another very valuable initiative. Then there is the dissemination of evidence, not least through what is probably the EEF’s most influential product, the Teaching and Learning Toolkit. This is a very user-friendly tool which summarises a large volume of research clearly. It is an impressive endeavour and a useful first port of call in exploring evidence-based approaches. That said, there are in my view a number of issues with the toolkit that make it less valuable than it may at first sight appear. Many of these are inherent in treating evidence as a checklist.
The toolkit is essentially a summary of meta-analyses. Meta-analysis is a statistical technique in which the findings from different studies are aggregated to arrive at an average effect size. This is useful: in many fields there are a vast number of studies, which are hard to review in full and may reach contradictory findings, making it difficult to tell what the overall impact of an intervention is. Meta-analysis has therefore become an increasingly popular method across the sciences; it is widely used in medicine and psychology, for example.
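To make the basic idea concrete, here is a minimal sketch, in Python with made-up illustrative numbers, of how a simple fixed-effect meta-analysis pools effect sizes: each study’s estimate is weighted by the inverse of its variance, so larger, more precise studies count for more. The study values below are hypothetical and purely for illustration.

```python
# Minimal fixed-effect meta-analysis sketch (illustrative numbers only).
# Each study reports an effect size (e.g. a standardised mean difference)
# and the variance of that estimate.

import math

studies = [
    # (effect size, variance of the estimate)
    (0.40, 0.02),  # hypothetical study 1
    (0.10, 0.05),  # hypothetical study 2
    (0.25, 0.01),  # hypothetical study 3
]

# Inverse-variance weights: more precise studies get more weight.
weights = [1.0 / variance for _, variance in studies]

# Pooled (weighted average) effect size.
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)

# Standard error and 95% confidence interval of the pooled estimate.
se = math.sqrt(1.0 / sum(weights))
low, high = pooled - 1.96 * se, pooled + 1.96 * se

print(f"Pooled effect size: {pooled:.2f}, 95% CI: ({low:.2f}, {high:.2f})")
```

In practice meta-analysts often use random-effects models and further corrections, but the core idea is this kind of weighted average. Of course, the arithmetic only means something if the pooled studies are measuring the same thing in the first place, which brings us to the problems below.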
Meta-analysis is not without its problems, however, and some of these are heightened by the weaknesses of educational research in general. One is that, if we are to meaningfully assess the impact of an intervention and aggregate findings in a meta-analysis, we first need to agree on what the concept means. While in medicine this is usually the case, in education different researchers tend to use different definitions; think of concepts like feedback or motivation, for example. The problem is compounded by the fact that education researchers also often develop their own bespoke instruments rather than using established scales, as is more often the norm in psychology. This means it can be hard to determine whether the different studies we are aggregating are actually looking at the same thing.
Using meta-analysis as the basis of the toolkit also limits the evidence we can draw on for any one intervention, as not all areas have been the subject of many meta-analyses, and not all meta-analyses are of high quality (for methodological weaknesses of meta-analysis see, for example, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3868184/).
This points to a broader problem with a list of approaches like the toolkit’s: it puts together things that cannot straightforwardly be compared, such as classroom practices and organisational factors. Some categories are so broad as to be virtually meaningless, like ‘digital technologies’ or ‘early years interventions’; it is hard to see what the impact ratings of such broad categories actually mean in practice. It is also in some cases unclear to what extent the findings genuinely generalise across phases and contexts.
Another danger of this and similar lists of interventions is that they can encourage a pick-and-mix model of educational intervention. This is unlikely to be successful. Education is complex, and interventions are embedded in broader school contexts and cultures. Implementation is key, and relies on capacity and on fit with the school’s values and ethos. Picking interventions off a list encourages the silver-bullet fallacy that has long plagued educational policy and practice.
All of this does not mean that the toolkit and similar lists of effective approaches are not useful. They can provide a very good entry point in thinking about evidence-informed approaches. However, they need to be part of a thoughtful approach to improvement that takes the school’s own capacity, culture and context into account, and looks critically at claims made. It would be wrong to see any list as the definitive picture of the state of the art in educational research. Lists of effective interventions can be a starting point, but should never be the end.
A good, thoughtful reflection on using research to improve education.
It would also be nice to elaborate in a future post on education research’s focus on measurable short-term effects of interventions, rather than on what is to be achieved in the long run. We are becoming more and more skilled at effectively teaching small sets of skills, but are these the ones the kids need, and isn’t our great way of teaching preventing them from learning the skills they need more?
For example: it seems that science teachers in the first years of secondary school less and less often have students read textbooks for themselves, instead designing their lessons around presentations, fast-feedback exercises and so on. Kids get used to relying on the route the teachers set, lose the need to design their own learning trajectory, and don’t develop their reading and learning skills.
In languages we improve the teaching of grammar, but does that make students more capable of communicating in the language, or would the time be better spent watching films (didn’t research show that vocabulary and practice matter more than grammar?)? More kids are more capable of applying the skills they learned, but are they indeed better off in the long run?