Do Studies Really Show?
On the unreliability of education research
Jason Caros | October 17, 2023
Have you ever heard someone make a point by saying, “Studies show…” or “The research says…”? These phrases are common in the field of education, used when a speaker wants to reinforce what is perceived to be a good practice, or when someone is trying to implement some “new and improved” way to teach children. The phrases are rooted in good scientific practice, as we all benefit from research done well. With that said, when I hear them uttered, especially in education settings (and in politics), a siren immediately goes off in my mind, because too many educators and education policymakers who quote studies do not seem to know that studies are not created equal and that their outcomes are not always reliable or demonstrably true. As a matter of fact, people who make claims by simply saying “studies show” or “research says,” without sound evidence to support those claims, may be engaging in a type of fallacious reasoning, perhaps a correlation/causation fallacy, perhaps some other error. What do I mean?
When a study is published in a scientific journal, it describes the researchers’ methods so that others can copy and build upon the work. When another group of researchers conducts a follow-up study based on the original, they have attempted to replicate it. When the result of the second study is consistent with the first, the finding is more likely to be reliable. When the second group finds different or inconsistent results, this may suggest that the original research was unreliable or faulty in some way. There is nuance that could be added here regarding sample sizes (small samples prevent researchers from drawing wider conclusions from the original research), use of proper methods, and so on, but for the purposes of this reflection, the key thing to recognize is that just because research is published does not mean it has been replicated or that it is reliable. In other words, when you see a study cited in an article, journal, or social media post, its claims may not be reliable. In fact, the research is likely unconfirmed and ought to be regarded with some level of suspicion.
In Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hope, and Wastes Billions, former NPR science reporter Richard Harris explains what every scientist knows: studies mean little if they are not done correctly and if they are not replicated by other researchers in the field. To underscore the point, just because the results of a study appear in a journal, even a reputable one, does not mean that the results have been replicated and confirmed. So my usual retort to “studies show” or “research says” is: “Has the study been replicated? Is it confirmed research?” Harris points out that when studies are put to the test of replication, most of them, over 60 percent, prove faulty. Even in the hard sciences, a good amount of published research is unreliable, and in the softer sciences (i.e., the social sciences) it is worse.
John Ioannidis, professor of medicine, of epidemiology and population health, and of statistics and biomedical data science at Stanford University, a highly cited and widely recognized authority on research methods and the generation of reliable evidence, says that the majority of studies that make their way into journals, even serious ones, are sloppy. He attributes this, in part, to insufficient training in statistics and methodology, which undermines the ability of others to replicate previous studies. He and others also discuss biases and ethical problems that help explain the lack of replication in research, including submission bias, editor bias, journal publication policy bias, funding bias, and novelty-equals-creativity bias, as well as plain old immorality (regarding ethics, read Science Fiction: The Crisis in Research).
But surely, out of the thousands of studies that appear in science journals, the majority must have been replicated and shown to be reliable. Right? Actually, no. Most published studies are never replicated, and a 2016 survey conducted by Nature, one of the premier science journals, found that of the 1,576 researchers who took the journal’s survey on the reproducibility of research, more than 70 percent had tried and failed to reproduce another scientist’s experiments. This lack of replication is seen across fields and has been reported by multiple research groups that have investigated the phenomenon, which is known as the “Replication Crisis” or “Replicability Crisis.”
Interestingly and sadly, non-replicable studies are cited and used often. A pair of economists from the University of California analyzed studies in the top economics, psychology, and science journals and found that studies that had failed to replicate were more likely to be cited than studies that had been replicated, and that the influence of the non-replicated papers grows as time goes on.
In which fields is replication least observed? While replication of studies in the hard sciences (e.g., chemistry) and in medicine is strikingly infrequent, two of the least-replicated fields are psychology and education. A 2012 review of the publication history of the top 100 psychology journals found that only 1.07 percent of publications were replications, and a 2014 review of education studies published in the field’s top 100 journals found that only 0.13 percent were replications (of 164,589 articles published, only 221 were replications).
0.13 percent? A good deal less than 1 percent of education studies are replicated! And now, finally, we get to the crux of this message. One of the hallmarks of classical education is a rejection of a good deal of the 20th- and 21st-century progressive “reforms” that have negatively impacted our schools. American education declined as our schools rejected the collected wisdom of the past and as many modern educators and policymakers adopted novel and untested ideas. With that said, classical school educators do not reject everything modern. Following thinkers and researchers like E. D. Hirsch and Daniel Willingham of the University of Virginia, we affirm confirmed (replicated) education research, such as the findings from cognitive science about how students learn and the practices that favor long-term memory, and we use some of the tools of modernity. My great hope is that classical education will continue to grow and blossom in this country and that, over time, other schools will return to more tried-and-true practices. There is a glimmer of hope, but that is a subject for another occasion.
To be clear, I do not reject science, nor do I reject scientists. While teaching is primarily an art, the knowledge gained from good and reliable research ought to inform classical educators in their practices. Additionally, our students should learn the fundamentals of effective research. Many scientists and researchers are doing good and valuable work, so it is important for us in schools to recognize the good in that work, to question unconfirmed research and fallacious claims, especially when educators and policymakers attempt to make changes in our schools, and to always use discernment to distinguish truth from tall tales.
Jason Caros is a husband and father, and he serves as Associate Superintendent for Academics for Founders Classical Academies. He served as a classical school headmaster for twelve years at Founders Classical Academy of Lewisville.