This is the sixth in a series of essays surrounding the EdTech Efficacy Research Symposium, a gathering of 275 researchers, teachers, entrepreneurs, professors, administrators, and philanthropists to discuss the role efficacy research should play in guiding the development and implementation of education technologies. This series was produced in partnership with Pearson, a co-sponsor of the symposium co-hosted by the University of Virginia’s Curry School of Education, Digital Promise, and the Jefferson Education Accelerator. Click through to read the first, second, third, fourth, and fifth pieces.
Economists define a collective action problem as one in which many people (or organizations) each have an interest in seeing an action taken, but the cost to any one of them of acting independently is so high that no one acts, and the problem persists.
The world of education swirls with collective action problems. But when it comes to understanding the efficacy of education technology products and services, the failure to act collectively costs schools and districts billions of dollars and countless hours, and (sadly) squanders opportunities to improve outcomes for students.
Collectively, our nation’s K-12 schools and institutions of higher education spend more than $13 billion annually on education technology. And yet we have a dearth of data to inform our understanding of which products (or categories of products) are most likely to “work” within a particular school or classroom. As a result, we purchase products that often turn out to be a poor match for the needs of our schools or students. Badly matched and improperly implemented, too many fall short of their promise of enabling better teaching — and learning.
It’s not that the field is devoid of research. Quantifying the efficacy of ed tech is a favorite topic for a growing cadre of education researchers and academics. Most major publishers and dozens of educational technology companies conduct research in the form of case studies and, in some cases, randomized controlled trials that showcase the potential outcomes for their products. The What Works Clearinghouse, now entering its 15th year, sets a gold standard for educational research but provides very little context about why the same product “works” in some places but not others. And efficacy has now come to the forefront of our policy discourse, as debates at the state and local level center on the proper interpretation of ESSA’s mercurial “evidence” requirements. Set the bar too high, and we’ll artificially constrict a market laden with potential. Set it too low, and we’ll continue to let weak outcomes pass as evidence.
The problem is that most research addresses only a tiny part of the ed tech efficacy equation. Variability in school cultures, priorities, preferences, professional development, and technical factors tends to affect the outcomes associated with education technology. A district leader once put it to me this way: “a bad intervention implemented well can produce far better outcomes than a good intervention implemented poorly.”
After all, a reading intervention might work well in a lab or in another school, but if teachers in your school aren’t involved in the decision-making or procurement process, they may very well reject the strategy (sometimes with good reason). The Rubik’s Cube of master scheduling can also create variability in efficacy outcomes: Do your teachers have time to devote to high-quality implementation and troubleshooting, and then to make good use of the data for instructional purposes? At its best, ed tech is about more than tech-driven instruction. It’s about the shift toward using real-time data to inform instructional strategy. In some ways, matching an ed tech product with the unique environment and needs of a school or district is a lot like matching a diet to a person’s habits, lifestyle, and preferences: Implementation rules. Matching matters. We know what “works.” But we know far less about what works where, when, and why.
Thoughtful efforts are underway to help school and district leaders understand the variables likely to shape the impact of their ed tech investments and strategies. Organizations like LEAP Innovations are doing pioneering work to better understand and document the implementation environment, creating a platform for sharing experiences, matching schools with products, and establishing a common framework to inform practice — with or without technology. Not only are they on the front lines of addressing the ed tech implementation problem, but they are also on the leading edge of a new discipline of “implementation research.”
Implementation research is rooted in the capture of detailed descriptions of the myriad variables that undergird your school’s success or failure with a particular product or approach. It’s about understanding school cultures and user personas. It’s about respecting and valuing the insights and perspectives of educators. And it’s about presenting insights in ways that enable your peers to know whether they should expect similar results in their own schools.
Building a body of implementation research will involve hard work on an important problem. And it’s work that no one institution — or even a small group of institutions — can do alone. The good news is that solving this rather serious problem doesn’t require a grand political compromise or major new legislation. We can address it by engaging in collective action to formalize, standardize, and share information that hundreds of thousands of educators are already collecting in informal and non-standard ways.
The first step in understanding and documenting a multiplicity of variables across a range of implementation environments is creating a common language to describe our schools and classrooms in terms that are relevant to the implementation of education technology. We’ll need to identify the factors that may explain why the same ed tech product can thrive in your school but flop in my school. That doesn’t mean that every educator in the country needs to document their ed tech implementations and impact. It doesn’t require the development of a scary database of student or educator data. We can start small, honing our list of variables and learning, over time, what sorts of factors enable or impede expected outcomes.
The next step is translating those variables into metadata and creating a common, interoperable language for incorporating the insights and experiences of individuals and organizations already doing similar work. We know that there is demand for information and insights rooted in the implementation experiences and lessons of peers. If we build an accessible and consistently organized system for understanding, collecting, and sharing information, we can chip away at the collective action problem by making it easier and less expensive to capture and share perspectives from across the field.
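To make the idea concrete, here is a minimal sketch of what one record in such a shared, machine-readable vocabulary might look like, written in TypeScript. Every field name and value here (gradeBand, teacherPdHours, plannedMinutesPerWeek, and so on) is a hypothetical illustration of the kinds of variables discussed above, not a proposed standard.

```typescript
// A minimal, hypothetical sketch of an "implementation profile" record.
// Field names and allowed values are illustrative assumptions, not a standard.

type GradeBand = "K-2" | "3-5" | "6-8" | "9-12" | "higher-ed";

interface ImplementationProfile {
  productName: string;                  // the ed tech product being described
  gradeBand: GradeBand;                 // student population served
  devicesPerStudent: number;            // e.g. 1 for 1:1 programs, 0.5 for shared carts
  teacherPdHours: number;               // professional development time before launch
  teachersInvolvedInSelection: boolean; // were teachers part of the procurement decision?
  plannedMinutesPerWeek: number;        // intended usage ("dosage") per student
  actualMinutesPerWeek: number;         // observed usage, where known
  notes?: string;                       // free-text context from the educator
}

// One example record, as an educator might contribute it.
const exampleRecord: ImplementationProfile = {
  productName: "Example Reading Intervention",
  gradeBand: "3-5",
  devicesPerStudent: 1,
  teacherPdHours: 6,
  teachersInvolvedInSelection: true,
  plannedMinutesPerWeek: 90,
  actualMinutesPerWeek: 55,
  notes: "Schedule changes cut lab time in the spring semester.",
};
```

The point is not these particular fields but the consistency: if every contributor describes their context in the same structured terms, records from different schools become comparable, and the cost of contributing one more record stays low.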
The final step is making those shared insights accessible, fostering a community of connected decision makers who work together both to call upon the system for information and to continue contributing to it. Think of it as a Consumer Reports for ed tech. We’ll use the data we’ve collected to hone a shared understanding of the implementation factors that matter, while continuing to rely on the lived experiences of users to inform and grow the data set. Over time, we can achieve a shared way of thinking about a complex problem, bringing decision-making out of the dark and into a well-informed, community-supported environment.
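As a rough illustration of how peers might “call upon the system,” the sketch below (building on the hypothetical ImplementationProfile type above) filters shared records down to contexts similar to one’s own. The similarity thresholds are arbitrary assumptions made for the example, not a recommended matching method.

```typescript
// Hypothetical sketch: find shared records from contexts similar to ours.
// The similarity criteria below are arbitrary illustrations, not a proposed method.

function findSimilarImplementations(
  records: ImplementationProfile[],
  ours: Pick<ImplementationProfile, "gradeBand" | "devicesPerStudent" | "teacherPdHours">
): ImplementationProfile[] {
  return records.filter(
    (r) =>
      r.gradeBand === ours.gradeBand &&
      Math.abs(r.devicesPerStudent - ours.devicesPerStudent) <= 0.25 &&
      Math.abs(r.teacherPdHours - ours.teacherPdHours) <= 4
  );
}

// Usage: a school with a 1:1 program and similar PD time can see what peers experienced.
const similar = findSimilarImplementations([exampleRecord], {
  gradeBand: "3-5",
  devicesPerStudent: 1,
  teacherPdHours: 8,
});
console.log(similar.map((r) => `${r.productName}: ${r.actualMinutesPerWeek} min/week actual`));
```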
My work with colleagues at the first-ever EdTech Efficacy Research Symposium found that a growing number of providers, organizations, and associations are already working with educators to crowdsource efficacy data. Educators across the country are doing this work in informal but valuable ways. Bringing these efforts together and creating a more standard approach to their collection and dissemination is a critical step toward improving decision-making. My observation from both research and discussions with the field is that the effort is not only deeply needed but also enjoys broad support. If we take collective action, we can develop a democratic approach to improving the fit between ed tech tools and the educators who use them.
This series is produced in partnership with Pearson. This article was originally published by The 74 on January 2, 2018, and is reposted here with permission.