Teaching and Learning blog

Explore insights, trends, and research that impact teaching, learning, and leading.


  • 90%+ first-call resolution, and powerful support for GGU’s teaching mission

    By Golden Gate University-San Francisco, CA

    SUCCESS STORY

    World-class support for 5,000+ busy adult learners

    To make higher education work for its students, many of whom are working professionals, Golden Gate University (GGU) offers flexible programs both online and at four campuses. Even its in-person courses are extensively enhanced with robust web components, and some have evolved towards flipped learning models.

    Both GGU’s students and its instructors are deeply reliant on the university’s online LMS and other systems. However, they have diverse expertise, and equally diverse hardware, ranging from old laptops to the newest smartphones.

    Students with full-time jobs often set aside nights and weekends for schoolwork. Most GGU faculty work professionally in the fields where they teach, bringing a wealth of experience and enthusiasm. Both students and teachers often need help desk support, especially as GGU has integrated more robust web functionality into courses—and neither group has time to wait for answers.

    As Doug Geier, GGU’s Director of eLearning and Instructional Design, puts it, “We provide really good support for our instructors and students, but we rely on the help desk to fill a critical need.”

    GGU’s small internal help desk responds during weekday business hours, focusing not only on technical help, but also on calls requiring involvement from administrative offices. To fill the gaps, GGU chose Pearson, which seamlessly extends GGU’s own help desk, presenting its services as part of GGU. Through this close partnership, the help desk delivers 24x7x365 support for virtually any technical problem, regardless of location or device.

    GGU chooses to pay on a per-inquiry basis, smoothly ramping up whenever it needs more help—for example, at the start of each trimester, when new students must quickly solve login or compatibility issues.

    Pearson’s reporting helps both partners identify emerging trends in support calls and escalations, flag individuals who need more training, find opportunities to improve, uncover student or faculty retention issues, and improve course quality to support GGU’s teaching mission.

    GGU’s Pearson help desk consistently exceeds 90% first-call resolution, so students and faculty can quickly move forward with their work. GGU’s Geier notes that some calls the help desk can’t resolve are due to issues it can’t control. “When that happens, Pearson can take the calls, offer some assurance as to when it’ll be fixed, and make sure our students and faculty don’t feel like they’re all alone. And sometimes Pearson’s help desk is first to know of a problem, and [they] tell us so we can follow up more rapidly.”

    Working together for more than six years, Pearson and GGU have built a trusted collaborative partnership with multiple benefits. “We reached out to Pearson as we integrated Turnitin to improve student writing and prevent plagiarism, and when we recently deployed a new video platform,” says Geier. “Pearson’s wide higher education support capabilities are becoming ever more critical as we continually expand the utility of our LMS and online course environment.”

    “Pearson’s help desk is incredibly responsive,” Geier concludes. “Their service is top-notch, it’s customizable, and it’s helped us come a long way in how we work with students and faculty. Pearson does more than just provide services: this is a true partnership.”

    Doug Geier, Director of eLearning and Instructional Design
    Golden Gate University

    To learn more about Golden Gate University’s help desk services, read the full success story.


  • Tapping into G-R-I-T to enhance students' 'burn to learn'

    By Paul G. Stoltz, Ph.D.

    Helping students effectively harness their GRIT comes down to the difference between telling them about it and equipping them with the tools to acquire and grow it. I recently experienced the stark contrast between mere advising and actual “equipping” when I failed my own godson at a critical time.

    How? Well, instead of helping him tap into his GRIT in substantive and productive ways, I fell into the “sympathetic (if meaningless) advice trap.” Let my failure illuminate our path.

    As a first-term, out-of-state freshman at a challenging four-year university with a rigorous major, my godson has plenty on his plate and no shortage of distractions. But when the deadliest fires in California’s history surrounded his hometown of Napa, being away from home took on new meaning to him.

    Even though his family and pets were safe and their most precious possessions secured, summoning the drive and the discipline to slog through calculus homework seemed overwhelming and unimportant to him. He simply stopped doing it, and even when he tried to apply himself, his commitment soon waned. This was understandable given the circumstances, but not ideal.

    So, what did I do? I checked in with him, offered some moldy clichés and bland old platitudes like, “Thank goodness they’re safe”; “Don’t hesitate to call me anytime”; and “It’s always good to remember: It could be so much worse.” Nice? Yes. Heartfelt? Definitely. But I could have done so much better by him. I missed my moment.

    What I didn’t do was serve up the harder truth. I didn’t take this critical opportunity to help him realize that “stuff happens,” adversity strikes, and moments like these—when it feels like life is grabbing you and strenuously pulling you away from your educational goals—are both the key tests of your GRIT and the opportunities to significantly grow and apply it to things that matter.

    Every student experiences some combination of rigorous academics, relational breakups, family issues, health concerns, roommate dramas, bureaucratic headaches, personal injustices, scheduling conflicts, emotional hardships, financial stress, external pressures, and existential angst while pursuing a college degree. This is a long list, but any worthy path is strewn with struggles!

    My godson didn’t need my warm but vague advice as much as he needed the essential, practical tools to truly own—to dig deeper and better in order to unwaveringly pursue—his learning and his goals in the midst of his struggle. How could I have helped? I should have pointed him to the GRIT questions.

    Each and every component of GRIT—Growth, Resilience, Instinct and Tenacity—is critical, and individuals must fully engage with them to truly own and achieve worthwhile educational goals.

    Consider these four facets of GRIT and the questions I, a teacher, a counselor, or anyone can ask about each one to help students own their learning, their goals, and their lives in good times and bad.

    G–Growth

    The propensity to seek out fresh ideas, perspectives, input, and advice to accelerate and enhance one’s progress toward one’s long-term, difficult goals.

    Growth is about going after one’s goals and finding out what one needs to know in order to get there better and faster. It shifts a student from being a victim or a passenger to being the driver of his own journey. This dimension of GRIT accelerates growth, learning, and momentum, while reducing the kind of frustration and exasperation that lead many to fall short or quit.

    • What new resources might you tap into to get some clarity and support around your goal?
    • Who could you talk to, both inside and outside of school, who could offer you the best, freshest wisdom on this issue or concern?
    • Do you notice that as you keep attempting to achieve your goal, the effort seems to be making you stronger and allowing you to imagine new strategies to get where you want to go?

    R–Resilience

    One’s capacity to not just overcome or cope with, but to make constructive use of adversity.

    One of the big wake-up calls in education is: Adversity is on the rise everywhere, and resilience truly matters. Support and resources are external. Resilience is internal. Resilience is not about bouncing back. That’s not good enough.

    It’s about harnessing adversity, using it as fuel to end up better off because of the increased strength and knowledge that comes from working through and overcoming a difficult obstacle. There is no better place for a student to learn and master this distinction than in higher education.

    • While you perhaps can’t control this situation, what facets of this situation can you at least potentially influence?  Of those, which one(s) matters most to you?
    • How can you step up to make the most immediate, positive difference in this situation?
    • How can you use your experience of struggling against this adversity to actually fuel your next attempt to reach your goal?

    I–Instinct

    One’s propensity to pursue the best goals in the most effective ways.

    Arguably one of the most consistent and potent contributors to student failure, dropouts, or underperformance is a lack of Instinct. The vast majority of students waste tremendous energy, time, and effort pursuing less than ideal goals in less than optimal ways. That’s why so many lose their way or quit. That’s why it’s important to ask:

    • What adjustment(s) can you make to your goal to have it be even more compelling and clear for you?
    • What specific tweaks or shifts can you make to how you are pursuing your goal to best accelerate and/or enhance your chances of achieving it?
    • As you think about your goal (e.g. graduation), in what ways might you be wasting your precious time, energy, and/or effort?  If you could do less of one thing and more of another to most dramatically enhance your chances of success, what would that look like?

    T–Tenacity

    The sheer relentlessness with which one pursues one’s most important, long-term, difficult goals.

    This is the classic, traditional definition of basic grit. But as the world of education wakes up to the hard reality that more tenacity is not always a good thing, we have an opportunity to infuse the qualitative aspects of GRIT. These include two continua, Good versus Bad GRIT, and Effective versus Ineffective GRIT.

    Pretty much every student has expended considerable Tenacity on the wrong stuff, or in less than optimal ways. The more students master how to funnel the right kind of Tenacity and overall GRIT toward their most worthy goals, the more likely they are to thrive and succeed.

    • If you utterly refused to quit, and were to give this goal your best-ever effort, how would you attack it even better this time?
    • How can you re-engage toward and go after your goal in a way that is most beneficial, even elevating, to those around you?
    • If your life depended on you sticking to and achieving this goal, what steps would you take now, that you’ve not yet taken?

    How do we equip students to stay on the path, no matter what occurs—from natural disasters to simple, everyday adversity? Growth, Resilience, Instinct, and Tenacity spell more than GRIT. They spell ownership. And they transcend plain old advice (even the god-fatherly kind).

    While each of these dimensions is powerful on its own, when we weave them together they become the four actionable facets of GRIT that not only fortify students, but can also permanently instill in them a lifelong sense of ownership for learning, for making important decisions, and for contributing something of value to their own lives and their society.

     
  • 3 steps to upgrade your GRIT in education

    By Paul G. Stoltz, Ph.D., Author

    Grit is a powerful tool to help you achieve your goals, but as we know, it can sometimes fall short. Worse yet, using it the wrong way can backfire and even lead to real trouble. Consider this “fall short” and “backfire” conversation I overheard just last week.

    “What’s your grit and resilience strategy?” the Provost at a premier regional college asked his cross-town colleague at a college fundraising dinner I recently attended. The question instantly caught my ear and my eye. I was struck by both the ease with which this clearly loaded question fell from his lips, as well as the relaxed assumptiveness with which it was received.

    “Ah, well, you know, there’s so much talk and information about grit out there now, but honestly, we’re not sure what we think about it yet. Of course we’ve had our people watch the videos, read the books, start talking to each other about it more…at least the basics, you know? But frankly, results seem mixed, at best.

    “Get this! We had one student repeatedly camp on the doorstep of the Registrar’s Office, apparently in an effort to get his grade changed, because he thought he could get what he wanted just by refusing to take no (or a bad grade) for an answer. When it was explained to him repeatedly that this wasn’t the best strategy and his grade was actually determined by his professor, the student somewhat tone-deafly responded, ‘Got too much grit to quit!'”

    “That’s an amazing story,” the Provost replied. “Good to know. Honestly, you’re way ahead of us. We’re still exploring all the options on what we might pursue with grit, but your example will definitely help.”

    So what’s your grit and resilience strategy for your institution? And how do you avoid the dreaded and increasingly common “mixed results” or backfire conundrum? How do you minimize the potential downside of students misusing their grit and maximize the vital upside that will make them successful and productive? Here are three simple steps to Upgrade Your GRIT™ in Education.

    Step One: Shatter the “More is Better” Grit Myth

    Arguably one of the most dangerous assumptions when it comes to grit is the burgeoning belief that “more is better, more is more.” It’s nearly everywhere. “We just gotta show more grit!” Dabo Swinney, Clemson University’s football coach, declared after a heartbreaking loss.

    In another instance, I was asked by a faculty member at a Texas university, “Dr. Stoltz, how do we help our students grow and show more grit?” This is not an uncommon question. One I hear more and more.

    However, if just having more grit is so desirable, consider this simple provocation. First, think of the most dangerous person you’ve ever heard of or known. Second, ask yourself how much grit—determination, passion, and effort—they showed in pursuit of their nefarious goals. Next, ask yourself, is grit always and necessarily a good thing? For everyone? In all situations?

    The truth is that helping our students build higher and higher levels of grit guarantees next to nothing. Worse yet, it can lead to disaster. In truth, many students have plenty of grit. That’s not the issue. Their quantity of grit is not what’s getting in their way. It’s the quality of their GRIT that may be hobbling their efforts, progress, and success.

    To free yourself from the “more is better” myth, ask yourself and/or your team a simple question: What matters more – the quantity or the quality of your students’ grit? When it comes to the kind of students we want to grow, the kind of lives we’d like them to live, and the contributions we’d like them to make in the world, do we want them to use their growth mindset, resilience, instinct, and tenacity to not merely achieve their goals but also to show their consideration for other people, for their environment, and for the general good?

    Ready for a bizarre, if not impossible, statistic? I’ve asked this exact question of more than 500,000 people across six continents, and one hundred percent respond resoundingly with “Quality!” 100 percent. That’s stunning. And each time I test it, I get the same result. When it comes to GRIT, remember: Quantity is what we require, but Quality takes us higher.

    Step Two: Foster Smart GRIT

    “But I worked really hard on this!” How many times have students said that to defend work or a test that wasn’t as good as it should be? Don’t forget its anemic sibling, “I stayed up all night (or ‘spent all weekend’) studying for this test!” “Doesn’t my effort count?” they complain.

    What I sometimes call “Smart” and “Dumb” GRIT can be re-labeled “Effective” and “Ineffective” GRIT. Does urging our students to just try harder, to pour more effort and energy into the task always lead to the best results? More importantly, does it best serve our students as they try to make progress in an occasionally puzzling world? What if, instead, we taught them how to use ever-more thoughtful, intelligent, effective GRIT—the kind that accelerates and enhances their success—especially for the most daunting, long-term, challenging assignments, projects, and tasks?

    Shifting students’ focus from a concern with “how much or how hard can I try” to asking the questions “How else can I achieve my goal?” and “How can I do this even better?” can lead to profound revelations for them. By encouraging them to consider rational, creative, or more efficient alternatives when they get stuck, or new ways to solve problems that might yield an even greater result, we begin to equip our students for the adversity-rich, highly demanding world of work, where they will be rewarded mainly for how well they achieve their goals, not for how much sheer effort or drive they expend in pursuit of them.

    Step Three: Grow Good GRIT

    Ever see that high-achieving student whose classmates find him hard to be around or to work with? What about the ones whose higher marks only seem to lower their classmates’ desire to pay attention to their comments or to join their group projects?

    We’ve all experienced the boss, colleague, or student who has plenty of GRIT but goes after goals in ways that hinder, even hurt others. Consider the powerful difference between Bad and Good GRIT. Bad GRIT happens when a person goes after goals in ways that are intentionally or unintentionally detrimental to others. Good GRIT is of course the opposite: its hallmark is pursuing goals in ways that take other people and their goals into consideration or working in teams in ways that allow all participants to benefit. Pretty much everyone I know, me included, has demonstrated Bad GRIT, despite the best of intentions. That’s pretty humbling.

    Good GRIT happens when we go after our goals in ways that are ultimately beneficial, and ideally elevating, to those around us. It’s the attitude rock star Bruce Springsteen invokes as he ends his concerts: “Nobody wins unless everybody wins.”

    Teaching students the difference between Good and Bad GRIT is arguably one of the most potent and important lessons we can impart. Awakening them to the power and potential of Good GRIT is elemental to us graduating not just decent students, but good citizens.

    Long after they return their caps and gowns, it is the quality of our students’ GRIT that determines how they will navigate life’s ups and downs and what kind of mark they will make in their community, their workplace, and their world.

     

  • The Networked University

    By Denis Hurley, Director of Future Technologies, Pearson

    From tomorrow through Friday (31 Oct-3 Nov), you can visit Pearson’s booth (#401) at Educause to learn about how the student of the future may navigate her learning experiences through networked universities with the assistance of Pearson’s digital products and services.

    This scenario is based on The Networked University: Building Alliances for Innovation in Higher Education, written by Jeff Selingo, which imagines institutions of higher education strengthening their own offerings and improving learner outcomes through greater collaboration rather than competition.

    Pearson’s partnership with IBM Watson, our mixed reality applications created for HoloLens, and our digital badging platform Acclaim are just a few of the ways we are empowering students to make the most of emerging technologies.

    Since its inception, the Future Technologies program at Pearson has explored many of these technologies while considering how our education systems can evolve. We continue to scan the horizon for new opportunities, and we are always learning.

    If you are unable to attend Educause, check out the video below and follow Olivia’s journey from discovery and enrollment through lifelong learning:

  • Chirons will lead us out of the AI Technopanic (and you can be a chiron)

    By Denis Hurley, Director of Future Technologies, Pearson

    Now more than ever, faster than ever, technology is driving change. The future is an unknown, and that scares us. However, we can overcome these fears and use these new technologies to better equip ourselves and steer in a positive direction.

    Language evolves, and understanding these changes is crucial to learning how to communicate effectively. Like almost all change, it’s best to embrace it rather than try in vain to reject it.

    For example, it appears as though I’m on the losing side in the popular definition of the term “mixed reality.” Sorry, Mr. Milgram — I’ve given in.

    Technopanic

    A technopanic is extreme fear of a new technology and the changes it may bring. Consider the Luddites, who destroyed machinery in the early 19th century. The only constant is change, so they had little success slowing down the Industrial Revolution. In recent history, think of Y2K. This was a little different because we feared that new technology had been embraced without our full understanding of the consequences. Of course, we proceeded into the new millennium without our computer systems plunging civilization back into the Dark Ages.

    Last year, the BBC compiled a list of some of history’s greatest technopanics. One of my favorites was the fear that telephone lines would be used by evil spirits as a means of entry into unsuspecting humans who were just trying to walk grandma through how to use her new light bulbs.

    Our current technopanic is about artificial intelligence and robotics. I am not saying this fear is unreasonable. We don’t know how this will play out, and it appears as though many jobs will no longer be necessary in the near future. However, expending too much energy on fear is not productive, and the most dire outcomes are unlikely. The Guardian produced this clever and amusing short about artificial intelligence:

    Working with New Technology

    The Replacements

    Narrow artificial intelligence is now prevalent, which means programs are better than humans at performing specific tasks. Perhaps the most famous example is IBM’s Deep Blue defeating Garry Kasparov, then the world chess champion, in 1997. Today, complex algorithms outperform humans at driving and analyzing lab results, among many other things.

    Robots, which are stronger, larger (or smaller), and do not get bored or sick or go on strike, have been replacing humans for hundreds of years. They can fly and work through the night for days on end or longer.

    Can Humans Compete?

    Spending too much energy on searching for an answer to this question is a waste of time. We should not see progress as a competitor or as an enemy. These are tools we can use.

    Augmenting Ourselves

    Cyborgs: For many people, this is the word that comes to mind when reading the heading above. While the word makes us think of science fiction, we have been implanting devices in our bodies for decades. But we can now control artificial limbs directly from our brains, bypassing the spinal cord.

    More “extreme” cyborgs do exist, such as Neil Harbisson, who can hear colors via an antenna implanted in his skull. Transhumanists aim to overcome human limitations through science and technology.

    Becoming a cyborg is not practical, desirable, or even feasible for many of you. It’s also not necessary.

    Cobots: A cobot is a robot designed to work interactively with a human in a shared workspace. Lately, some people have been using the word to refer to the human who works with robots or to the unified entity itself.

    I don’t think the new definition of this word is useful. When referring to a specific type of robot, it has practical use.

    Centaurs: After Kasparov lost to Deep Blue, he understood the potential of humans working with machines. He created a new form of chess called “centaur chess” or “freestyle chess.” Teams can consist of all humans, all algorithms, or a combination (a centaur). The champion has almost always been a centaur. Kasparov saw the value of combining what humans do best with what machines do best.

    We Should Strive to Be Chirons

    In Greek mythology, centaurs tended to be unruly, amoral, and violent. When considering a blend of human abilities and machine abilities, a potential outcome is losing our sense of humanity.

    Chiron was a sensitive and refined centaur in Greek mythology. He taught and nurtured youth, most notably, Achilles.

    In the context of maintaining sanity through this technopanic and, more generally, coping with technological change, Chiron embodies the centaur we should aspire to be.

    In regard to how we should manage technology-induced fear (reaction, interaction, and creative acceptance), this would be the third stage. We all need to strive to be chirons. For our own sake, this is critical to lifelong learning. For the sake of our youth, we need to be able to demonstrate constructive and responsible use of technology.

    At Educause 2017, we will explore how new technologies can impact the future of higher education and student success. Discover opportunities to engage with Pearson at the conference and drive these critical conversations.

     

  • Is ed tech really working? 5 core tenets to rethink how we buy, use, and measure new tools

    By Todd Bloom, David Deschryver, Pam Moran, Chrisandra Richardson, Joseph South, Katrina Stevens

    This is the fifth in a series of essays surrounding the EdTech Efficacy Research Symposium, a gathering of 275 researchers, teachers, entrepreneurs, professors, administrators, and philanthropists to discuss the role efficacy research should play in guiding the development and implementation of education technologies. This series was produced in partnership with Pearson, a co-sponsor of the symposium co-hosted by the University of Virginia’s Curry School of Education, Digital Promise, and the Jefferson Education Accelerator. Click through to read the first, second, third, and fourth pieces.

    Education technology plays an essential role in our schools today. Whether the technology supports instructional intervention, personalized learning, or school administration, the successful application of that technology can dramatically improve productivity and student learning.

    That said, too many school leaders lack the support they need to ensure that educational technology investment and related activities, strategies, or interventions are evidence-based and effective. This gap between opportunity and capacity is undermining the ability of school leaders to move the needle on educational equity and to execute on the goals of today’s K-16 policies. The education community needs to clearly understand this gap and take some immediate steps to close it.

    The time is ripe

    The new federal K-12 law, the Every Student Succeeds Act, elevates the importance of evidence-based practices in school purchasing and implementation. The use of the state’s allocation for school support and improvement illustrates the point. Schools that receive these funds must invest only in activities, strategies, or interventions that demonstrate a statistically significant effect on improving student outcomes or other relevant outcomes.

    That determination must rely on research that is well designed and well implemented, as defined in the law. And once implementation begins, the U.S. Department of Education asks schools to focus on continuous improvement by collecting information about the implementation and making necessary changes to advance the goals of equity and educational opportunity for at-risk students. The law, in short, links compliance with evidence-based procurement and implementation that is guided by continuous improvement.

    New instructional models in higher education rely on evidence-based practices if they are to take root. School leaders are under intense pressure to find ways to make programs more affordable, student-centered, and valuable to a rapidly changing labor market. Competency-based education (the unbundling of certificates and degrees into discrete skills and competencies) is one of the better-known responses to the challenge, but the model will likely stay experimental until there is more evidence of success.

    “We are still just beginning to understand CBE,” Southern New Hampshire University President Paul LeBlanc said. “Project-based learning, authentic learning, well-done assessment rubrics — those are all good efforts, but do we have the evidence to pass muster with a real assessment expert? Almost none of higher ed would.”

    It is easy to forget that the abundance of educational technology is a relatively new thing for schools and higher ed institutions. Back in the early 2000s, the question was how to make new educational technologies viable instructional and management tools. Education data was largely just a lagging measure used for school accountability and reporting.

    Today, the data can provide strong, real-time signals that advance productivity through, for example, predictive analytics, personalized learning, curriculum curating and delivery, and enabling the direct investigation into educational practices that work in specific contexts. The challenge is how to control and channel the deluge of bytes and information streaming from the estimated $25.4 billion K-16 education technology industry.

    “It’s [now] too easy to go to a conference and load up at the buffet of innovations. That’s something we try hard not to do,” said Chad Ratliff, director of instructional programs for Virginia’s Albemarle County Schools. The information has to be filtered and vetted, which takes time and expertise.

    Improving educational equity is the focus of ESSA, the Higher Education Act, and a key reason many school leaders chose to work in education. Moving the needle increasingly relies on evidence-based practices. As the Aspen Institute and Council of Chief State School Officers point out in a recent report, equity means — at the very least — that “every student has access to the resources and educational rigor they need at the right moment in their education despite race, gender, ethnicity, language, disability, family background, or family income.”

    Embedded in this is the presumption that the activities, strategies, or interventions actually work for the populations they intend to benefit.

    Educators cannot afford to invest in ineffective activities. At the federal K-12 level, President Donald Trump is proposing that, next year, Congress cut spending for the Education Department and eliminate many programs, including $2.3 billion for professional development programs, $1.2 billion for after-school funds, and the new Title IV grant that explicitly supports evidence-based and effective technology practices in our schools.

    Higher education is also in a tight spot. The president seeks to cut spending in half for Federal Work-Study programs, eliminate Supplemental Educational Opportunity grants, and take nearly $4 million from the Pell Grant surplus for other government spending. At the same time, Education Secretary Betsy DeVos is reviewing all programs to explore which can be eliminated, reduced, consolidated, or privatized.

    These proposed cuts and reductions increase the urgency for school leaders to tell better stories about the ways they use the funds to improve educational opportunities and learning outcomes. And these stories are more compelling (and protected from budget politics) when they are built upon evidence.

    Too few resources

    While this is a critical time for evidence-based and effective program practices, here is the rub: The education sector is just beginning to build out this body of knowledge, so school leaders are often forging ahead without the kind of guidance and research they need to succeed.

    The challenges are significant and evident throughout the education technology life cycle. For example, it is clear that evidence should influence procurement standards, but that is rarely the case. The issue of “procurement standards” is linked to cost thresholds and related competitive and transparent bidding requirements. It is seldom connected with measures of prior success and research related to implementation and program efficacy. Those types of standards are foreign to most state and local educational agencies, left to “innovative” educational agencies and organizations, like Digital Promise’s League of Innovative Schools, to explore.

    Once the trials of implementation begin, school leaders and their vendors typically act without clear models of success and in isolation. There just are not good data on efficacy for most products and implementation practices, which means that leaders cannot avail themselves of models of success and networks of practical experience. Some schools and institutions with the financial wherewithal, like Virginia’s Albemarle and Fairfax County Public Schools, have created their own research process to produce their own evidence.

    In Albemarle, for example, learning technology staff test-bed solutions to instructional and enterprise needs. Staff spend time observing students and staff using new devices and cloud-based services. They seek feedback and performance data from both teachers and students in response to questions about the efficacy of the solution. They will begin with questions like “If a service is designed to support literacy development, what variable are we attempting to affect? What information do we need to validate significant impact?” Yet, like the “innovators” of procurement standards, these are the exceptions to the rule.

    And as schools make headway and immerse themselves in new technologies and services, the bytes of data and useful information multiply, but the time and capacity necessary to make them useful remains scarce. Most schools are not like Fairfax and Albemarle counties. They do not have the staff and experts required to parse the data and uncover meaningful insights into what’s working and what’s not. That kind of work and expertise isn’t something that can be simply layered onto existing responsibilities without overloading and possibly burning out staff.

    “Many schools will have clear goals, a well-defined action plan that includes professional learning opportunities, mentoring, and a monitoring timeline,” said Chrisandra Richardson, a former associate superintendent for Montgomery County Public Schools in Maryland. “But too few schools know how to exercise a continuous improvement mindset, how to continuously ask: ‘Are we doing what we said we would do — and how do we course-correct if we are not?’ ”

    Immediate next steps

    So what needs to be done? Here are five specific issues that the education community (philanthropies, universities, vendors, and agencies) should rally around.

    • Set common standards for procurement. If every leader must reinvent the wheel when it comes to identifying key elements of the technology evaluation rubric, we will ensure we make little progress — and do so slowly. The sector should collectively secure consensus on the baseline procurement standards for evidence-based and research practices and provide them to leaders through free or open-source evaluative rubrics or “look fors” they can easily access and employ.
    • Make evidence-based practice a core skill for school leadership. Every few years, leaders in the field try to pin down exactly what core competencies every school leader should possess (or endeavor to develop). If we are to achieve a field in which leaders know what evidence-based decision-making looks like, we must incorporate it into professional standards and include it among our evaluative criteria.
    • Find and elevate exemplars. As Charles Duhigg points out in his recent best seller Smarter Faster Better, productive and effective people do their work with clear and frequently rehearsed mental models of how something should work. Without them, decision-making can become unmoored, wasteful, and sometimes even dangerous. Our school leaders need to know what successful evidence-based practices look like. We cannot anticipate that leader or educator training will incorporate good decision-making strategies around education technologies in the immediate future, so we should find alternative ways of showcasing these models.
    • Define “best practice” in technology evaluation and adoption. Rather than force every school leader to develop and struggle to find funds to support their own processes, we can develop models that can alleviate the need for schools to develop and invest in their own research and evidence departments. Not all school districts enjoy resources to investigate their own tools, but different contexts demand differing considerations. Best practices help leaders navigate variation within the confines of their resources. The Ed Tech RCE Coach is one example of a set of free, open-source tools available to help schools embed best practices in their decision-making.
    • Promote continuous evaluation and improvement. Decisions, even the best ones, have a shelf life. They may seem appropriate until evidence proves otherwise. But without a process to gather information and assess decision-making efficacy, it’s difficult to learn from any decisions (good or bad). Together, we should promote school practices that embrace continuous research and improvement practices within and across financial and program divisions to increase the likelihood of finding and keeping the best technologies.

    The urgency to learn about and apply evidence to buying, using, and measuring success with ed tech is pressing, but the resources and protocols school leaders need to make it happen are scarce. These are conditions that position our school leaders for failure — unless the education community and its stakeholders get together to take some immediate actions.

    This series is produced in partnership with Pearson. The 74 originally published this article on September 11th, 2017, and it was re-posted here with permission.

  • Communicate often and better: How to make education research more meaningful

    By Jay Lynch, PhD and Nathan Martin, Pearson

    Question: What do we learn from a study that shows a technique or technology likely has affected an educational outcome?

    Answer: Not nearly enough.

    Despite widespread criticism, the field of education research continues to emphasize statistical significance—rejecting the conclusion that chance is a plausible explanation for an observed effect—while largely neglecting questions of precision and practical importance. Sure, a study may show that an intervention likely has an effect on learning, but so what? Even researchers’ recent efforts to estimate the size of an effect don’t answer key questions. What is the real-world impact on learners? How precisely is the effect estimated? Is the effect credible and reliable?

    Yet it’s the practical significance of research findings that educators, administrators, parents and students really care about when it comes to evaluating educational interventions. This has led to what Russ Whitehurst has called a “mismatch between what education decision makers want from the education research and what the education research community is providing.”

    Unfortunately, education researchers are not expected to interpret the practical significance of their findings or acknowledge the often embarrassingly large degree of uncertainty associated with their observations. So, education research literature is filled with results that are almost always statistically significant but rarely informative.
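    To see why “statistically significant” can still be uninformative, consider the toy calculation below. It is our own illustrative sketch, not an analysis drawn from any study mentioned here, and every number in it is invented: with 50,000 students per group, a half-point difference on a 100-point assessment sails past the conventional p < .05 threshold, yet the standardized effect is only about four hundredths of a standard deviation.

    import math

    # Hypothetical summary statistics for a very large two-group study
    # (all values are made up for illustration).
    n1 = n2 = 50_000              # students per condition
    mean_t, mean_c = 70.5, 70.0   # average scores on a 100-point assessment
    sd = 12.0                     # common standard deviation

    diff = mean_t - mean_c
    se_diff = sd * math.sqrt(1 / n1 + 1 / n2)   # standard error of the difference
    z = diff / se_diff                          # about 6.6, far beyond the 1.96 cutoff
    p_two_sided = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    d = diff / sd                               # Cohen's d, about 0.04

    print(f"p = {p_two_sided:.1e}   (comfortably 'significant')")
    print(f"Cohen's d = {d:.2f}     (about 1/25 of a standard deviation)")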

    Early evidence suggests that many edtech companies are following the same path. But we believe that they have the opportunity to change course and adopt more meaningful ways of interpreting and communicating research that will provide education decision makers with the information they need to help learners succeed.

    Admitting What You Don’t Know

    For educational research to be more meaningful, researchers will have to acknowledge its limits. Although published research often projects a sense of objectivity and certainty about study findings, accepting subjectivity and uncertainty is a critical element of the scientific process.

    On the positive side, some researchers have begun to report what is known as standardized effect sizes, a calculation that helps compare outcomes in different groups on a common scale. But researchers rarely interpret the meaning of these figures. And the figures can be confusing. A ‘large’ effect actually may be quite small when compared to available alternatives or when factoring in the length of treatment, and a ‘small’ effect may be highly impactful because it is simple to implement or cumulative in nature.

    Confused? Imagine the plight of a teacher trying to decide what products to use, based on evidence—an issue of increased importance since the Every Student Succeeds Act (ESSA) promotes the use of federal funds for certain programs, based upon evidence of effectiveness. The newly-launched Evidence for ESSA admirably tries to help support that process, complementing the What Works Clearinghouse and pointing to programs that have been deemed “effective.” But when that teacher starts comparing products, say Math in Focus (effect size: +0.18) and Pirate Math (effect size: +0.37), the best choice isn’t readily apparent.

    It’s also important to note that every intervention’s observed “effect” is associated with a quantifiable degree of uncertainty. By glossing over this fact, researchers risk promoting a false sense of precision and making it harder to craft useful data-driven solutions. While acknowledging uncertainty is likely to temper excitement about many research findings, in the end it will support more honest evaluations of an intervention’s likely effectiveness.
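    The uncertainty can be made just as concrete. The sketch below is again our own illustration with invented numbers, not data from any study cited here, and it uses a standard large-sample approximation for the confidence interval: a modest pilot study of 40 students per condition produces a respectable-looking effect size of 0.25, but the 95% confidence interval around it stretches from roughly -0.19 to +0.69, so everything from a small negative effect to a large positive one is consistent with the data.

    import math

    # Hypothetical summary statistics for a small pilot study
    # (all values are made up for illustration).
    n1 = n2 = 40                  # students per condition
    mean_t, mean_c = 71.0, 68.0   # average scores on a 100-point assessment
    sd_t = sd_c = 12.0            # group standard deviations

    pooled_sd = math.sqrt(((n1 - 1) * sd_t**2 + (n2 - 1) * sd_c**2) / (n1 + n2 - 2))
    d = (mean_t - mean_c) / pooled_sd           # Cohen's d = 0.25

    # Approximate standard error of d and a 95% confidence interval
    se_d = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    low, high = d - 1.96 * se_d, d + 1.96 * se_d

    print(f"d = {d:.2f}, 95% CI = [{low:.2f}, {high:.2f}]")   # roughly [-0.19, 0.69]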

    Communicate Better, Not Just More

    In addition to faithfully describing the practical significance and uncertainty around a finding, there also is a need to clearly communicate information regarding research quality, in ways that are accessible to non-specialists. There has been a notable unwillingness in the broader educational research community to tackle the challenge of discriminating between high-quality research and quackery for educators and other non-specialists. There is thus a long-overdue need for educational researchers to be forthcoming about the quality and reliability of interventions in ways that educational practitioners can understand and trust.

    Trust is the key. Whatever issues might surround the reporting of research results, educators are suspicious of people who have never been in the classroom. If a result or debunked academic fad (e.g. learning styles) doesn’t match their experience, they will be tempted to dismiss it. As education research becomes more rigorous, relevant, and understandable, we hope that trust will grow. Even simply categorizing research as either “replicated” or “unchallenged” would be a powerful initial filtering technique given the paucity of replication research in education. The alternative is to leave educators and policy-makers intellectually adrift, susceptible to whatever educational fad is popular at the moment.

    At the same time, we have to improve our understanding of how consumers of education research understand research claims. For instance, surveys reveal that even academic researchers commonly misinterpret the meaning of common concepts like statistical significance and confidence intervals. As a result, there is a pressing need to understand how those involved in education interpret (rightly or wrongly) common statistical ideas and decipher research claims.

    A Blueprint For Change

    So, how can the education technology community help address these issues?

    Despite the money and time spent conducting efficacy studies on their products, surveys reveal that research often plays a minor role in edtech consumer purchasing decisions. The opaqueness and perceived irrelevance of edtech research studies, which mirror the reporting conventions typically found in academia, no doubt contribute to this unfortunate fact. Educators and administrators rarely possess the research and statistical literacy to interpret the meaning and implications of research focused on claims of statistical significance and measuring indirect proxies for learning. This might help explain why even well-meaning educators fall victim to “learning myths.”

    And when nearly every edtech company is amassing troves of research studies, all ostensibly supporting the efficacy of their products (with the quality and reliability of this research varying widely), it is understandable that edtech consumers treat them all with equal incredulity.

    So, if the current edtech emphasis on efficacy is going to amount to more than a passing fad and avoid devolving into a costly marketing scheme, edtech companies might start by taking the following actions:

    • Edtech researchers should interpret the practical significance and uncertainty associated with their study findings. The researchers conducting an experiment are best qualified to answer interpretive questions around the real-world value of study findings and we should expect that they make an effort to do so.
    • As an industry, edtech needs to work toward adopting standardized ways to communicate the quality and strength of evidence as it relates to efficacy research. The What Works Clearinghouse has made important steps, but it is critical that relevant information is brought to the point of decision for educators. This work could resemble something like food labels for edtech products.
    • Researchers should increasingly use data visualizations to make complex findings more intuitive while making additional efforts to understand how non-specialists interpret and understand frequently reported statistical ideas.
    • Finally, researchers should employ direct measures of learning whenever possible rather than relying on misleading proxies (e.g., grades or student perceptions of learning) to ensure that the findings reflect what educators really care about. This also includes using validated assessments and focusing on long-term learning gains rather than short-term performance improvement.

    This series is produced in partnership with Pearson. EdSurge originally published this article on April 1, 2017, and it was re-posted here with permission.

     

  • Technical & human problems with anthropomorphism & technopomorphism

    By Denis Hurley, Director of Future Technologies, Pearson

    Anthropomorphism is the attribution of human traits, emotions, and intentions to non-human entities (OED). It has been used in storytelling from Aesop to Zootopia, and people debate its impact on how we view gods in religion and animals in the wild. This is out of scope for this short piece.

    When it comes to technology, anthropomorphism is certainly more problematic than it is useful. Here are three examples:

    1. Consider how artificial intelligence is often described as if it were a human brain, which is not how AI works. This results in people misunderstanding its potential uses, attempting to apply it in inappropriate ways, and failing to consider applications where it could provide more value. Ines Montani has written an excellent summary on AI’s PR problem.
    2. More importantly, anthropomorphism contributes to our fear of progress, which often leads to full-blown technopanics. We are currently in a technopanic brought about by the explosion of development in automation and data science. Physically, these machines are often depicted as bipedal killing machines, which is not even the most effective form of mobility for a killing machine. Regarding intent, superintelligent machines are thought of as a threat not just to employment but to our survival as a species. This assumes that these machines will treat Homo sapiens much as Homo sapiens has treated other species on this planet.
    3. Pearson colleague Paul Del Signore asked via Twitter, “Would you say making AI speak more human-like is a successful form of anthropomorphism?” This brings to mind a third major problem with anthropomorphism: the uncanny valley. While adding humanlike interactions can contribute to good UX, too much (but not quite enough) similarity to a human can result in frustration, discomfort, and even revulsion.

    Historically, we have used technology to achieve both selfish and altruistic goals. Overwhelmingly, however, technology has helped us reach a point in human civilization in which we are the most peaceful and healthy in history. In order to continue on this path, we must design machines to function in ways that utilize their best machine-like abilities.

    Technopomorphism is the attribution of technological characteristics to human traits, emotions, intentions, or biological functions. Think of how a thought process may be described as cogs turning in a machine, or how someone’s capacity for work may be described as bandwidth.

    A Google search for the term “technopomorphism” only returns 40 results, and it is not listed in any online dictionary. However, I think the term is useful because it helps us to be mindful of our difference from machines.

    It’s natural for humans to use imagery that we do understand to try to describe things we don’t yet understand, like consciousness. Combined with our innate fear of dying, we imagine ways of deconstructing and reconstructing ourselves as immortal or as one with technology (singularity). This is problematic for at least two reasons:

    1. It restricts the ways in which we may understand new discoveries about ourselves to very limited forms.
    2. It often leads to teaching and training humans to function as machines, which is not the best use of our potential as humans.

    It is increasingly important that we understand how humans can best work with technology for the sake of learning. In the age of exponential technologies, that which makes us most human will be most highly valued for employment and is often used for personal enrichment.

    There may be some similarities, but we’re not machines. At least, not yet. In the meantime, I advocate for “centaur mentality.”

     

  • Can Edtech support - and even save - educational research?

    By Jay Lynch, PhD and Nathan Martin, Pearson

    There is a crisis engulfing the social sciences. What was thought to be known about psychology—based on published results and research—is being called into question by new findings and the efforts of groups like the Reproducibility Project. What we know is under question, and so is how we come to know it. Long-institutionalized practices of scientific inquiry in the social sciences are being actively questioned, with proposals put forth for needed reforms.

    While the fields of academia burn with this discussion, education has remained largely untouched. But education is not immune to problems endemic in fields like psychology and medicine. In fact, there’s a strong case that the problems emerging in other fields are even worse in educational research. External or internal critical scrutiny has been lacking. A recent review of the top 100 education journals found that only 0.13% of published articles were replication studies. Education waits for its own crusading Brian Nosek to disrupt the canon of findings. Winter is coming.

    This should not be breaking news. Education research has long been criticized for its inability to generate a reliable and impactful evidence base. It has been derided for problematic statistical and methodological practices that hinder knowledge accumulation and encourage the adoption of unproven interventions. For its failure to communicate the uncertainty and relevance associated with research findings, like Value-Added Measures for teachers, in ways that practitioners can understand. And for struggling to impact educational habits (at least in the US) and how we develop, buy, and learn from (see Mike Petrilli’s summation) the best practices and tools.

    Unfortunately, decades of withering criticism have done little to change the methods and incentives of educational research in ways necessary to improve the reliability and usefulness of findings. The research community appears to be in no rush to alter its well-trodden path—even if the path is one of continued irrelevance. Something must change if educational research is to meaningfully impact teaching and learning. Yet history suggests the impetus for this change is unlikely to originate from within academia.

    Can edtech improve the quality and usefulness of educational research? We may be biased (as colleagues at a large and scrutinized edtech company), but we aren’t naïve. We know it might sound farcical to suggest technology companies may play a critical role in improving the quality of education research, given almost weekly revelations about corporations engaging in concerted efforts to distort and shape research results to fit their interests. It’s shocking to read about efforts to warp public perception on the effects of sugar on heart disease or the effectiveness of antidepressants. It would be foolish not to view research conducted or paid for by corporations with a healthy degree of skepticism.

    Yet we believe there are signs of promise. The last few years have seen a movement of companies seeking to research and report on the efficacy of educational products. The movement benefited from the leadership of the Office of Educational Technology, the Gates Foundation, the Learning Assembly, Digital Promise, and countless others. Our own company has been on this road since 2013. (It’s not been easy!)

    These efforts represent opportunities to foment long-needed improvements in the practice of education research. A chance to redress education research’s most glaring weakness: its historical inability to appreciably impact the everyday activities of learning and teaching.

    Incentives for edtech companies to adopt better research practices already exist and there is early evidence of openness to change. Edtech companies possess a number of crucial advantages when it comes to conducting the types of research education desperately needs, including:

    • access to growing troves of digital learning data;
    • close partnerships with institutions, faculty, and students;
    • the resources necessary to conduct large and representative intervention studies;
    • in-house expertise in the diverse specialties (e.g., computer scientists, statisticians, research methodologists, educational psychologists, UX researchers, instructional designers, ed policy experts, etc.) that must increasingly collaborate to carry out more informative research;
    • a research audience consisting primarily of educators, students, and other non-specialists.

    The real worry with edtech companies’ nascent efforts to conduct efficacy research is not that they will fail to conduct research with the same quality and objectivity typical of most educational research, but that they will fall into the same traps that currently plague such efforts. Rather than looking for what would be best for teachers and learners, entrepreneurs may focus on the wrong measures (p-values, for instance) that obfuscate rather than enlighten.

    If this growing edtech movement repeats the follies of the current paradigm of educational research, it will fail to seize the moment to adopt reforms that can significantly aid our efforts to understand how best to help people teach and learn. And we will miss an important opportunity to enact systemic changes in research practice across the edtech industry with the hope that academia follows suit.

    Our goal over the next three articles is to hold a mirror up, highlighting several crucial shortcomings of educational research. These institutionalized practices significantly limit its impact and informativeness.

    We argue that edtech is uniquely incentivized and positioned to realize long-needed research improvements through its efficacy efforts.

    Independent education research is a critical part of the learning world, but it needs improvement. It needs a new role model, its own George Washington Carver, a figure willing to test theories in the field, learn from them, and then communicate them back to practitioners. In particular, we will be focusing on three key ideas:

    Why ‘What Works’ Doesn’t: Education research needs to move beyond simply evaluating whether or not an effect exists; that is, whether an educational intervention ‘works’. The ubiquitous use of null hypothesis significance testing in educational research is an epistemic dead end. Instead, education researchers need to adopt more creative and flexible methods of data analysis, focus on identifying and explaining important variations hidden under mean scores, and devote themselves to developing robust theories capable of generating testable predictions that are refined and improved over time.

    Desperately Seeking Relevance: Education researchers are rarely expected to interpret the practical significance of their findings or report results in ways that are understandable to non-specialists making decisions based on their work. Although there has been progress in encouraging researchers to report standardized mean differences and correlation coefficients (i.e., effect sizes), this is not enough. In addition, researchers need to clearly communicate the importance of study findings within the context of alternative options and in relation to concrete benchmarks, openly acknowledge uncertainty and variation in their results, and refuse to be content measuring misleading proxies for what really matters.

    Embracing the Milieu: For research to meaningfully impact teaching and learning, it will need to expand beyond an emphasis on controlled intervention studies and prioritize the messy, real-life conditions facing teachers and students. More energy must be devoted to the creative and problem-solving work of translating research into useful and practical tools for practitioners, an intermediary function explicitly focused on inventing, exploring, and implementing research-based solutions that are responsive to the needs and constraints of everyday teaching.

    Ultimately, education research is about more than just publication. It’s about improving the lives of students and teachers. We don’t claim to have the complete answers but, as we expand on these key principles over the coming weeks, we want to offer steps edtech companies can take to improve the quality and value of educational research. These are things we’ve learned and things we are still learning.

    This series is produced in partnership with Pearson. EdSurge originally published this article on January 6, 2017, and it was re-posted here with permission.