Teaching and Learning blog

Explore insights, trends, and research that impact teaching, learning, and leading.



    5 chats you don't want to miss from Educause

    By Caroline Leary, Manager, Pearson

    This year at Educause, Erick Jenkins, an East Carolina University student and Pearson Campus Ambassador, and Jenn Rosenthal, a community manager at Pearson, went behind the scenes to learn what was top of mind for the people shaping the best thinking in higher education IT.

    Erick and Jenn spoke with digital learning advocates about the latest and greatest in digital learning and what exactly that means for students, educators, and institutions.

    Together, they demystified Inclusive Access, discussed the importance of 21st century skills, engaged with cognitive tutor extraordinaire – IBM Watson, and dove into the world of AR and mixed reality.

    Catch their interviews below and let us know what roles you see technology playing in the future (near or far) of education in the comments section.


    Erick and Jenn talk with Jeff Ehrlich, Director of Special Projects at Park University, about what exactly Inclusive Access is (hint: it’s more than eText) and the benefits it brings to students, educators, and institutions.

    Video: What is Direct Digital Access? #edu17

     

    Jenn chats with Leah Jewell, Pearson’s Head of Career Development and Employability, about the Career Success Program and the importance of developing strong personal and social capabilities.

    Video: Preparing Now: Career Success

     

    Erick gets a taste of how artificial intelligence can help students power through to success. Pearson’s Kaitlyn Banaszynski and Amy Wetzel introduce Erick to Watson – the cognitive tutor.

    Video: Student Perspective: Watson

     

    Jenn and Erick examine virtual patient Dave through HoloPatient using Microsoft HoloLens and chat with Mark Christian, Pearson’s Global Director of Immersive Learning, about how Pearson is using AR/VR to enhance learning.

    Video: HoloLens & Immersive Learning Innovations

     

    Erick sits down with Jenn and talks about how technology has played a role in his college experience.

    Video: Student Perspective: Educational Technology #EDU17

     


    How to engage tech-savvy students

    By Pearson

    From textbooks to laptops and whiteboards to smartboards, digital technologies continue to propel higher education forward. Instant access to information and various types of media and course materials creates a more dynamic and collaborative learning experience.

    Today’s tech-savvy learners are accustomed to instructors using technology to bolster curriculum and coursework. In fact, a majority of surveyed students (84%) understand that digital materials help solve issues facing higher education, according to “Digital appetite vs. what’s on the table,” a recent report surveying student attitudes toward digital course materials. And many (57%) also expect the onus to fall on the institution to shift from print to digital learning tools.

    Many higher education institutions are looking for new ways to integrate technology into their coursework. Recently, Maryville University, a private institution in St. Louis, MO, developed a digital learning program that provided iPads to their students—with great results.

    Ninety-four percent of faculty have integrated iPads into their courses, and 87% of students agree that technology has been instrumental in their success at the school. What’s more, enrollment increased by 17.7% over two years, in part due to the Digital Learning Program, reports Inside Higher Ed.

    Learn more about how digital learning can strengthen higher education institutions with this infographic, “Digital Learning: Your best teacher’s assistant.”


    The Networked University

    By Denis Hurley, Director of Future Technologies, Pearson

    From tomorrow through Friday (31 Oct-3 Nov), you can visit Pearson’s booth (#401) at Educause to learn about how the student of the future may navigate her learning experiences through networked universities with the assistance of Pearson’s digital products and services.

    This scenario is based on The Networked University: Building Alliances for Innovation in Higher Education, written by Jeff Selingo, which imagines institutions of higher education strengthening their own offerings and improving learner outcomes through greater collaboration rather than competition.

    Pearson’s partnership with IBM Watson, our mixed reality applications created for HoloLens, and our digital badging platform Acclaim are just a few of the ways we are empowering students to make the most of emerging technologies.

    Since its inception, the Future Technologies program at Pearson has explored many of these technologies while considering how our education systems can evolve. We continue to scan the horizon for new opportunities, and we are always learning.

    If you are unable to attend Educause, check out the video below and follow Olivia’s journey from discovery and enrollment through lifelong learning:


    Chirons will lead us out of the AI Technopanic (and you can be a chiron)

    By Denis Hurley, Director of Future Technologies, Pearson

    Now more than ever, faster than ever, technology is driving change. The future is an unknown, and that scares us. However, we can overcome these fears and use these new technologies to better equip ourselves and steer change in a positive direction.

    Language evolves, and understanding these changes is crucial to learning how to communicate effectively. Like almost all change, it’s best to embrace it rather than try in vain to reject it.

    For example, it appears as though I’m on the losing side in the popular definition of the term “mixed reality.” Sorry, Mr. Milgram — I’ve given in.

    Technopanic

    A technopanic is extreme fear of new technology and the changes it may bring. Consider the Luddites, who destroyed machinery in the early 19th century. The only constant is change, so they had little success slowing down the Industrial Revolution. In recent history, think of Y2K. That panic was a little different because we feared that new technology had been embraced without our full understanding of the consequences. Of course, we proceeded into the new millennium without our computer systems plunging civilization back into the Dark Ages.

    Last year, the BBC compiled a list of some of history’s greatest technopanics. One of my favorites was the fear that telephone lines would be used by evil spirits as a means of entry into unsuspecting humans who were just trying to walk grandma through how to use her new light bulbs.

    Our current technopanic is about artificial intelligence and robotics. I am not saying this fear is unreasonable. We don’t know how this will play out, and it appears as though many jobs will no longer be necessary in the near future. However, expending too much energy on fear is not productive, and the most dire outcomes are unlikely. The Guardian produced this clever and amusing short about artificial intelligence:

    Working with New Technology

    The Replacements

    Narrow artificial intelligence, meaning programs that outperform humans at specific tasks, is now prevalent. Perhaps the most famous example is IBM’s Deep Blue defeating Garry Kasparov, then the world chess champion, in 1997. Today, complex algorithms outperform humans at driving and analyzing lab results, among many other things.

    Robots, which are stronger, larger (or smaller), and do not get bored or sick or go on strike, have been replacing humans for hundreds of years. They can fly and work through the night for days on end or longer.

    Can Humans Compete?

    Spending too much energy on searching for an answer to this question is a waste of time. We should not see progress as a competitor or as an enemy. These are tools we can use.

    Augmenting Ourselves

    Cyborgs: For many people, this is the word that comes to mind when reading the heading above. While the word makes us think of science fiction, we have been implanting devices in our bodies for decades. But we can now control artificial limbs directly from our brains, bypassing the spinal cord.

    More “extreme” cyborgs do exist, such as Neil Harbisson, who can hear colors via an antenna implanted in his skull. Transhumanists aim to overcome human limitations through science and technology.

    Becoming a cyborg is not practical, desirable, or even feasible for many of you. It’s also not necessary.

    Cobots: A cobot is a robot designed to work interactively with a human in a shared workspace. Lately, some people have been using the word to refer to the human who works with robots or to the unified entity itself.

    I don’t think the new definition of this word is useful. When referring to a specific type of robot, it has practical use.

    Centaurs: After Kasparov lost to Deep Blue, he understood the potential of humans working with machines. He created a new form of chess called “centaur chess” or “freestyle chess.” Teams can consist of all humans, all algorithms, or a combination (a centaur). The champion has almost always been a centaur. Kasparov saw the value of combining what humans do best with what machines do best.

    We Should Strive to Be Chirons

    In Greek mythology, centaurs tended to be unruly, amoral, and violent. When considering a blend of human abilities and machine abilities, a potential outcome is losing our sense of humanity.

    Chiron was a sensitive and refined centaur in Greek mythology. He taught and nurtured youth, most notably Achilles.

    In the context of maintaining sanity through this technopanic and, more generally, coping with technological change, Chiron embodies the centaur we should aspire to be.

    Of the three stages of managing technology-induced fear (reaction, interaction, and creative acceptance), this is the third. We all need to strive to be chirons. For our own sake, this is critical to lifelong learning. For the sake of our youth, we need to be able to demonstrate constructive and responsible use of technology.

    At Educause 2017, we will explore how new technologies can impact the future of higher education and student success. Discover opportunities to engage with Pearson at the conference and drive these critical conversations.

     


    Is ed tech really working? 5 core tenets to rethink how we buy, use, and measure new tools

    By Todd Bloom, David Deschryver, Pam Moran, Chrisandra Richardson, Joseph South, Katrina Stevens

    This is the fifth in a series of essays surrounding the EdTech Efficacy Research Symposium, a gathering of 275 researchers, teachers, entrepreneurs, professors, administrators, and philanthropists to discuss the role efficacy research should play in guiding the development and implementation of education technologies. This series was produced in partnership with Pearson, a co-sponsor of the symposium co-hosted by the University of Virginia’s Curry School of Education, Digital Promise, and the Jefferson Education Accelerator. Click through to read the first, second, third, and fourth pieces.

    Education technology plays an essential role in our schools today. Whether the technology supports instructional intervention, personalized learning, or school administration, the successful application of that technology can dramatically improve productivity and student learning.

    That said, too many school leaders lack the support they need to ensure that educational technology investment and related activities, strategies, or interventions are evidence-based and effective. This gap between opportunity and capacity is undermining the ability of school leaders to move the needle on educational equity and to execute on the goals of today’s K-16 policies. The education community needs to clearly understand this gap and take some immediate steps to close it.

    The time is ripe

    The new federal K-12 law, the Every Student Succeeds Act, elevates the importance of evidence-based practices in school purchasing and implementation. The use of the state’s allocation for school support and improvement illustrates the point. Schools that receive these funds must invest only in activities, strategies, or interventions that demonstrate a statistically significant effect on improving student outcomes or other relevant outcomes.

    That determination must rely on research that is well designed and well implemented, as defined in the law. And once implementation begins, the U.S. Department of Education asks schools to focus on continuous improvement by collecting information about the implementation and making necessary changes to advance the goals of equity and educational opportunity for at-risk students. The law, in short, links compliance with evidence-based procurement and implementation that is guided by continuous improvement.

    In higher education, new instructional models must rely on evidence-based practices if they are to take root. School leaders are under intense pressure to find ways to make programs more affordable, student-centered, and valuable to a rapidly changing labor market. Competency-based education (the unbundling of certificates and degrees into discrete skills and competencies) is one of the better-known responses to the challenge, but the model will likely stay experimental until there is more evidence of success.

    “We are still just beginning to understand CBE,” Southern New Hampshire University President Paul LeBlanc said. “Project-based learning, authentic learning, well-done assessment rubrics — those are all good efforts, but do we have the evidence to pass muster with a real assessment expert? Almost none of higher ed would.”

    It is easy to forget that the abundance of educational technology is a relatively new thing for schools and higher ed institutions. Back in the early 2000s, the question was how to make new educational technologies viable instructional and management tools. Education data was largely just a lagging measure used for school accountability and reporting.

    Today, the data can provide strong, real-time signals that advance productivity through, for example, predictive analytics, personalized learning, curriculum curating and delivery, and enabling the direct investigation into educational practices that work in specific contexts. The challenge is how to control and channel the deluge of bytes and information streaming from the estimated $25.4 billion K-16 education technology industry.

    “It’s [now] too easy to go to a conference and load up at the buffet of innovations. That’s something we try hard not to do,” said Chad Ratliff, director of instructional programs for Virginia’s Albemarle County Schools. The information has to be filtered and vetted, which takes time and expertise.

    Improving educational equity is the focus of ESSA, the Higher Education Act, and a key reason many school leaders chose to work in education. Moving the needle increasingly relies on evidence-based practices. As the Aspen Institute and Council of Chief State School Officers point out in a recent report, equity means — at the very least — that “every student has access to the resources and educational rigor they need at the right moment in their education despite race, gender, ethnicity, language, disability, family background, or family income.”

    Embedded in this is the presumption that the activities, strategies, or interventions actually work for the populations they intend to benefit.

    Educators cannot afford to invest in ineffective activities. At the federal K-12 level, President Donald Trump is proposing that, next year, Congress cut spending for the Education Department and eliminate many programs, including $2.3 billion for professional development programs, $1.2 billion for after-school funds, and the new Title IV grant that explicitly supports evidence-based and effective technology practices in our schools.

    Higher education is also in a tight spot. The president seeks to cut spending in half for Federal Work-Study programs, eliminate Supplemental Educational Opportunity grants, and take nearly $4 billion from the Pell Grant surplus for other government spending. At the same time, Education Secretary Betsy DeVos is reviewing all programs to explore which can be eliminated, reduced, consolidated, or privatized.

    These proposed cuts and reductions increase the urgency for school leaders to tell better stories about the ways they use the funds to improve educational opportunities and learning outcomes. And these stories are more compelling (and protected from budget politics) when they are built upon evidence.

    Too few resources

    While this is a critical time for evidence-based and effective program practices, here is the rub: The education sector is just beginning to build out this body of knowledge, so school leaders are often forging ahead without the kind of guidance and research they need to succeed.

    The challenges are significant and evident throughout the education technology life cycle. For example, it is clear that evidence should influence procurement standards, but that is rarely the case. The issue of “procurement standards” is linked to cost thresholds and related competitive and transparent bidding requirements. It is seldom connected with measures of prior success and research related to implementation and program efficacy. Those types of standards are foreign to most state and local educational agencies, left to “innovative” educational agencies and organizations, like Digital Promise’s League of Innovative Schools, to explore.

    Once the trials of implementation begin, school leaders and their vendors typically act without clear models of success and in isolation. There just are not good data on efficacy for most products and implementation practices, which means that leaders cannot avail themselves of models of success and networks of practical experience. Some schools and institutions with the financial wherewithal, like Virginia’s Albemarle and Fairfax County Public Schools, have created their own research process to produce their own evidence.

    In Albemarle, for example, learning technology staff run test beds of solutions to instructional and enterprise needs. Staff spend time observing students and staff using new devices and cloud-based services. They seek feedback and performance data from both teachers and students in response to questions about the efficacy of the solution. They begin with questions like “If a service is designed to support literacy development, what variable are we attempting to affect? What information do we need to validate significant impact?” Yet, like the “innovators” of procurement standards, these are the exceptions to the rule.
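    To make those test-bed questions concrete, here is a minimal sketch, in Python with invented data, of the kind of pre/post comparison a district test bed might run. The assessment scores, group sizes, and thresholds are all assumptions for illustration, not Albemarle’s actual process.

    ```python
    # A minimal sketch (hypothetical data throughout) of the kind of check a
    # district test bed might run: did students using a literacy service gain
    # more on a validated reading assessment than a comparison group?
    from math import sqrt
    from statistics import fmean, stdev

    pilot_gains = [8, 5, 11, 7, 9, 6, 10, 4, 12, 8]    # score gains, pilot classrooms
    comparison_gains = [5, 6, 4, 7, 3, 6, 5, 8, 4, 6]  # score gains, comparison classrooms

    # Difference in mean gains, with a rough 95% margin of error (Welch-style SE)
    diff = fmean(pilot_gains) - fmean(comparison_gains)
    se = sqrt(stdev(pilot_gains) ** 2 / len(pilot_gains)
              + stdev(comparison_gains) ** 2 / len(comparison_gains))

    print(f"Mean gain difference: {diff:.1f} points (+/- {1.96 * se:.1f})")
    # A real evaluation would also pre-register the target variable, check
    # implementation fidelity, and look beyond the average effect.
    ```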

    And as schools make headway and immerse themselves in new technologies and services, the bytes of data and useful information multiply, but the time and capacity necessary to make them useful remains scarce. Most schools are not like Fairfax and Albemarle counties. They do not have the staff and experts required to parse the data and uncover meaningful insights into what’s working and what’s not. That kind of work and expertise isn’t something that can be simply layered onto existing responsibilities without overloading and possibly burning out staff.

    “Many schools will have clear goals, a well-defined action plan that includes professional learning opportunities, mentoring, and a monitoring timeline,” said Chrisandra Richardson, a former associate superintendent for Montgomery County Public Schools in Maryland. “But too few schools know how to exercise a continuous improvement mindset, how to continuously ask: ‘Are we doing what we said we would do — and how do we course-correct if we are not?’ ”

    Immediate next steps

    So what needs to be done? Here are five specific issues that the education community (philanthropies, universities, vendors, and agencies) should rally around.

    • Set common standards for procurement. If every leader must reinvent the wheel when it comes to identifying key elements of the technology evaluation rubric, we will ensure we make little progress — and do so slowly. The sector should collectively secure consensus on the baseline procurement standards for evidence-based and research practices and provide them to leaders through free or open-source evaluative rubrics or “look fors” they can easily access and employ.
    • Make evidence-based practice a core skill for school leadership. Every few years, leaders in the field try to pin down exactly what core competencies every school leader should possess (or endeavor to develop). If we are to achieve a field in which leaders know what evidence-based decision-making looks like, we must incorporate it into professional standards and include it among our evaluative criteria.
    • Find and elevate exemplars. As Charles Duhigg points out in his recent best seller Smarter Faster Better, productive and effective people do their work with clear and frequently rehearsed mental models of how something should work. Without them, decision-making can become unmoored, wasteful, and sometimes even dangerous. Our school leaders need to know what successful evidence-based practices look like. We cannot anticipate that leader or educator training will incorporate good decision-making strategies around education technologies in the immediate future, so we should find alternative ways of showcasing these models.
    • Define “best practice” in technology evaluation and adoption. Rather than force every school leader to build, and struggle to fund, their own processes, we can develop models that alleviate the need for schools to invest in their own research and evidence departments. Not all school districts enjoy the resources to investigate their own tools, and different contexts demand differing considerations. Best practices help leaders navigate that variation within the confines of their resources. The Ed Tech RCE Coach is one example of a set of free, open-source tools available to help schools embed best practices in their decision-making.
    • Promote continuous evaluation and improvement. Decisions, even the best ones, have a shelf life. They may seem appropriate until evidence proves otherwise. But without a process to gather information and assess decision-making efficacy, it’s difficult to learn from any decisions (good or bad). Together, we should promote school practices that embrace continuous research and improvement practices within and across financial and program divisions to increase the likelihood of finding and keeping the best technologies.

    The urgency to learn about and apply evidence to buying, using, and measuring success with ed tech is pressing, but the resources and protocols school leaders need to make it happen are scarce. These are conditions that position our school leaders for failure — unless the education community and its stakeholders come together to take some immediate actions.

    This series is produced in partnership with Pearson. The 74 originally published this article on September 11, 2017, and it was re-posted here with permission.


    Communicate often and better: How to make education research more meaningful

    By Jay Lynch, PhD and Nathan Martin, Pearson

    Question: What do we learn from a study that shows a technique or technology likely has affected an educational outcome?

    Answer: Not nearly enough.

    Despite widespread criticism, the field of education research continues to emphasize statistical significance—rejecting the conclusion that chance is a plausible explanation for an observed effect—while largely neglecting questions of precision and practical importance. Sure, a study may show that an intervention likely has an effect on learning, but so what? Even researchers’ recent efforts to estimate the size of an effect don’t answer key questions. What is the real-world impact on learners? How precisely is the effect estimated? Is the effect credible and reliable?

    Yet it’s the practical significance of research findings that educators, administrators, parents and students really care about when it comes to evaluating educational interventions. This has led to what Russ Whitehurst has called a “mismatch between what education decision makers want from the education research and what the education research community is providing.”

    Unfortunately, education researchers are not expected to interpret the practical significance of their findings or acknowledge the often embarrassingly large degree of uncertainty associated with their observations. So, education research literature is filled with results that are almost always statistically significant but rarely informative.

    Early evidence suggests that many edtech companies are following the same path. But we believe that they have the opportunity to change course and adopt more meaningful ways of interpreting and communicating research that will provide education decision makers with the information they need to help learners succeed.

    Admitting What You Don’t Know

    For educational research to be more meaningful, researchers will have to acknowledge its limits. Although published research often projects a sense of objectivity and certainty about study findings, accepting subjectivity and uncertainty is a critical element of the scientific process.

    On the positive side, some researchers have begun to report standardized effect sizes, a calculation that helps compare outcomes across different groups on a common scale. But researchers rarely interpret the meaning of these figures. And the figures can be confusing. A ‘large’ effect actually may be quite small when compared to available alternatives or when factoring in the length of treatment, and a ‘small’ effect may be highly impactful because it is simple to implement or cumulative in nature.

    Confused? Imagine the plight of a teacher trying to decide which products to use based on evidence—an issue of increased importance since the Every Student Succeeds Act (ESSA) ties certain federal funds to evidence of effectiveness. The newly launched Evidence for ESSA admirably tries to support that process, complementing the What Works Clearinghouse and pointing to programs that have been deemed “effective.” But when that teacher starts comparing products, say Math in Focus (effect size: +0.18) and Pirate Math (effect size: +0.37), the best choice isn’t readily apparent.

    It’s also important to note that every intervention’s observed “effect” is associated with a quantifiable degree of uncertainty. By glossing over this fact, researchers risk promoting a false sense of precision and making it harder to craft useful data-driven solutions. While acknowledging uncertainty is likely to temper excitement about many research findings, in the end it will support more honest evaluations of an intervention’s likely effectiveness.
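    As a rough illustration of both points, here is a short Python sketch, using invented summary statistics rather than the Evidence for ESSA figures, of how reporting an effect size together with its confidence interval changes a comparison between two programs.

    ```python
    # Illustrative only: compare two hypothetical programs by effect size *and*
    # the uncertainty around it, not by the point estimate alone.
    import math

    def cohens_d_with_ci(mean_t, sd_t, n_t, mean_c, sd_c, n_c, z=1.96):
        """Standardized mean difference (Cohen's d) with an approximate 95% CI."""
        pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                              / (n_t + n_c - 2))
        d = (mean_t - mean_c) / pooled_sd
        # Approximate standard error of d
        se = math.sqrt((n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c)))
        return d, (d - z * se, d + z * se)

    # Two hypothetical programs evaluated in studies of very different sizes
    d_a, ci_a = cohens_d_with_ci(75.2, 14.0, 60, 72.7, 14.0, 60)      # small study
    d_b, ci_b = cohens_d_with_ci(73.9, 14.0, 2000, 72.7, 14.0, 2000)  # large study

    print(f"Program A: d = {d_a:+.2f}, 95% CI ({ci_a[0]:+.2f}, {ci_a[1]:+.2f})")
    print(f"Program B: d = {d_b:+.2f}, 95% CI ({ci_b[0]:+.2f}, {ci_b[1]:+.2f})")
    # Program A's point estimate looks bigger, but its interval is wide enough
    # to include zero; Program B's effect is smaller but precisely estimated.
    ```

    Reported this way, the honest answer is often “we are not sure yet,” which is exactly the information a decision maker needs.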

    Communicate Better, Not Just More

    In addition to faithfully describing the practical significance and uncertainty around a finding, there is also a need to clearly communicate information about research quality in ways that are accessible to non-specialists. The broader educational research community has been notably unwilling to tackle the challenge of discriminating between high-quality research and quackery on behalf of educators and other non-specialists. Researchers are long overdue to be forthcoming about the quality and reliability of interventions in ways that educational practitioners can understand and trust.

    Trust is the key. Whatever issues might surround the reporting of research results, educators are suspicious of people who have never been in the classroom. If a result or debunked academic fad (e.g. learning styles) doesn’t match their experience, they will be tempted to dismiss it. As education research becomes more rigorous, relevant, and understandable, we hope that trust will grow. Even simply categorizing research as either “replicated” or “unchallenged” would be a powerful initial filtering technique given the paucity of replication research in education. The alternative is to leave educators and policy-makers intellectually adrift, susceptible to whatever educational fad is popular at the moment.

    At the same time, we have to improve our understanding of how consumers of education research understand research claims. For instance, surveys reveal that even academic researchers commonly misinterpret the meaning of common concepts like statistical significance and confidence intervals. As a result, there is a pressing need to understand how those involved in education interpret (rightly or wrongly) common statistical ideas and decipher research claims.
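    One of the most commonly misread concepts is the confidence interval itself. The minimal simulation below, with an assumed true effect, shows what the “95%” actually refers to: long-run coverage across repeated studies, not the probability that any single published interval contains the true effect.

    ```python
    # What "95% confidence" guarantees: across many repeated studies, about 95%
    # of the computed intervals cover the true effect. Any single interval
    # either contains it or it doesn't. (Assumed effect; simulated data.)
    import random
    import statistics

    random.seed(1)
    TRUE_EFFECT = 0.3        # assumed true standardized effect
    N, TRIALS = 50, 10_000   # students per study, number of simulated studies

    covered = 0
    for _ in range(TRIALS):
        sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
        m = statistics.fmean(sample)
        se = statistics.stdev(sample) / N ** 0.5
        covered += (m - 1.96 * se <= TRUE_EFFECT <= m + 1.96 * se)

    print(f"Coverage over {TRIALS} simulated studies: {covered / TRIALS:.1%}")
    # Prints roughly 95%: a long-run property of the procedure, not a
    # statement about any one published interval.
    ```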

    A Blueprint For Change

    So, how can the education technology community help address these issues?

    Despite the money and time spent conducting efficacy studies on their products, surveys reveal that research often plays a minor role in edtech consumer purchasing decisions. The opaqueness and perceived irrelevance of edtech research studies, which mirror the reporting conventions typically found in academia, no doubt contribute to this unfortunate fact. Educators and administrators rarely possess the research and statistical literacy to interpret the meaning and implications of research focused on claims of statistical significance and measuring indirect proxies for learning. This might help explain why even well-meaning educators fall victim to “learning myths.”

    And when nearly every edtech company is amassing troves of research studies, all ostensibly supporting the efficacy of their products (with the quality and reliability of this research varying widely), it is understandable that edtech consumers treat them all with equal incredulity.

    So, if the current edtech emphasis on efficacy is going to amount to more than a passing fad and avoid devolving into a costly marketing scheme, edtech companies might start by taking the following actions:

    • Edtech researchers should interpret the practical significance and uncertainty associated with their study findings. The researchers conducting an experiment are best qualified to answer interpretive questions around the real-world value of study findings and we should expect that they make an effort to do so.
    • As an industry, edtech needs to work toward adopting standardized ways to communicate the quality and strength of evidence as it relates to efficacy research. The What Works Clearinghouse has made important steps, but it is critical that relevant information is brought to the point of decision for educators. This work could resemble something like food labels for edtech products.
    • Researchers should increasingly use data visualizations to make complex findings more intuitive while making additional efforts to understand how non-specialists interpret and understand frequently reported statistical ideas.
    • Finally, researchers should employ direct measures of learning whenever possible rather than relying on misleading proxies (e.g., grades or student perceptions of learning) to ensure that the findings reflect what educators really care about. This also includes using validated assessments and focusing on long-term learning gains rather than short-term performance improvement.

    This series is produced in partnership with Pearson. EdSurge originally published this article on April 1, 2017, and it was re-posted here with permission.

     


    Technical & human problems with anthropomorphism & technopomorphism

    By Denis Hurley, Director of Future Technologies, Pearson

    Anthropomorphism is the attribution of human traits, emotions, and intentions to non-human entities (OED). It has been used in storytelling from Aesop to Zootopia, and people debate its impact on how we view gods in religion and animals in the wild. That debate is out of scope for this short piece.

    When it comes to technology, anthropomorphism is certainly more problematic than it is useful. Here are three examples:

    1. Consider how artificial intelligence is often described as working like a human brain, which is not how AI works. This results in people misunderstanding its potential uses, attempting to apply it in inappropriate ways, and failing to consider applications where it could provide more value. Ines Montani has written an excellent summary of AI’s PR problem.
    2. More importantly, anthropomorphism contributes to our fear of progress, which often leads to full-blown technopanics. We are currently in a technopanic brought about by the explosion of development in automation and data science. Physically, these machines are often depicted as bipedal killing machines, even though a bipedal form is not the most effective design for mobility. Regarding intent, superintelligent machines are thought of as a threat not just to employment but to our survival as a species. This assumes that these machines will treat Homo sapiens the way Homo sapiens has treated other species on this planet.
    3. Pearson colleague Paul Del Signore asked via Twitter, “Would you say making AI speak more human-like is a successful form of anthropomorphism?” This brings to mind a third major problem with anthropomorphism: the uncanny valley. While adding humanlike interactions can contribute to good UX, too much (but not quite enough) similarity to a human can result in frustration, discomfort, and even revulsion.

    Historically, we have used technology to achieve both selfish and altruistic goals. Overwhelmingly, however, technology has helped human civilization reach the most peaceful and healthy point in its history. To continue on this path, we must design machines to function in ways that make the most of their best machine-like abilities.

    Technopomorphism is the attribution of technological characteristics to human traits, emotions, intentions, or biological functions. Think of how a thought process may be described as cogs turning in a machine, or someone’s capacity for work described in terms of bandwidth.

    A Google search for the term “technopomorphism” only returns 40 results, and it is not listed in any online dictionary. However, I think the term is useful because it helps us to be mindful of our difference from machines.

    It’s natural for humans to use imagery that we do understand to try to describe things we don’t yet understand, like consciousness. Combined with our innate fear of dying, we imagine ways of deconstructing and reconstructing ourselves as immortal or as one with technology (singularity). This is problematic for at least two reasons:

    1. It restricts the ways in which we may understand new discoveries about ourselves to very limited forms.
    2. It often leads to teaching and training humans to function as machines, which is not the best use of our potential as humans.

    It is increasingly important that we understand how humans can best work with technology for the sake of learning. In the age of exponential technologies, that which makes us most human will be most highly valued for employment and most often drawn on for personal enrichment.

    There may be some similarities, but we’re not machines. At least, not yet. In the meantime, I advocate for “centaur mentality.”

     


    Can Edtech support - and even save - educational research?

    By Jay Lynch, PhD and Nathan Martin, Pearson

    There is a crisis engulfing the social sciences. What was thought to be known about psychology—based on published results and research—is being called into question by new findings and the efforts of groups like the Reproducibility Project. What we know is in question, and so is how we come to know it. Long-institutionalized practices of scientific inquiry in the social sciences are being actively questioned, and proposals put forth for needed reforms.

    While the fields of academia burn with this discussion, education research has remained largely untouched. But education is not immune to problems endemic in fields like psychology and medicine. In fact, there’s a strong case that the problems emerging in other fields are even worse in educational research, where external and internal critical scrutiny have been lacking. A recent review of the top 100 education journals found that only 0.13% of published articles were replication studies. Education waits for its own crusading Brian Nosek to disrupt the canon of findings. Winter is coming.

    This should not be breaking news. Education research has long been criticized for its inability to generate a reliable and impactful evidence base. It has been derided for problematic statistical and methodological practices that hinder knowledge accumulation and encourage the adoption of unproven interventions. For its failure to communicate the uncertainty and relevance associated with research findings, like Value-Added Measures for teachers, in ways that practitioners can understand. And for struggling to impact educational habits (at least in the US) and how we develop, buy, and learn from (see Mike Petrilli’s summation) the best practices and tools.

    Unfortunately, decades of withering criticism have done little to change the methods and incentives of educational research in ways necessary to improve the reliability and usefulness of findings. The research community appears to be in no rush to alter its well-trodden path—even if the path is one of continued irrelevance. Something must change if educational research is to meaningfully impact teaching and learning. Yet history suggests the impetus for this change is unlikely to originate from within academia.

    Can edtech improve the quality and usefulness of educational research? We may be biased (as colleagues at a large and scrutinized edtech company), but we aren’t naïve. We know it might sound farcical to suggest technology companies may play a critical role in improving the quality of education research, given almost weekly revelations about corporations engaging in concerted efforts to distort and shape research results to fit their interests. It’s shocking to read of efforts to warp public perception of the effects of sugar on heart disease or the effectiveness of antidepressants. It would be foolish not to view research conducted or paid for by corporations with a healthy degree of skepticism.

    Yet we believe there are signs of promise. The last few years have seen a movement of companies seeking to research and report on the efficacy of educational products. The movement has benefited from the leadership of the Office of Education Technology, the Gates Foundation, the Learning Assembly, Digital Promise, and countless others. Our own company has been on this road since 2013. (It’s not been easy!)

    These efforts represent opportunities to foment long-needed improvements in the practice of education research. A chance to redress education research’s most glaring weakness: its historical inability to appreciably impact the everyday activities of learning and teaching.

    Incentives for edtech companies to adopt better research practices already exist and there is early evidence of openness to change. Edtech companies possess a number of crucial advantages when it comes to conducting the types of research education desperately needs, including:

    • access to growing troves of digital learning data;
    • close partnerships with institutions, faculty, and students;
    • the resources necessary to conduct large and representative intervention studies;
    • in-house expertise in the diverse specialties (e.g., computer scientists, statisticians, research methodologists, educational psychologists, UX researchers, instructional designers, ed policy experts, etc.) that must increasingly collaborate to carry out more informative research;
    • a research audience consisting primarily of educators, students, and other non-specialists.

    The real worry with edtech companies’ nascent efforts to conduct efficacy research is not that they will fail to match the quality and objectivity typical of most educational research, but that they will fall into the same traps that currently plague such efforts. Rather than looking for what would be best for teachers and learners, entrepreneurs may focus on the wrong measures (p-values, for instance) that confuse people rather than enlighten them.
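    A toy calculation makes the p-value trap visible. The sketch below, with invented test scores, shows how an educationally trivial half-point difference becomes “statistically significant” once the sample is large enough.

    ```python
    # Hypothetical numbers, approximate two-sample z-test: with a big enough
    # sample, a practically trivial effect is almost guaranteed to come out
    # "statistically significant".
    from math import erf, sqrt

    def two_sample_p(mean_a, mean_b, sd, n):
        """Two-sided p-value for two equal-sized, equal-variance groups."""
        z = abs(mean_a - mean_b) / (sd * sqrt(2 / n))
        return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

    # A 0.5-point difference on a 100-point test (sd = 15): educationally trivial
    for n in (100, 1000, 100000):
        print(f"n = {n:>6}: p = {two_sample_p(70.5, 70.0, 15, n):.4f}")
    # n =    100: p ~ 0.81   -> "no effect"
    # n =   1000: p ~ 0.46   -> "no effect"
    # n = 100000: p = 0.0000 -> "significant", yet still only half a point
    ```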

    If this growing edtech movement repeats the follies of the current paradigm of educational research, it will fail to seize the moment to adopt reforms that can significantly aid our efforts to understand how best to help people teach and learn. And we will miss an important opportunity to enact systemic changes in research practice across the edtech industry with the hope that academia follows suit.

    Our goal over the next three articles is to hold a mirror up, highlighting several crucial shortcomings of educational research. These institutionalized practices significantly limit its impact and informativeness.

    We argue that edtech is uniquely incentivized and positioned to realize long-needed research improvements through its efficacy efforts.

    Independent education research is a critical part of the learning world, but it needs improvement. It needs a new role model, its own George Washington Carver, a figure willing to test theories in the field, learn from them, and then communicate them back to practitioners. In particular, we will be focusing on three key ideas:

    Why ‘What Works’ Doesn’t: Education research needs to move beyond simply evaluating whether or not an effect exists; that is, whether an educational intervention ‘works’. The ubiquitous use of null hypothesis significance testing in educational research is an epistemic dead end. Instead, education researchers need to adopt more creative and flexible methods of data analysis, focus on identifying and explaining important variations hidden under mean scores (a toy simulation after these three summaries shows how much a mean can hide), and devote themselves to developing robust theories capable of generating testable predictions that are refined and improved over time.

    Desperately Seeking Relevance: Education researchers are rarely expected to interpret the practical significance of their findings or report results in ways that are understandable to non-specialists making decisions based on their work. Although there has been progress in encouraging researchers to report standardized mean differences and correlation coefficients (i.e., effect sizes), this is not enough. In addition, researchers need to clearly communicate the importance of study findings within the context of alternative options and in relation to concrete benchmarks, openly acknowledge uncertainty and variation in their results, and refuse to be content measuring misleading proxies for what really matters.

    Embracing the Milieu: For research to meaningfully impact teaching and learning, it will need to expand beyond an emphasis on controlled intervention studies and prioritize the messy, real-life conditions facing teachers and students. More energy must be devoted to the creative and problem-solving work of translating research into useful and practical tools for practitioners, an intermediary function explicitly focused on inventing, exploring, and implementing research-based solutions that are responsive to the needs and constraints of everyday teaching.
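    To make the first of these ideas concrete, here is a toy Python simulation, with invented per-student gains, of how two interventions with identical average effects can behave very differently beneath the mean.

    ```python
    # Two hypothetical interventions with the same overall mean gain, split by
    # a subgroup that matters (e.g., prior readers vs. striving readers).
    from statistics import fmean

    uniform_gains = {"prior": [4, 5, 4, 5], "striving": [5, 4, 5, 4]}
    lopsided_gains = {"prior": [9, 8, 9, 8], "striving": [0, 1, 0, 1]}

    for name, groups in (("Uniform", uniform_gains), ("Lopsided", lopsided_gains)):
        overall = fmean(g for gains in groups.values() for g in gains)
        by_group = {k: round(fmean(v), 1) for k, v in groups.items()}
        print(f"{name}: overall mean gain {overall:.1f}, by subgroup {by_group}")
    # Both print an overall mean of 4.5, but the second intervention helps one
    # group and does nothing for the other. A "what works" verdict based on the
    # mean alone would treat them as equivalent.
    ```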

    Ultimately education research is about more than just publication. It’s about improving the lives of students and teachers. We don’t claim to have the complete answers but, as we expand these key principles over coming weeks, we want to offer steps edtech companies can take to improve the quality and value of educational research. These are things we’ve learned and things we are still learning.

    This series is produced in partnership with Pearson. EdSurge originally published this article on January 6, 2017, and it was re-posted here with permission.

     


    Learning through both physical and virtual discovery

    By Denis Hurley, Director of Future Technologies, Pearson

    This morning, I read Bill McKibben’s “Pause! We Can Go Back!,” a review of David Sax’s The Revenge of Analog: Real Things and Why They Matter. My friend and mentor of twenty years, the filmmaker Jill Godmilow, emailed it to me. I immediately thought of Delicate Steve’s interview with Bob Boilen on “All Songs Considered,” and then I mentally time-traveled to 2011…

    I was in Austin in 2011 for SXSW, learning from other startups, networking, and promoting my own digital products. The interactive component of the conference ended with a “surprise” performance at the enormous Stubb’s BBQ concert venue. I reluctantly waited in line with hundreds of others, hopeful to hear something like LCD Soundsystem, who had appeared in a previous year. Once we were all inside, The Foo Fighters took the stage. Considered by many to be “the last great American rock band,” they’re just not my thing. A traveling companion saw the boredom on my face and asked, “Do you want to hear something different?”

    6th Street was dead for the first time all week (nearly all the conference attendees were at Stubb’s), and we popped into a small bar where about ten other patrons huddled near a wiry young man on a small stage. Delicate Steve began to play “The Ballad of Speck and Pebble.” My brain lit up. It was one of the most inspiring live performances I’ve ever heard.

    In my kitchen, six years later, while I was making applesauce with my earbuds in, Slate’s “Political Gabfest” ended, and Mr. Boilen’s voice came on to introduce Steve Marion, aka Delicate Steve, on “All Songs Considered.” Marion talked about being a “Napster kid” as well as how he was inspired to play music after his grandmother gave him a toy guitar.

    He dove into the rabbit holes of discovery that were available via the Internet to a kid living in northwestern New Jersey. Driven by curiosity and play, using the physical and virtual tools available to him, he began to create. Last year, he played slide guitar on Paul Simon’s new album, and next week, he’ll be at The Bowery Ballroom in New York City.

    In his review in The New York Review of Books, McKibben comments, “Spotify’s playlists show people picking the same tunes over and over.” I believe the same was true when analog music dominated. Virgin Megastore promoted the latest big release from one of the giant record labels.

    The difference now is that more tools — virtual and physical — are available to us. How we use them is up to us. We need to ensure that everyone, especially young people, knows they exist and how to use them for discovery. Dig deep into that artist’s archive on Spotify. Flip through those old records on Bleecker Street.

    In the late 1990s, Jill Godmilow taught me how to edit film and sound by hand while I was a student at the University of Notre Dame. I used an 8-plate Steenbeck. It was a lot of work to cut a film like that, but it helped me understand the value of a frame: 1/24 of a second.

    Now I have a child, and I try to help her understand how things work by making mechanical objects available to her. She’ll pick up the hand-made kaleidoscope I brought back from London, or crank the Kikkerland music box to hear “Waltzing Matilda.” Together, we play both Minecraft and Clue. Her favorite Christmas present last month was a record player. She chooses to put on the Taylor Swift record “Red” over and over and over again. She also explores Minecraft videos made by other kids all over the world.

    Some of these interactions blend the virtual and the physical, like using the Osmo pizza game to learn math through play, or programming Dash to wheel around the apartment to learn problem-solving.

    We can foster creativity and encourage exploration using whatever tools we have available. I am not advocating a constant barrage of entertainment or toys — there is also value in escaping into a book or a tent in the woods — but new digital tools are not necessarily a bad thing, and to many, they offer ways to learn and build, expanding their minds and enriching our culture.

    Explore, be weird, enjoy what you do, learn through what you enjoy. But be careful not to lose yourself entirely in the virtual world. The physical world offers a nearly limitless supply of new experiences and adventures. These thrill us because of our human nature, and even as we learn to embrace the digital more fully, we should do so to enrich our lives, not to replace something that doesn’t need replacing.

    I will always be grateful to Jill Godmilow for showing me how to analyze the finest moving parts of a completed whole, which I often have to do in a purely digital format, where the individual elements are not so apparent. I appreciate the music of Delicate Steve, meticulously constructed with his mind and fingers through a medley of neuron-firings, Google searches, and guitar riffs.

    I am thankful that my daughter wonders at our Remington typewriter and miniature carousel, watches the interlocking pieces, and reconstructs some of these relationships with blocks on her iPad, with dominos on the table, and with her friends in the schoolyard.