
“Learning Engineering” Making its Way in the World

CompetencyWorks Blog

Author(s): Bror Saxberg

Issue(s): Federal Policy, Modernize HEA


This post originally appeared at Getting Smart on July 26, 2016, and on Bror’s Blog on July 21, 2016.

I recently (re)stumbled across an interesting article in EdSurge about using educational impact to evaluate ed-tech companies and services. It seems an obvious thing, but as the article points out, it’s not so simple to do. It reminded me of a range of efforts that are now popping up to assist us all with “learning engineering” work – applying good learning science and evidence-gathering at scale.

In my previous blog, I mentioned a range of resources that attempt to synthesize evidence-based learning for use in the field, including Clark and Mayer’s E-Learning and the Science of Instruction, which I have recommended for years to many people as a great initial synthesis. (OK, Dr. Hess would be grumpy if I did not mention our own efforts along those lines, Breakthrough Leadership in the Digital Age: Using Learning Science to Reboot Schooling.) There are now other efforts underway to put such syntheses to work:

  • MIT’s Online Education Policy Initiative group recently released an excellent report on how various kinds of learning science should be applied in the higher education context, explicitly calling for the creation of a “learning engineering” profession to help drive evidence-based progress among faculty, who are far more likely to be expert in domain research than in applying learning science and good psychometric practice to their own domains. (Intriguingly, the report reminds us that the term “learning engineer” is pretty old – it was first used with exactly the same meaning by the Nobel laureate Herb Simon back in 1967.)
  • Carnegie Mellon has spearheaded the creation of a Global Learning Council to pull together many leading higher education institutions around the world to work on applying learning science at scale, on understanding cross-cultural diversity as an influence on evidence-based learning, and on how to think about data security in this context. They’ve put together a website that includes examples of good practice as well as recommendations for action, which will evolve over time.
  • Last year saw the start of Deans for Impact, a group of 30+ deans of schools of education who have committed to using evidence about how their teachers perform after graduation to improve their programs, and to teaching those teachers how to apply learning science in the classroom as part of their training.
  • A recent effort started by the University of Virginia, the Jefferson Education Accelerator, seeks to help mid-sized start-ups gather efficacy data by connecting them to learning science researchers. They’ve recently embarked on a year-long effort to understand what stands in the way of “learning engineering” being used in higher education.
  • The Australian Davos Connection (ADC) Forum has begun a series of working groups on education, innovation, and economic development in Australia. At least one of the groups, the Human Performance subgroup, has been digging into research on learning science and motivation as part of its contribution.
  • A relatively new non-profit, Transcend Education, was created by Aylon Samouha and Jeff Wetzler. Their intention is to help reform models accelerate their progress toward effectiveness at scale, in part by providing specific guidance (eventually including frameworks and templates) on applying learning science and good assessment practice.
  • McGraw-Hill has claimed the title of The Learning Science Company and is digging more deeply into the data sets it is beginning to accumulate through its LearnSmart technology, as well as working to incorporate better design practices into its product development processes.
  • Several years ago, Pearson made a point of focusing on the efficacy of its different offerings around the globe, including running a series of evaluations of different products and services.
  • The Department of Education has been running a series of efforts to design Rapid Cycle Technology Evaluation frameworks and tools, culminating in a set of pilots this fall with districts to lower the barriers to gathering the right kinds of evidence for different kinds of decisions. For example, deciding whether something can be used by teachers and students at scale calls for very different study designs than comparing the learning impact of two interventions (see the sketch after this list).
  • There are more!
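To make that last contrast concrete, here is a minimal sketch (my illustration, not anything from the Department’s toolkit) of why comparing the learning impact of two interventions demands a much heavier study design than a usability pilot: a standard normal-approximation power calculation for a two-arm comparison. The effect size, significance level, and power values below are assumptions chosen only for illustration.

```python
# Rough sample-size estimate for a two-arm comparison of learning impact,
# using the standard normal-approximation formula for a two-sample test.
# All parameter values are illustrative assumptions, not from the post.
import math
from scipy.stats import norm

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate learners needed per arm to detect `effect_size` (Cohen's d)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_power = norm.ppf(power)           # desired statistical power
    return math.ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

# Detecting a modest learning gain (d = 0.2) takes roughly 400 learners per arm...
print(n_per_group(0.2))   # ≈ 393
# ...whereas a "can teachers and students actually use this?" question can often
# be answered with a handful of classrooms and structured observation.
```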

Full disclosure: A number of people (Ken Koedinger, Marsha Lovett, me, others) have been involved in more than one of the above efforts – this is one area where we’re not afraid to put our thumbs on multiple scales!

We at Kaplan are also taking this to heart. I think of myself as Kaplan’s “Chief Learning Engineer,” and my colleagues around the company are doing a number of things to deploy evidence-based learning at scale:

  • Training all our instructional designers on what evidence suggests the best solutions to learning and motivation challenges might be – working to make “learning engineering” decisions in the field, with all the constraints and challenges the real world throws at us.
  • Running a series of randomized controlled trials in different parts of the company (more than 100 now) with the goal of finding out which interventions really make a difference to student success, improving learning outcomes, learning efficiency, or both (a minimal analysis sketch follows this list).
  • Focusing more time and attention on the validity and reliability of learning-evidence-gathering methods, an Achilles heel for many approaches to learning, and especially for ambitions to make adaptive learning an effective reality. (For example, if your learning outcome assessments are actually reading/writing tests, not tests of the real learning outcomes you intended, you’ve got the wrong end-metric for pilots or for adaptivity.)
  • Holding detailed learning strategy and planning conversations between Kaplan’s CEO, Andy Rosen, the CFO, Matt Seelye, and the different general managers across the business (not just the learning leaders), to ensure the right kinds of “learning engineering” tradeoffs get made within the constraints of each set of learners and each learning organization.
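As a concrete, purely illustrative example of what analyzing one of those randomized controlled trials can look like, the sketch below compares a control arm and a treatment arm on a final assessment score and reports the estimated effect. The scores, variable names, and group sizes are fabricated assumptions; a real trial would also demand the assessment-validity care described above, along with pre-registration and attention to multiple comparisons.

```python
# Minimal sketch: estimating an intervention's effect from RCT outcome data.
# The scores below are fabricated placeholders purely for illustration.
import numpy as np
from scipy import stats

control = np.array([68, 72, 75, 64, 70, 73, 69, 71, 66, 74], dtype=float)
treated = np.array([74, 78, 71, 80, 76, 73, 79, 75, 77, 72], dtype=float)

# Difference in mean assessment scores between the two arms.
effect = treated.mean() - control.mean()

# Welch's t-test: is the difference larger than chance alone would produce?
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)

# Cohen's d (pooled standard deviation) to judge practical significance.
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
cohens_d = effect / pooled_sd

print(f"mean gain: {effect:.1f} points, p = {p_value:.3f}, d = {cohens_d:.2f}")
```

The validity point in the bullet above still applies here: if the outcome measure is really a reading or writing test rather than the intended learning outcome, the effect estimate answers the wrong question, no matter how clean the statistics are.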

There’s a lot to do to get this right at scale – many of these threads have been around for years, but have not made it to scale. (For example, see John Bruer’s great synthesis in 1993 in the American Educator – still sounds pretty good!) However, it’s great to see more groups world-wide beginning to focus on implementation – how learning science and good evidence-gathering about learning can help materially and measurably accelerate learner success at scale.

There will come a time when we look back at how we “used to do learning,” and, just as we now look at medicine in the 19th century, wonder how we ever made progress without using the science and evidence that we can now generate. We’re not there yet – but we may be on our way.

Bror Saxberg is Vice President of Learning Science at the Chan Zuckerberg Initiative.