Fads come and go, and ‘Design Thinking’ seems to be one on the rise at the moment. It’s a process with many variants but, from the talks I’ve seen on the subject and the results I’ve seen emerge from the process, I’m not wholly convinced. The problem is that we may well need less ‘design’ and more ‘thinking’. The combination is likely to dumb down the learning in favour of superficial design. Imagine applying this approach to medicine. You wouldn’t get far by simply asking patients what they need to cure their problems; you need a growing body of good research, tried and tested methods, and expertise. So let’s break the Design Thinking process down to see how it works in practice and examine the steps one by one.
Empathy for the learner is an obvious virtue, but what exactly does that mean? For years, in practice, it meant Learning Styles. For many it still does: being sensitive to learners’ differences, diversity and needs in terms of preferences. This, of course, has been a disastrous waste of time, as research has shown. Other fashionable outcomes over-sensitive to supposed learner needs have been Myers-Briggs, NLP and no end of faddish ideas about what we ‘think’ learners need, rather than what research tells us they actually benefit from.
Research in cognitive psychology has given us clear evidence that learners are often mistaken when it comes to judgements about their own learning. Bjork, along with many other high-quality researchers, has shown that learning is “quite misunderstood (by learners)…. we have a flawed model of how we learn and remember”. There’s often a negative correlation between people’s judgements of their learning – what they think they have learnt and how they think they learn best – and what they’ve ‘actually’ learnt and the way they can ‘actually’ optimise their learning. In short, our own perceptions of learning are seriously delusional. This is why engagement, fun, learner surveys and happy sheets are such bad measures of what is actually learnt, and the enemy of optimal learning strategies. Put simply, empathy and asking learners what they want can seriously damage design.
In truth, a good needs analysis, including a thorough understanding of your target audience, is not bettered by renaming it ‘empathy’. That simply replaces analysis with an abstract word to make it sound more in tune with the times.
Identifying learner needs and problems has led to a ton of wasted energy slicing learners up into digital natives/immigrants and personas that often average out differentiation and personalisation. The solution is not to identify ideal learners as personas but to provide sophisticated pedagogic approaches that are adaptive and give personal feedback. Design Thinking makes the mistake of assuming there is such a thing as an ideal learner, without realising that you need analysis of the actual target audience, not ‘averaged out’ personas.
Design Thinking seems to push people towards thinking that learning problems are ‘design’ problems. Many are not. You need to understand the nature of the cognitive problems and the researched solutions to those problems. By all means define the problems, but know what a learning problem actually is.
One area, however, where I think Design Thinking could be useful is in identifying the context, workflow and moments of need – understanding the learner’s world and their business environment. That’s fine; on this I agree. But I rarely hear this from ‘Design Thinking’ practitioners, who tend to focus on the screen design itself rather than the design of a blended learning experience, based on the types of learning to be delivered in real environments, in the workflow, with performance support. You need a deep understanding of the technology and its limitations.
There is also an argument for having a complete set of skills on the team, but this has nothing to do with Design Thinking. The delivery of online learning is a complex mix of learning, design, technical, business and fiscal challenges. What’s needed is balance in the team, not a process that values an abstract method with a focus on ‘design’ alone.
This is the key step, where design thinkers are supposed to provide challenge and creative solutions. It is also the step where it can all go wrong. Creative solutions tend to be based on media delivery, not on effortful learning, chunking, interleaving, open input, spaced practice and the many other deeper pedagogic issues that need to be understood before you design anything. There’s often a dearth of knowledge about the decades of research in learning and cognitive science that should inform design. It is replaced by rather superficial ideas around media production and presentation – hence the edutainment we get, all ‘tainment’ and no ‘edu’. The focus is on presentation, not effortful learning.
Few design thinkers I’ve heard show much knowledge of designing for cognitive load or avoiding redundancy, and have scant knowledge of the piles of brilliant work done by Nass, Reeves, Mayer, Clark, Roediger, McDaniel and many other researchers who have spent decades uncovering what good online learning design requires. This is also why co-design is so dangerous. It leads to easy learning, all front and no depth.
What I’ve seen is lots of ‘ideation’ around gamification (though usually only the trivial, Pavlovian aspects of games: scoring, badges and leaderboards). Even worse is the over-designed, media-rich, click-through learning, loosely punctuated by multiple-choice questions. Remember that media-rich does not mean mind-rich. Even then, designers rarely know the basic research, for example on the optimal number of options in MCQs, or that open input is superior.
It is easy to prototype surface designs and get voiced feedback on what people like, but this is a tiny part of the story. It is pointless prototyping learning solutions in the hope that you’ll uncover real learning efficacy (as opposed to look and feel) without evaluating those different solutions. This means the tricky and inconvenient business of real research, with controls, reasonable sample sizes, randomly selected learners and clear measurement of retention in long-term memory, even transfer. Few with just ‘Design Thinking’ training have the skills, time and budget to do this. This is why we must rely on past research and build on this body of knowledge, just as clinicians do in medicine. We need to be aware of the work of Bjork, Roediger, Karpicke, Heustler and Metcalfe, who show that asking learners what they think is counterproductive, and build on the research showing which techniques work for high retention.
A problem is that prototyping is often defined by the primitive tools used by learning designers, which can only produce presentation-like, souped-up PowerPoint and MCQs, whereas real learning requires much deeper structures. Few have made the effort to explore tools that allow open, free-text input, which really does increase retention and recall. Low-fidelity prototyping won’t hack it if you want open input and sophisticated adaptive, personalised learning through AI – and that’s where things are heading.
One area where Design Thinking can help is with the ‘user interface’, but this is only one part of the deliverable and often not that important. It is important to make it as frictionless as possible, but that comes as much through technical advances (touchscreen, voice, open input) as through design.
Testing is a complex business. I used to run a large online learning test lab – the largest in the UK. We tested for usability, accessibility, quality assurance and technical conformance and, believe me, to focus just on ‘design’ is a big mistake. You need to focus not on surface design but on what matters more: learning efficacy. Once again, learner testimony can help, but it can also hinder. Learners often report illusory learning when presented with high-quality media – it means absolutely nothing. Testing is pointless if you’re not testing the real goal: actual retained learning. Asking people for qualitative opinions does not do that.
In truth, testing is quite tricky. You have to be clear about what you are testing, cover everything and have good reporting. There are tried and tested methods that few have ever studied, so this is a really weak link. Just shoving something under the nose of a learner is not enough. We found early on that a small number of iterations with an expert is what really works with interface design, along with A/B testing – not some simple suck-it-and-see trial.
I’ve heard several presentations on this and done the reading, but my reaction is still the same: is that it? It seems like a short-circuited version of a poor project management course. I honestly think the danger of ‘Design Thinking’ is that it holds us back. We’ve had this for several years now, where design trumps deep thinking, knowledge of how we learn, knowledge of cognitive overload and knowledge of optimal learning strategies. It gives us the illusion of creativity, but at the expense of sound learning. Walk around any large online learning exhibition and observe the output: over-engineered design that lacks depth. Design Thinking lures us into thinking we have solved learning problems when all we have done is polish presentation. The real innovations I’ve seen come from a deep understanding of the research and the technology, and from innovative solutions based on that research, like nudge learning and WildFire. Delivery, I think, is better rooted in strong practices, such as ISO standards and evidence-guided practices that have evolved over time, not simplistic processes that are often simplified further and sold as bromides.