Surprise, Transformation, & Learning
Recently, I came across an article about a new explanation for behavior, including intelligence. This ‘free energy principle’ claims that entities (including us) “try to minimize the difference between their model of the world and their sense and associated perception”. In other words, we try to avoid surprise. When surprised, we can either act to bring the world back into alignment with our predictions, or we have to learn, updating our model to make better predictions.
Now, this fits in very nicely with the goal I’d been talking about yesterday: generating surprise. Surprise does seem to be a key to learning! It sounds worth exploring.
The theory is quite deep. So deep that people line up to ask questions of Karl Friston, the scientist behind it! Not just average people, but top scientists seek his help, because the theory promises answers for AI, mental illness, and more. Yet, at its core, the idea is simply that entities (all the way down, wrapped in Markov blankets, at the organ and cell level as well) work to minimize the difference between the world and their understanding of it. The mismatch that drives the choice of response (learning or acting) is ‘surprise’.
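To make the core loop concrete, here’s a toy sketch (my own illustration, not Friston’s actual formalism): an agent holds a belief about some quantity, senses the world, and reduces “surprise” (the prediction error) two ways at once, by updating its belief (learning) and by nudging the world toward its belief (acting). The names `step`, `learn_rate`, and `act_rate` are my own illustrative choices.

```python
def step(belief, world, learn_rate=0.3, act_rate=0.1):
    """One cycle: measure surprise, then learn and act to reduce it."""
    error = world - belief          # mismatch between model and sensation
    belief += learn_rate * error    # learning: pull the model toward the world
    world -= act_rate * error       # acting: push the world toward the model
    return belief, world, abs(error)

belief, world = 0.0, 10.0          # the model starts far from reality
errors = []
for _ in range(20):
    belief, world, e = step(belief, world)
    errors.append(e)

# Surprise shrinks each cycle as model and world converge.
print(f"first surprise: {errors[0]:.2f}, last surprise: {errors[-1]:.4f}")
```

Each cycle removes a fixed fraction of the mismatch, so surprise decays geometrically; the relative sizes of `learn_rate` and `act_rate` decide whether the agent mostly learns or mostly acts.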
This aligns nicely with the point I was making about trying to trigger transformative perceptions to drive learning. It suggests that we should be looking to create these disturbances in complacency. The valence of these surprises may need to be matched to the learning goal (transformative experience or transformative learning), but if we can generate an appropriate gap between expectation and outcome, we open the door to learning. People will want to refine their models, to adapt.
Going further, to also make learning desirable, the learner action that triggers the mismatch likely should be set in a task that learners viscerally grasp as important to them. The suggestion, then, is to create a situation where learners want to succeed, but their initial knowledge means they can’t. Then they’re ready to learn. And we (generally) know the rest.
It’s nice when an interest in AI coincides with an interest in learning. I’m excited about the potential of trying to build this systematically into design processes. I welcome your thoughts!