December 6, 2024

Some questions to use when discussing why we shouldn’t replace humans with AI (artificial intelligence) for learning

Author: dave

I struggle to have good conversations about my concerns with artificial intelligence as a learning tool.

I ended up in an excellent chat with my colleague Lawrie Phipps discussing the last 10 years of the Post-Digital conversation, and found myself in a bit of a rant about AI tutors. Like many, I have had a vague sense of discomfort around thinking of AI as something that replaces humans in the ways we validate what it means to know in our society. I don’t particularly care ‘exactly’ what a person knows. No one, in any profession or field of knowledge, is going to be able to ‘remember’ every facet of a particular issue… or to balance all its subtleties. We are all imperfect knowers. But what it means to know, on the other hand, what we can look at as ‘the quality that makes you someone who knows about that’, lives at the foundation of what it means to be human… and every generation gets to involve itself in defining it for its era.

There are many different angles from which to approach this discussion. These questions, and the thoughts that follow, are my attempt to provide some structure for thinking about the impact of AI on our learning culture.

  • What does it mean to know?
  • How does a learner know what they want to know?
  • What’s AI really?
  • Who decides what a learner needs to learn when AI is only perceiving the learner?
  • What does it mean when AI perceives what it means to know in a field?
  • What are the implications if AI perceives both the learner and what it means to know?

Why “what it means to know” matters
It is my belief that deep at the bottom of most debates around how we should do education is a lack of clarity about what it actually means to ‘learn’ something or, more to the point, to ‘know’ something. Educators and philosophers have been arguing this point for millennia, and I will not try to rehash the whole thing here, but suffice it to say that our current education models are a little conflicted about it. All of our design and most of our assessments are created in an effort to help people know things… and yet there is no clear agreement in education on what learning actually is. It’s complex, which is the problem. Learning totally depends on what the learning is intended for. I may be only average at parallel parking, and I don’t really remember any of the bits of information I was taught many moons ago when I learned… but I can mostly park… so I ‘know how’. Would I pass a test on it? Probably not… but I didn’t hit any cars, so I ‘know how’. I’m clear about what my goal is and therefore the judgement, for me, is fine. The guy behind me last week who didn’t like the fact that I didn’t signal properly to parallel park is POSITIVE that I don’t know how to park. He told me so. Do I know how?

What does it mean to learn in all situations? I don’t know. What I believe is that each time you enter into a learning situation you have to ask yourself (and ideally get students to ask themselves) “what does it mean for them to ‘know’?”

Are you going to talk about the ‘real’ AI?
Anytime an educator gets involved in a discussion about AI with a computer scientist, you can pretty much be guaranteed that the sentence “that’s not REALLY AI” is going to follow. From Wikipedia,

Computer science defines AI research as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.

Using this simplistic definition, then, artificial intelligence

  1. perceives the learner
  2. perceives knowledge in a field
  3. perceives the learner and knowledge in a field

and takes action based on this to help the learner successfully achieve their goal.
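
Read literally, that definition is just a loop: perceive something about the learner and/or the field, then pick whichever action seems most likely to move things toward the goal. Here is a purely illustrative sketch in Python of a “tutor” reduced to that definition; all of the names and the scoring function are my own invention, not any real system’s:

    # Hypothetical "AI tutor" reduced to the perceive/act definition above.
    # Nothing here is a real product; it's the Wikipedia definition made literal.
    def tutor_step(learner_state, possible_actions, estimate_success):
        """Perceive the learner, then take the action that maximizes the
        estimated chance of the learner 'successfully achieving their goal'."""
        # 'perceives its environment': whatever the system can actually see of the learner
        perception = learner_state
        # 'takes actions that maximize its chance': pick the highest-scoring action
        return max(possible_actions,
                   key=lambda action: estimate_success(perception, action))

Everything interesting is hidden inside estimate_success: someone has to decide what counts as the learner’s goal and what counts as getting closer to it.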

AI and students knowing what to ask
On multiple, painful occasions I have asked students what they wanted to learn in my class. I have tried to follow the great andragogists by having students define for themselves, in a learning contract, what they want to learn at the beginning of a course. I have mostly failed… I now wait until we’ve got a good 8 hours behind us before I enter into any of these kinds of discussions. I then set aside time for those things to be renegotiated as learners get deeper into a given field of knowing. You don’t know what you don’t know… and if you don’t know, you can’t ask good questions about what to know.

So. Students come to classes (often) because they don’t know something and have some desire to know it. We, as providers of education services, ostensibly have some plan for how they are going to come to know that thing. Learners coming to a classroom often

  1. don’t know what they need to know
  2. want to know things that aren’t knowable
  3. think they know things that are actually wrong
  4. want to know things that there are multiple reasonable answers to
  5. know actual things that are useful
  6. don’t really want to know some things
  7. and… and… and…

And each one of those students comes to your class with a different set of these qualities. What is a learner’s goal when they start the learning process? I don’t know. No one does.

AI perceives the learner
So. What does this mean for AI? If AI is following the input from a student and deciding what to give them next… how is it supposed to respond to this much complexity in the learner? It does so by simplifying that complexity. And, I would argue, there are some very concerning implications to that.

I can understand how AI can perceive a learner and adapt by giving them a particular resource based on their responses. It can use the responses of other people who’ve been in the same situation to make reasonable recommendations: 72% of people who answered this question incorrectly improved when given this resource; of those that remained, some were helped by this other resource… etc…
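
A toy version of that kind of adaptation, just to show how mechanical it is (the question, resource names and all but the 72% figure are invented for illustration):

    # Hypothetical adaptive step: for a given wrong answer, recommend the
    # resource that historically 'fixed' the largest share of learners who
    # gave that same wrong answer. The data below is made up.
    improvement_rates = {
        ("question_4", "wrong_answer_b"): {
            "video_on_factoring": 0.72,      # "72% improved when given this resource"
            "worked_example_set": 0.41,
            "peer_discussion_prompt": 0.18,
        },
    }

    def recommend(question, wrong_answer):
        options = improvement_rates[(question, wrong_answer)]
        return max(options, key=options.get)

    print(recommend("question_4", "wrong_answer_b"))  # -> "video_on_factoring"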

But in this case, and I can’t emphasize this enough, someone behind the algorithm has DECIDED what the correct answers are. If we’re solving a quadratic equation, I’m less concerned about this, but if we’re training people to be good managers, I become very concerned about it. What’s a good manager? It totally, totally depends. It’s going to be different for different people. Helping people come to know in a complex knowledge space is a combination of their own experience, their exploration of other people’s experience, and mentorship. Imagine a teacher asking an algorithm how to teach properly. Whose answers to that question is the teacher going to get?

AI perceives knowledge in a field
Let’s say that I want to make dovetail box joints.


From a simple perspective, making this joint is easy:

  1. draw on the flat side of the board
  2. cut that out
  3. put that board on the edge of the other board
  4. cut that and they fit together.

That is a straight-up answer to the question “how do I make a dovetail joint?”, but it probably doesn’t leave you able to actually make a dovetail joint. So let’s imagine an AI system that rifles through every YouTube video with the word ‘dovetail’ in it, can exclude all the videos about birds that come back from that search, and can actually see and understand every technique used in those videos. Now, let’s assume that it judges the value of those videos based on likes, total views, and positive comments that refer to ‘quality of instruction’. (I have built a very smart algorithm here)
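
To make the value judgements baked into that ‘very smart algorithm’ visible, here is roughly what its ranking step could look like. The weights and field names are mine, and that is exactly the point: someone has to pick them.

    # Hypothetical ranking for the dovetail example: score each video by
    # likes, total views, and comments that praise the instruction.
    # The weights are arbitrary choices; whoever sets them is deciding
    # what 'good instruction' means.
    def score_video(video, w_likes=1.0, w_views=0.001, w_praise=5.0):
        praise = sum(1 for c in video["comments"]
                     if "quality of instruction" in c.lower())
        return (w_likes * video["likes"]
                + w_views * video["views"]
                + w_praise * praise)

    def exemplar_videos(videos, top_n=10):
        # the 'must dos' come from whatever floats to the top of this list
        return sorted(videos, key=score_video, reverse=True)[:top_n]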

My algorithm is going to come up with a list of ‘must-dos’ that are related to the creation of dovetails. Those must-dos will be a little slanted towards carpenters who are appealing or entertaining. It will also lean towards carpenters who show the easiest way to do something (not necessarily the best way). Once people know about the algorithm, of course, those who want more hits and attention will start to adjust their approaches to make them more likely to be found and used as exemplar resources by the algorithm. And… you can see a whole field slowly drifting towards the simplistic and the easy and away from craft. You could imagine other kinds of drift based on an algorithm with different values.

Now… that’s me talking about carpentry. Imagine the same scenario when we are talking about ethics… or social presence. It gets a bit more concerning. Algorithms will privilege some forms of ‘knowing’ over others, and the person writing that algorithm is going to get to decide what it means to know… not precisely, like in the former example, but through their values. If they value knowledge that is popular, then knowledge slowly drifts towards knowledge that is popular.

AI perceives the learner and knowledge in the field
Let’s put those two things together and imagine a perfect machine that both senses the learner and senses knowledge in a field. How is the feedback loop between students who may not know what they want and an algorithm that privileges ‘likeable knowledge’ over other kinds of knowing going to impact what students learn? How is it going to impact what it MEANS to know?

One last question.
What is the added value of having an algorithm process YouTube videos instead of you actually watching them?
