December 6, 2024

Why AI in healthcare is working but IBM failed. What can we learn from this for learning?

Author: Donald Clark

First up, this is not a hype piece that says ‘Look at this wonderful stuff in medicine, where Doctors will soon be replaced by software… the same will happen to teachers’. The lesson from AI in healthcare is that AI is useful, but not in the way many thought.
The dream, pushed primarily by IBM, was that AI would change healthcare forever. IBM saw Watson winning the game show Jeopardy as a marketing platform for what CEO Virginia Rometty called their ‘Moonshot’: a suite of healthcare applications. This all started in 2011 and was followed by billions in investment and acquisitions. That was the plan but, in practice, things turned out differently.

The mistake was to think that mining big data would produce insights, and that these would be the wellspring for progress in everything from research to diagnosis. That didn’t happen, and commercial products are few and far between. The data proved messy and difficult for NLP to use effectively. Trials in hospitals were disappointing. Diagnosis proved tricky. The Superdoctor dream of an all-round physician with access to far more knowledge and data than any human, one that would trounce professionals in the field, has had to be rethought.

Roger Schank has been a relentless critic of IBM’s misuse of the term AI. He is especially critical of their use of terms such as ‘cognitive computing’, an area he pioneered. Roger knows just how difficult these problems are to crack and sees IBM’s claims as marketing lies. He has a point. Another critic is Robert Wachter, chair of the Department of Medicine at the University of California, San Francisco, in his book The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age.

So what went wrong? IBM’s Oncology Expert Advisor product used NLP (Natural Language Processing) to summarise patient cases, then searched patient databases to recommend optimal treatments. The problem was that patient data is very messy. It is not, as one would imagine, precise readings from investigative tests; in practice, it is a ton of non-standard notes, written in jargon and shorthand. Strangely enough, Doctors, through their messy processes and communication, proved to be remarkably resilient and useful. They could pick up on clues that IBM’s products failed to spot.
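To make this concrete, here is a toy sketch of why clinical shorthand defeats naive extraction (the note and the pattern are invented for illustration, not taken from any IBM system):

```python
import re

# A structured test result parses cleanly...
structured = "HbA1c: 7.2%"
match = re.search(r"HbA1c:\s*([\d.]+)%", structured)
print(match.group(1))  # -> 7.2

# ...but a real clinical note is jargon and shorthand, and the same
# pattern finds nothing. Every clinic's shorthand differs, so rules
# and models tuned to one format break on the next.
note = "pt c/o SOB, hx T2DM, a1c 7.2 last mo, ?med compliance"
print(re.search(r"HbA1c:\s*([\d.]+)%", note))  # -> None
```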

The Oncology Expert project was also criticised for not really being an AI project. In truth, many AI projects are actually a huge, often cobbled-together mix of quite traditional software and tricks to get things to work. IBM has proved to be less of a leading-edge AI company than Google, Microsoft or Amazon. They have a sell-first, deliver-later mentality. Rather than focus on useful bullets aimed at clear and close targets, they hyped their moonshot. Unfortunately, it has yet to take off.

However…

However, and this is an important however, successes with AI in healthcare have come in more specific domains and tasks. There’s a lot to cover in healthcare:

Image analysis

Pathology

Genetic analysis

Patient monitoring

Healthcare administration

Mental health

Surgery

Clinical decision making

In the interpretation of imagery, where data sets are visual and biopsy confirmation is available, success is starting to flow – mammograms, retina scans, pathology slides, X-rays and so on. This is good, classifiable data.
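As a minimal sketch of why this kind of data suits supervised learning (a hypothetical PyTorch example with stand-in data, not any production system):

```python
import torch
import torch.nn as nn

# Minimal binary classifier for fixed-size greyscale scans.
# Labels come from biopsy-confirmed ground truth: 0 = benign, 1 = malignant.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 2),  # assumes 224x224 input images
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a (hypothetical) batch of labelled scans.
images = torch.randn(8, 1, 224, 224)   # stand-in for real scan images
labels = torch.randint(0, 2, (8,))     # stand-in biopsy labels
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The point is the shape of the problem: a fixed input, a clean label, and a bounded output – exactly the ‘good, classifiable data’ the paragraph above describes.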

Beyond scanned images, pathology slides are still largely examined by eye, but image recognition can do this much faster and will, in time, do it with more accuracy.

One of IBM’s rare successes has been Sugar.IQ, an FDA-approved app launched in 2018. It delivers personalised patient support for diabetes by monitoring glucose levels and giving recommendations on diet, lifestyle and medication. It is this narrow domain, with clear input and defined, personalised outputs for patients, that marks it out as a success. It is here that the real leverage of AI can be applied – in personalised patient delivery.
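The ‘narrow domain, clear input, defined output’ pattern can be sketched in a few lines (thresholds and wording are illustrative only – not medical guidance and not Sugar.IQ’s actual logic):

```python
def recommend(glucose_mg_dl: float) -> str:
    # One clean input (a glucose reading), one bounded set of outputs.
    # Illustrative thresholds only, not medical advice.
    if glucose_mg_dl < 70:
        return "Low glucose: take fast-acting carbs and re-test in 15 minutes."
    if glucose_mg_dl <= 180:
        return "In range: no action needed."
    return "High glucose: check insulin dosing and log recent meals."

print(recommend(65))   # Low glucose: ...
print(recommend(120))  # In range: ...
```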

Another success has been genome analysis, which is becoming more common: a precise domain with exact data means that the input is clean. Watson for Genomics lists a patient’s genetic mutations and recommends treatments. This is another case of limited-domain input with sensible, measured outputs that can really help oncologists treat patients.
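The pattern is essentially constrained matching. A minimal sketch (the table below is illustrative only, not Watson for Genomics’ actual knowledge base or medical advice):

```python
# Known mutations map to candidate therapies (illustrative pairs).
THERAPY_MAP = {
    "EGFR L858R": ["erlotinib", "gefitinib"],
    "BRAF V600E": ["vemurafenib", "dabrafenib"],
}

def suggest(mutations):
    # Clean, exact input (mutation names) gives a bounded, measured
    # output: matched therapies or an explicit 'no match'.
    return {m: THERAPY_MAP.get(m, ["no matched therapy"]) for m in mutations}

print(suggest(["EGFR L858R", "TP53 R175H"]))
```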

Another good domain is healthcare administration, an area that is often antiquated, inefficient and expensive. There are specific tasks here that can be tackled using AI: optimising schedules, robots delivering blood and medicines within hospitals, selecting drugs in pharmacies and so on.

In mental health, rather than depending on NLP techniques, such as sentiment analysis, to scan huge amounts of messaging or text data, simple chatbots like Woebot, which delivers a daily dose of personalised CBT (cognitive behavioural therapy), are proving more promising.
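The design choice is worth spelling out: a scripted bot controls the dialogue, so it never has to deeply understand messy free text. A minimal sketch of that flow (hypothetical, not Woebot’s actual implementation):

```python
# Scripted, rule-based daily check-in in the spirit of CBT chatbots.
MOOD_REPLIES = {
    "low":  "Thanks for being honest. Can you name one thought behind that feeling?",
    "ok":   "Good to hear. What is one small thing that went well today?",
    "high": "Great! Noting what is working helps you repeat it tomorrow.",
}

def check_in(mood: str) -> str:
    # Scripted branching, not open-ended NLP: unexpected input is
    # redirected back to a small, known set of answers.
    return MOOD_REPLIES.get(mood, "Let's keep it simple: is your mood low, ok or high?")

print(check_in("low"))
```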

Robot surgery has had a lot of hype but, in practice, it really only exists at the level of laser-eye surgery and hair transplants. These are narrowly defined processes with little variation in their execution.

Where AI has not yet been successful is in the complex area of diagnosis and clinical decision making. This has proved much more difficult to crack, as AI’s need for clean data clashes with the real world of messy delivery.

So most of the low-hanging fruit lies in support functions: helping Doctors and patients, not replacing Doctors.

So what can we learn from this story about AI for learning?

Lessons learnt

There are several lessons here:

avoid the allure of big data solutions

avoid data that is messy

look for very specific problems

look for well-defined domains

AI is rarely enough on its own

focus on learners not teachers

I have been critical of the emphasis in learning (see my piece ‘On average humans have one testicle‘) on learning analytics: the idea that our problems will be solved by applying big data and machine learning to gain insights that diagnose students at risk of dropout. This is largely baloney. The data is messy and the promises often ridiculous. Above all, the data is small, so stick to a spreadsheet.

Way forward
Let’s not set off like a cartoon character running off the cliff, finding ourselves looking around in mid-air, then plummeting to earth. Let’s be careful with the hype and big data promises. Let us be honest about how messy our data is and how difficult it is to manage. Let us instead look for clear problems with clear solutions that use areas of AI we know work: text to speech, speech to text, image recognition, entity analysis, semantic analysis and NLP for dialogue. We’ve been using all of these in WildFire to deliver learning experiences that are firmly based on good cognitive science, while being automated by AI – high-retention learning experiences created in minutes, not months.
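As a flavour of the entity-analysis approach, here is a minimal sketch using spaCy to turn source text into open-input retrieval practice (a hypothetical illustration, not WildFire’s actual pipeline):

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

text = "Watson won Jeopardy in 2011, and IBM invested billions in healthcare."
doc = nlp(text)

# Blank out each named entity so the learner must recall it, not recognise it.
for ent in doc.ents:
    prompt = text.replace(ent.text, "_" * len(ent.text))
    print(f"{prompt}  (answer: {ent.text})")
```

This is the same lesson as healthcare: a well-defined task (find entities, blank them out) in a well-defined domain (the course text itself), rather than a big-data moonshot.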


Conclusion
I have no doubts about AI improving the delivery of healthcare. I also have no doubts about its ability to deliver in education and training. What is necessary is a realistic definition of problems and solutions. Let’s not be distracted by blue-sky moonshots and instead focus on grounded problems.
