Headlines featuring phrases like “AI beats doctors” or “AI beats radiologists” are capturing the public imagination, boasting the impressive abilities of AI algorithms in potentially life-saving tasks such as detecting breast cancer or predicting heart disease.
While such headlines reveal how much exciting progress has been made in AI, in a way they are also misleading – because they suggest a false dichotomy.
It’s true that an AI algorithm must perform on par or even better than a human expert on a specific task in order to be of value in healthcare. But it’s not actually a matter of one “beating” the other.
A lot of the decisions that clinicians make on a daily basis are incredibly complex, and, as I will argue, require more than an AI- or data-driven approach alone. Both clinicians and AI have their unique strengths and limitations. It’s their joint intelligence that counts if we want to make a meaningful difference to patient care.
Here’s why.
The relationship between machine learning (AI) and humans is, and will continue to be, symbiotic. AI can identify patterns in objects, language, and images that humans may miss. Humans, in turn, cross-check the results of machine learning and catch its errors.
Two pairs of eyes see more than one – the need for human oversight

We know that even the most experienced clinicians may overlook things; missed or erroneous diagnoses still occur. In fact, a 2015 report by the National Academies of Sciences, Engineering, and Medicine found that most people will experience at least one diagnostic error in their lifetime [6]. The truth is, however, that AI algorithms will never be infallible either. They may err too, even if only in 1 out of 100 cases. Clearly, that is a risk we need to mitigate when an algorithm is used to assess whether a patient is likely to have a malignant tumor.
That’s why human oversight is essential, even when an algorithm is highly accurate. Applying two pairs of eyes to the same image – one human, one AI – can create a safety net, with the clinician and the AI each compensating for what the other may overlook.
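As a back-of-envelope illustration of this safety net, consider two readers with hypothetical, independent miss rates (the 5% and 1% figures below are illustrative, not from the article; real human and AI reads are correlated, so this is an upper bound on the benefit):

```python
# Illustrative "two pairs of eyes" arithmetic with made-up miss rates.
human_miss = 0.05   # clinician misses 5 in 100 findings (assumed)
ai_miss = 0.01      # algorithm misses 1 in 100 findings (assumed)

# Under independence, a finding slips through only if BOTH readers miss it.
joint_miss = human_miss * ai_miss
print(f"joint miss rate: {joint_miss:.4%}")
```

Even under these rough assumptions, the combined miss rate drops by two orders of magnitude compared with either reader alone, which is the intuition behind the four-eyes principle.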
[Image: AI can help track the volume of brain tumors over time by extracting information from MRI images]
It’s a compelling example of the power of human-AI partnership. And there are other collaborative models that rest on the same four-eyes principle. AI can act as a first reader, as a second reader, or as a triage tool that helps prioritize worklists based on the suspiciousness of findings [8]. In other words, AI can flag images that deserve human interpretation, improving efficiency by setting aside 'normal' images and forwarding suspicious ones to the physician. This has been done for many years with electrocardiograms.
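A minimal sketch of the triage idea described above, with entirely hypothetical study IDs, suspicion scores, and threshold (none of these come from the article or any real product):

```python
# Hypothetical AI-as-triage sketch: route studies by a model's suspicion
# score so clinicians read the most suspicious cases first.

def triage(studies, threshold=0.5):
    """Split studies into a prioritized clinician worklist and a
    low-priority queue, based on an AI suspicion score."""
    suspicious = [s for s in studies if s["score"] >= threshold]
    routine = [s for s in studies if s["score"] < threshold]
    # Most suspicious findings go to the top of the worklist.
    suspicious.sort(key=lambda s: s["score"], reverse=True)
    return suspicious, routine

studies = [
    {"id": "ecg-001", "score": 0.12},
    {"id": "ecg-002", "score": 0.91},
    {"id": "ecg-003", "score": 0.67},
]
worklist, low_priority = triage(studies)
print([s["id"] for s in worklist])      # suspicious cases, highest first
print([s["id"] for s in low_priority])  # 'normal'-looking cases
```

Note that in practice the low-priority queue would still be reviewed, just later; the clinician remains the final arbiter for every study.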
Deep learning algorithms allow us to see more than we ever could with our own eyes. But unlike human experts, they do not develop a true understanding of the world.
[Image: Camouflage graffiti and art stickers cause a neural network to misclassify stop signs as Speed Limit 45 signs or yield signs]
A cautionary example from the AI literature is a study into visual recognition of traffic signs. An algorithm was trained to recognize STOP signs in a lab setting and did so admirably well – until it encountered STOP signs that were partly covered with graffiti or stickers. Suddenly, the otherwise accurate algorithm could be tricked into misclassifying the image – mistaking it for a Speed Limit 45 sign in 67% of cases when the sign was covered with graffiti, and even in 100% of cases when it was covered with stickers [9].
Why AI and deep clinical knowledge need to go hand in hand in healthcare - Blog | Philips