How will it help or hinder?

In a classic case of weighing the costs and benefits of science, researchers are grappling with the question of how AI in medicine can and should be applied to clinical patient care – despite knowing that there are examples of it putting patients’ lives at risk.

This question was central to a recent symposium held at the University of Adelaide, part of the Research Tuesdays lecture series, titled “Antidote AI”.

As artificial intelligence has grown in sophistication and usefulness, we have started to see it appear more and more in everyday life. From AI traffic control and environmental studies, to machine learning that finds the origins of Martian meteorites and reads rock art in Arnhem Land, the possibilities for AI research seem endless.

Some of the most promising and controversial uses of AI may lie in the medical field.

Doctors and AI researchers are palpably excited about the potential for artificial intelligence to assist in patient care. After all, medicine is about helping people, and its moral foundation is “do no harm”. AI is certainly part of the equation for improving our ability to treat patients in the future.

AI is certainly part of the equation for improving our ability to treat patients in the future.

Khalia Bremer, a doctoral student at Adelaide Medical School, points to several areas of medicine in which AI is already making waves. “Artificial intelligence systems are detecting critical health risks, detecting lung cancer, diagnosing diabetes, classifying skin disorders and determining the best drugs to fight neurological disease,” she says.

“We may not need to worry about machines taking over radiology just yet, but what safety concerns should we consider when machine learning meets medical science? What potential risks and harms should healthcare workers be aware of, and what solutions can we come up with to make sure this exciting field keeps evolving?” asks Bremer.

These challenges are exacerbated by the fact that “the regulatory environment is struggling to keep up” and “AI training for healthcare workers is virtually non-existent,” Bremer says.

“Artificial intelligence training for healthcare workers is virtually nonexistent.”

Khalia Bremer

A doctor by training and an AI researcher, Dr Lauren Oakden-Rayner, Senior Research Fellow at the Australian Institute for Machine Learning (AIML) at the University of Adelaide and Director of Medical Imaging Research at the Royal Adelaide Hospital, weighs the pros and cons of AI in medicine.

“How do we talk about artificial intelligence?” she asks. One way is to highlight the fact that AI systems can perform as well as, or even better than, humans. The other way is to say that AI is not intelligent at all.

“You could call these the AI ‘hype’ position and the AI ‘contrarian’ position,” Oakden-Rayner says. “People have built entire careers out of taking one of these positions now.”

Oakden-Rayner explains that both of these positions are true. But how can that be?

“You could call these the AI ‘hype’ position and the AI ‘contrarian’ position. People have built entire careers out of taking one of these positions now.”

Dr Lauren Oakden-Rayner

The problem, according to Oakden-Rayner, is the way we compare AI to humans. It is a fairly understandable baseline, given that we are human, but she insists that it only confuses the question of what AI can actually do by anthropomorphizing it.

Oakden-Rayner points to a 2015 study in comparative psychology – the study of non-human intelligence. The research showed that, for a tasty treat, pigeons can be trained to detect breast cancer in mammograms. In fact, the pigeons took only two to three days to reach expert-level performance.

Of course, no one would claim for a moment that pigeons are as smart as a trained radiologist. The birds have no idea what cancer is or what they are looking for. Morgan’s Canon – the principle that the behavior of a non-human animal should not be explained in complex psychological terms if it can instead be explained with simpler concepts – says that we should not assume a non-human intelligence is doing something intelligent if there is a simpler explanation. This certainly applies to artificial intelligence.
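To see how little “intelligence” a high accuracy score can require, consider shortcut learning. The sketch below is a purely illustrative toy (synthetic data, scikit-learn, all names invented): a classifier trained on “scans” whose labels happen to correlate with a hypothetical scanner-ID feature aces the original test distribution, then collapses the moment that shortcut breaks.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, shortcut_corr):
    """Synthetic 'scans': 20 noise features plus one scanner-ID feature.

    With high shortcut_corr, the scanner ID tracks the label, mimicking
    a dataset where most sick patients were imaged on one machine.
    """
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 20))            # uninformative "pixel" features
    agree = rng.random(n) < shortcut_corr   # how often scanner tracks label
    scanner = np.where(agree, y, 1 - y)     # the spurious scanner-ID feature
    return np.column_stack([X, scanner]), y

X_train, y_train = make_data(2000, shortcut_corr=0.95)
clf = LogisticRegression().fit(X_train, y_train)

X_same, y_same = make_data(2000, shortcut_corr=0.95)     # shortcut intact
X_broken, y_broken = make_data(2000, shortcut_corr=0.5)  # shortcut gone

print("accuracy, shortcut intact:", clf.score(X_same, y_same))     # ~0.95
print("accuracy, shortcut broken:", clf.score(X_broken, y_broken)) # ~0.50
```

The model looks expert on data like its training set, but the simpler explanation – it learned which scanner was used, not what disease looks like – is exactly the kind Morgan’s Canon tells us to check first.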

“These technologies often don’t work the way we expect.”

Dr Lauren Oakden-Rayner

Oakden-Rayner also tells of an AI that looked at a photo of a cat and correctly identified it as a cat – until a few subtly altered pixels left it completely sure it was a photo of guacamole. AI is highly sensitive to patterns, including patterns humans cannot see. A funny cat/guacamole mix-up becomes much less funny in a medical setting.
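The cat-to-guacamole flip is a classic adversarial example: a perturbation far too small for a human to notice can swing the model’s prediction entirely. As a hedged sketch of the mechanism (the model choice, random stand-in input and epsilon below are illustrative assumptions, not the system from the talk), the fast gradient sign method nudges each pixel slightly in whichever direction increases the model’s loss:

```python
import torch
import torchvision.models as models

# Pretrained ImageNet classifier; any torchvision model would do here.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

def fgsm(image, label, eps=0.01):
    """Fast gradient sign method: shift every pixel by +/-eps in the
    direction that most increases the loss for the given label."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    return (image + eps * image.grad.sign()).detach()

# Stand-in input: one 224x224 RGB "image" (a real photo would be loaded
# and normalised with the model's preprocessing transforms instead).
x = torch.rand(1, 3, 224, 224)
y = model(x).argmax(dim=1)  # the model's own original prediction

x_adv = fgsm(x, y)
print("before:", y.item(), "after:", model(x_adv).argmax(dim=1).item())
# The perturbation is invisible to a human, yet often flips the label.
```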

This leads Oakden-Rayner to ask: “Does this put patients at risk? Does this raise safety concerns?”

The answer is yes.

One of the earliest AI tools used in medicine examined mammograms – just like the pigeons. In the early 1990s, the tool was given the go-ahead for use in detecting breast cancer in hundreds of thousands of women. The decision was based on laboratory experiments showing that radiologists improved their detection rates when using the AI. Great, right?

Twenty-five years later, a 2015 study looked at the real-world application of the tool, and the results were not good. In fact, women fared worse where the tool was in use. Oakden-Rayner’s blunt takeaway is that “these technologies often don’t work the way we expect them to”.

AI tends to perform worse for the patients most at risk—in other words, the patients who need the most care.

Additionally, Oakden-Rayner notes that there are 350 AI systems on the market, but only five of them have been through clinical trials. And AI tends to perform worse for the patients most at risk – in other words, the patients who need the most care.

AI has also proven to be a problem when it comes to different demographics. Commercially available facial recognition systems have been found to perform poorly on Black faces. “Companies that really took this on board went back and overhauled their systems by training on more diverse datasets,” Oakden-Rayner notes. “These systems are now much more equal in their output. Nobody even thought of trying to do this when they were originally building the systems and bringing them to market.”
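The first step in the kind of overhaul Oakden-Rayner describes is simply to measure performance per demographic group rather than trusting one aggregate number, since an overall score can hide a badly underserved subgroup. A minimal sketch of that audit step, using made-up predictions and group labels:

```python
import numpy as np

# Hypothetical evaluation data: true labels, model predictions, and a
# demographic group tag for each example (all values invented).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 0, 0, 1, 1, 0])
group  = np.array(["a", "a", "a", "b", "b", "b", "b", "a", "a", "b"])

# Aggregate accuracy hides disparities; per-group accuracy exposes them.
print("overall:", (y_true == y_pred).mean())
for g in np.unique(group):
    mask = group == g
    print(f"group {g}:", (y_true[mask] == y_pred[mask]).mean())
```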

Even more worrisome is the algorithm used by judges in the United States to inform sentencing, bail and parole decisions by predicting an individual’s likelihood of reoffending. The system is still in use despite 2016 media reports that it was more likely to wrongly predict that a Black person would reoffend.

So where does that leave things for Oakden-Rayner?

“I am an artificial intelligence researcher,” she says. “I’m not just someone who picks holes in AI. I really like AI. I know the vast majority of my talk is about harms and risks, but the reason is that I’m a doctor, and so we need to understand what can go wrong, so we can prevent it.”

“I really like AI […] We need to understand what can go wrong, so we can prevent it.”

Dr Lauren Oakden-Rayner

The key to making AI safer, according to Oakden-Rayner, is to establish standards of practice and guidelines for the publication of clinical trials involving AI. She believes all of this is highly achievable.

Professor Lyle Palmer, Professor of Genetic Epidemiology at the University of Adelaide and Senior Research Fellow at AIML, highlights the role South Australia plays as a hub for artificial intelligence research and development.

If there’s one thing you need for good AI, he says, it’s data. Diverse data. And a lot of it. Palmer says South Australia is a prime location for large population studies thanks to the vast amount of medical history held in the state. But he also echoes Oakden-Rayner’s sentiment that these studies must include diverse samples to capture differences across demographics.

“All of this is possible. We’ve had the technology to do this for ages.”

Professor Lyle Palmer

“All of this is possible,” Palmer says excitedly. “We’ve had the technology to do this for ages.”

Palmer says this technology is especially advanced in Australia – and particularly in South Australia.

Such historical data can help researchers determine, for example, the age of disease onset, to better understand why diseases develop in different individuals.

For Palmer, AI will be critical in medicine given the “difficult times” healthcare faces, including in the drug delivery pipeline, which sees many treatments fail to reach the people who need them.

Artificial intelligence can do amazing things, but, as Oakden-Rayner warns, it is a mistake to compare it to humans. These tools are only as good as the data we feed them, and even then they can make strange mistakes because of their sensitivity to patterns.

AI will certainly change medicine, though perhaps more slowly than some have suggested. But just as the technology itself aims to care for patients, the humans who create it must ensure that it is safe and does more good than harm.