The Robot Will See You Now: U of T experts on the revolution of artificial intelligence in medicine
“AI will never replace the doctor,” Michael Brudno says, but it will “go a long way to simplifying doctors’ workflows”
Make room, stethoscope and otoscope. Artificial intelligence (AI) applications are increasingly among the physician’s standard instruments, experts at the University of Toronto say.
“With electronic records, you can use text algorithms to read a patient’s history, review their genetic predispositions, and correlate the information to make predictions,” says Dr. Frank Rudzicz.
Rudzicz is one of five experts exploring the issues of privacy, accuracy and accountability at The Robot Will See You Now – the Revolution in Artificial Intelligence and Medicine, at U of T on April 5.
A research scientist with the Toronto Rehab Institute and an assistant professor (status only) in the department of computer science at the University of Toronto, Rudzicz is also a project lead within a federally funded national research network in technology and aging known as AGE-WELL NCE.
The acronym stands for Aging Gracefully across Environments using Technology to Support Wellness, Engagement and Long Life Network Centre of Excellence Inc.
As part of AGE-WELL, Rudzicz is developing an application for caregivers, connecting them to information and to resources in their community that can help them with their loved ones.
“If someone notices their spouse is losing their memory, or falling down more often, instead of going to Dr. Google, they would use our tool,” says Rudzicz, who has assembled an expert panel for a public AGE-WELL discussion on the use of artificial intelligence in medicine, April 5.
“It’s a good opportunity to bring together people with different backgrounds to talk about a general topic that’s going to be very important in the near future,” says Rudzicz.
Medical practice relies on human interaction and communication between patients and doctors. Integrating AI successfully into that nuanced setting creates intriguing challenges for researchers.
Panelist Graeme Hirst, a professor in the department of computer science, researches and teaches natural language processing – how language is used in the real world, and how computers can analyze it and retrieve information from it.
“If we imagine some kind of AI system that asks: ‘What seems to be the problem?’ the patient has to respond to that, presumably, in natural language. And the AI has to talk to them in natural language. It has to deal with all the issues of complex conversation – and health communication,” says Hirst.
How would people talk to these medical AIs? Would they tend to naturally speak in full sentences? Or would they talk to the AI as if it were a Google search: spots rash can’t sleep?
“The good clinician asks, at the end of a visit, ‘Is there anything else?’ to elicit issues that the patient is reluctant to disclose, but which were, perhaps, the real purpose of the visit,” Hirst says. “The question is easy to program, but it reflects a situation that is conversationally and linguistically difficult. Our AI medical systems are going to have to be supportive of difficult communication, too.”
Hirst has developed methods for detecting cognitive decline, including Alzheimer’s disease, by examining linguistic changes in a person’s writing over time.
“Our more recent work on detecting Alzheimer’s from changes in language suggests you don’t have to look back over 30 years of language; you can also look at many synchronic aspects of speech and writing.”
But imagine writing on a word processor, he suggests, and suddenly a pop-up message appears and reads: Your vocabulary and syntax have declined; you should make an appointment with your family doctor and get a neurological check-up.
“That’s not the way to tell someone they might have Alzheimer’s disease!” he says. “But the idea of a process that was watching over you, is still a serious one, even if we’re not quite sure how to implement it. Is a word processor or a computer the right way to do it? There are difficult ethical issues there.”
Panelist Michael Brudno, a professor in the department of computer science, oversees a research group that spans both computational biology and an emerging subfield of computational medicine.
“Computational biology applies to a much broader set of disciplines, from how to raise better cattle, to breeding forests that are more heat-resistant in the face of global warming,” says Brudno, who works closely with clinicians at SickKids’ division of clinical and metabolic genetics.
“Computational medicine is about the application to patients, and to human health.”
The field draws not only from AI, but from all areas of computer science. Data about a patient must be stored securely within a database. Effective computer-human interaction and natural language processing tools are needed to enter the data. If the patient has had imaging tests, techniques from computer vision are important in order to find the salient features and apply predictive techniques. Drawing on information from thousands of previously seen patients requires machine learning-based approaches. And data visualization is necessary to explain the logic that went into the software’s decision, so that the clinician can make a final call.
“AI will never replace the doctor – not in the near foreseeable future, I think – but AI will go a long way to simplifying doctors’ workflows,” says Brudno, whose PhenoTips™ software is being used by hospitals to record patient symptoms and draw genetic family trees. He is currently building a portal, PhenomeCentral, to connect clinicians and scientists around the world who work on rare diseases.
Brudno also points to a disappearing border between clinical and research settings. For example, when you show up at the doctor’s office, there’s an implied consent to care. But your doctor may be involved in a research study and may ask whether you would be willing to participate.
“Almost all data recorded about you clinically could be useful to some research effort or another. Drawing the border becomes harder and harder.”
Adding to the ethical complexity, if an algorithm makes a diagnosis and it is wrong, who is liable for damages? The doctor? The software designer? Or the company that sold it?
“There are many ethical and legal considerations that require exploration,” says panelist Sally Bean, senior ethicist and policy advisor at Sunnybrook hospital and member of U of T’s Joint Centre for Bioethics.
In her role as an ethicist, Bean advises on ethical issues that arise across the healthcare continuum, ranging from bedside care to broader issues affecting the organization or health system.
“Insofar as AI alters the context of the traditional face-to-face, trust-based physician-patient relationship, there is the potential for it to transform the substance of that relationship, so it is important to ensure that potential negative implications are minimized,” says Bean.
Additional panelists include George Cernile, manager of A.I. Technology and R&D at Artificial Intelligence in Medicine Inc., and Professor Ross Upshur of the department of family and community medicine and the Dalla Lana School of Public Health.
“It’s starting to seep into the public consciousness that AIs are ‘coming for our jobs’,” says Rudzicz, citing reports about self-driving cars eliminating the need for long-distance truck drivers, or taxi drivers.
“Medicine is one of those areas, too. Nothing is really safe.”