Doctors cannot tell a person’s race from medical images such as X-rays and CT scans. But a team including MIT researchers was able to train a deep learning model to identify patients as white, black, or Asian (according to their own self-description) just by analyzing such images — and the researchers still can’t figure out how the model does it.
After considering variables including differences in anatomy, bone density, and image resolution, the team “could not come close to identifying a good proxy for this task,” said paper co-author Marzyeh Ghassemi, PhD ’17, an assistant professor in EECS and the Institute for Medical Engineering and Science (IMES).
This is worrying because doctors use algorithms to help make decisions such as whether a patient is a candidate for chemotherapy or an intensive care unit, the researchers say. These findings raise the possibility that algorithms are “looking at your race, ethnicity, gender, whether you are incarcerated or not — even if all that information is hidden,” said co-author Leo Anthony Celi, SM ’09, a principal research scientist at IMES and an associate professor at Harvard Medical School.
Celi suggests that clinicians and computer scientists should look to social scientists for more insight. “We need another group of experts to come in and provide input and feedback on how we design, develop, implement, and evaluate these algorithms,” he said. “We also need to ask data scientists, before any data exploration: Is there a disparity? Which patient groups are marginalized? What is the driving force behind those disparities?”
Algorithms often have access to information that humans do not, which means experts must work to understand their unintended consequences. Otherwise, there is no way to prevent algorithms from perpetuating existing biases in medical care.