When the coronavirus spread violently through the nation, disproportionately burdening Black communities, it was in some grim way the perfect reminder of why Dr. Noha Aboelata had founded her community clinic in East Oakland, California, more than a decade prior. 

In Alameda County, where her clinic is located, Black folks were dying at a rate much higher than white folks. That winter, Aboelata came across an article in the New England Journal of Medicine. It laid out how pulse oximeters, the algorithm-powered devices relied upon to make split-second emergency care decisions as hospitals overflowed and the health system teetered on the edge of being overwhelmed, weren’t accurately reading the blood oxygen levels of Black patients.

EMTs used the readings to decide whether patients were sick enough to be admitted to the hospital. Emergency room staff used them to make quick decisions on whether to keep a patient or send them home. Patients used them in their homes to monitor their own health.

What was less known at the time was how the devices overestimated the blood oxygen levels, and therefore the health, of Black patients, raising the question of how big a role they played in COVID-19’s disparate outcomes.

“I just saw red,” said Aboelata, the founding CEO of Roots Community Health Center. “I had no idea.”

She called her colleagues after reading the research. “No one was aware.” 

So the clinic sounded the alarm in California, taking aim at anyone who makes, sells, or distributes the devices. Three years later, the clinic filed a lawsuit against Walgreens, CVS, GE Healthcare, Masimo, and other leading manufacturers and sellers of pulse oximeters, seeking to establish some form of accountability and raise awareness. They’re hoping that, in response, the devices will no longer be sold in the state. They’re also calling for increased regulation by the federal government and, at a minimum, a warning label placed on the devices for consumers who are unaware of the possible ramifications of a misreading.

With no court date set yet, the companies named in the suit have yet to respond.

It’s among a number of lawsuits adding pressure and urgency to questions of accountability when medical innovations go awry.

The question that lingers at the intersection of health care, algorithms, and artificial intelligence is who’s responsible when racism is baked into the technology and worsens the racial disparities already saturating America’s health system. The answer is complex. Experts who study law, health, and tech say it’s an issue with many layers, from the diligence of the developers who build the technologies, to the providers who use them, to the pharmacies that sell inaccurate devices, to the federal agencies charged with regulating medicines and medical devices.

In its most ideal form, experts hoped, artificial intelligence could be trained to be less biased than a physician.

What actually unfolded was the introduction of a string of algorithms and medical innovations that perpetuated existing racial health disparities, from less accurate cancer diagnoses to worse assessments of stroke risk for Black folks. Could artificial intelligence and algorithms actually just be more efficient at being biased? Last year, a Los Angeles barber sued an affiliate of a local California hospital, saying its use of the eGFR, an algorithm that estimates kidney function, placed him lower on the transplant list. The algorithm’s equations estimate Black patients’ kidney function as higher than it actually is, which can effectively drop them down organ transplant lists or suggest other interventions are less urgent than they really are.
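
To make concrete how a single race coefficient can shift a clinical estimate, here is a minimal sketch in Python of the 2009 CKD-EPI creatinine equation, the eGFR formula widely used in U.S. health systems at the time. The patient values below are hypothetical and the function name is illustrative, not taken from any medical software library; a 2021 refit of the equation removed the race term.

```python
def egfr_ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) per the 2009 CKD-EPI creatinine equation.

    The equation multiplies the result by 1.159 when a patient is recorded as
    Black, raising the estimate and making kidney disease appear less advanced.
    """
    kappa = 0.7 if female else 0.9        # sex-specific creatinine cutoff
    alpha = -0.329 if female else -0.411  # sex-specific exponent
    gfr = (
        141
        * min(scr_mg_dl / kappa, 1) ** alpha
        * max(scr_mg_dl / kappa, 1) ** -1.209
        * 0.993 ** age
    )
    if female:
        gfr *= 1.018
    if black:
        gfr *= 1.159  # the race coefficient at issue: same labs, higher estimate
    return gfr

# Hypothetical patient: identical labs and age, only the recorded race differs.
# The second estimate comes out roughly 16 percent higher by construction.
print(round(egfr_ckd_epi_2009(scr_mg_dl=1.4, age=55, female=False, black=False), 1))
print(round(egfr_ckd_epi_2009(scr_mg_dl=1.4, age=55, female=False, black=True), 1))
```

On identical labs, the race term alone lifts the estimate by about 16 percent, the kind of shift that can push a patient’s number across the cutoffs used for specialist referrals and transplant waitlisting.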

Pulse oximeters took center stage in the throes of the COVID-19 pandemic, when the line between life, death, and medical racism became too clear to ignore. The Roots clinic is looking for more action from the U.S. Food and Drug Administration. 

“It felt like something of an enormous magnitude,” Aboelata said. “Someone needed to be accountable.”

But digging into accountability requires rewinding to how the algorithms and artificial intelligence were developed, and that begins with a question of data quality. How is that data collected? Where is the data coming from? Who’s missing from the data that developers are using to make their technologies?

“The algorithms are only as good as the data that’s input,” said Fay Cobb Payton, a professor of information systems and technology at North Carolina State University.

For pulse oximeters, that means looking at whom the devices were tested on. They work well on the lighter skin tones that made up the test groups, but in practice they have been found to work less well on darker complexions, which were largely left out of testing.

It’s not new for data to be collected in some spaces and not others. Exactly how that pattern has played out has shifted over the decades. Black patients were once nonconsensually experimented on to test medical innovations that white communities would gain more access to, from the stolen cells of Henrietta Lacks to the procedures J. Marion Sims conducted on enslaved Black women without anesthesia.

Today, that looks different. White Americans are overrepresented in clinical trials, meaning various drugs, devices, and treatments are proven effective for them while Black patients are left out. That leads to devices like the pulse oximeter being less accurate on darker skin.

Earlier this month, an FDA panel recommended more diversity in trials.

Experts say it matters whether the data is being collected from major hospitals like Massachusetts General compared with local community clinics and less wealthy facilities. The patient makeup is different, resulting in a dataset with potentially different characteristics, said Nicholson Price, a professor of law at the University of Michigan. 

There’s also a level of responsibility that lies with the providers who decide to use such algorithms, artificial intelligence, and devices, some say. Others are pushing back.

“Providers are employees of hospitals. They didn’t buy the pulse oximeters that the hospitals use,” said Kadija Ferryman, who studies race, ethics, and policy in health technology. They don’t have much control over the tools they use, she said. And even among physicians who practice privately, outside of hospital settings, Ferryman said the market is saturated with the devices that work less well. 

It’s a structural issue, she said: “It’s really hard to pin the blame on one particular group.”

The FDA has shouldered much of the heat when it comes to monitoring and regulating technology in health care, and the racism it might intensify. In November, a group of 25 state attorneys general sent a letter urging the agency to take action on pulse oximeters. It came a year after the FDA hosted a public meeting on the issue and almost 21 months after the agency issued a communication about the devices’ inaccuracies.

“It is imperative that the FDA act now to prevent additional severe illness and mortalities,” the letter reads. 

Since 2019, said Ferryman, the agency has been collecting public comment and drafting proposed guidelines around algorithmic devices. It’s clear that FDA officials understand it’s something they must regulate, she said. 

“How much they can address it is another story,” Ferryman said. 

Part of it is a labor force issue: it’s hard to find artificial intelligence experts who know these devices well enough to regulate them, and people with that expertise are in high demand within health care.

Exactly how to solve the issue of technological racism in medicine is complicated. 

“We have a problem,” said Aboelata, the clinic founder. “It’s a problem of a pretty large magnitude.”

Margo Snipe is a health reporter at Capital B. Twitter @margoasnipe