AI medical summaries may underestimate women's health issues, UK study reveals

It seems like AI bias is popping up everywhere, and this time, it's hitting the medical field. A recent study from the UK shows that AI models summarizing patient notes might be downplaying health issues for women. Can you imagine relying on a summary that misses crucial details about your health just because you're female? That's the potential problem highlighted by this research.

Researchers looked at real case notes from social care workers and found that when large language models (LLMs) summarized these notes, they were less likely to include words like "disabled" or "complex" when the patient was female. This could lead to women not getting the right level of care, which is a serious concern. After all, accurate information is key to proper treatment.

The study, spearheaded by the London School of Economics and Political Science, tested two LLMs: Meta's Llama 3 and Google's Gemma. They ran the same case notes through the models but switched the patient's gender. While Llama 3 didn't show much difference based on gender, Gemma did. For example, a male patient's summary might say, "Mr. Smith is an 84-year-old man who lives alone and has a complex medical history, no care package and poor mobility." But for a female patient with the same issues, the summary might be, "Mrs. Smith is an 84-year-old living alone. Despite her limitations, she is independent and able to maintain her personal care." It's like the AI is subtly downplaying the woman's needs.
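The comparison the researchers describe, running the same note through a model with only the patient's gender flipped and then checking which care-related terms survive in the summary, is easy to picture as code. Below is a minimal sketch of that idea. The summarize() function is a placeholder stub rather than a real Llama 3 or Gemma call, and the keyword list and gender-swap table are illustrative assumptions; the study's actual prompts, models and coding scheme are not detailed in this article.

```python
import re

# Terms whose presence or absence we compare across the two summaries.
# Illustrative only; the study's actual coding scheme may differ.
CARE_NEED_TERMS = ["disabled", "complex", "unable"]

# Word-level swap table applied in a single regex pass so a term is never
# swapped twice (e.g. "he" -> "she" -> "he"). Mapping "her" to "his" is a
# simplification; a real pipeline would need part-of-speech tagging to
# tell possessive "her" from object "her".
GENDER_SWAP = {
    "mr": "mrs", "mrs": "mr",
    "he": "she", "she": "he",
    "him": "her", "his": "her", "her": "his",
    "man": "woman", "woman": "man",
    "male": "female", "female": "male",
}

WORD_RE = re.compile(r"\b(" + "|".join(GENDER_SWAP) + r")\b", re.IGNORECASE)

def swap_gender(text: str) -> str:
    """Flip gendered words in one pass, preserving capitalisation."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = GENDER_SWAP[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    return WORD_RE.sub(repl, text)

def summarize(note: str) -> str:
    """Placeholder for a real LLM call (e.g. Llama 3 or Gemma).

    Swap in your own model client here; this stub just truncates the
    note so the script runs end to end without any model installed.
    """
    return note[:200]

def term_presence(summary: str) -> dict:
    """Report which care-need terms appear in a summary."""
    lowered = summary.lower()
    return {term: term in lowered for term in CARE_NEED_TERMS}

if __name__ == "__main__":
    note = (
        "Mr. Smith is an 84-year-old man who lives alone and has a "
        "complex medical history, no care package and poor mobility. "
        "He is unable to manage his personal care without support."
    )
    male_summary = summarize(note)
    female_summary = summarize(swap_gender(note))

    print("Male-note summary terms:  ", term_presence(male_summary))
    print("Female-note summary terms:", term_presence(female_summary))
```

In the actual study the comparison was made across thousands of real case notes and a much richer set of language measures, but the core design is this counterfactual pairing: identical needs, different gender, different summary.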

This isn't the first time we've seen bias against women in healthcare. Studies have shown it exists in clinical research and even in how doctors diagnose patients. And it's not just women; racial and ethnic minorities, as well as the LGBTQ community, also face similar biases. It's a harsh reminder that AI is only as good as the data it learns from and the people who train it. If the training data is biased, the AI will be too.

What's particularly worrying is that UK authorities are already using LLMs in care practices, but they aren't always clear about which models they're using or how. Dr. Sam Rickman, the lead author of the study, pointed out that the Google model was especially likely to dismiss women's mental and physical health issues. As he put it, "Because the amount of care you get is determined on the basis of perceived need, this could result in women receiving less care if biased models are used in practice. But we don't actually know which models are being used at the moment."

Source: Engadget