Saturday, March 7, 2026

STAT+: Why doctors and patients are having two totally different AI chatbot experiences

In this edition of STAT’s AI Prognosis, we take a closer look at how artificial intelligence (AI) models perform differently depending on who is using them, and the dangers those discrepancies can create. AI now touches daily life everywhere from voice assistants to predictive algorithms, and its reach keeps expanding — which makes those uneven results harder to ignore.

AI models are created and trained using vast amounts of data, with the goal of accurately predicting outcomes or making decisions. However, the data used to train these models can often be biased, leading to unequal performance for different users. This can have serious consequences, especially in the healthcare industry where AI is being used to make critical decisions about patient care.

Brittany Trang, a reporter at STAT who covers health tech, has been examining how AI models’ performance varies with the user. She has found that models can produce significantly different outcomes for different groups of people, sometimes with dangerous results.

One of the main reasons for these discrepancies is the lack of diversity in the data used to train AI models. Trang explains, “AI models are only as good as the data they are trained on. If the data is biased, the model will also be biased.” For example, if a healthcare AI model is trained on data primarily from white males, it may not accurately diagnose or treat conditions in women or people of color.

This issue is not limited to healthcare. AI models used in other industries, such as finance and criminal justice, have also been found to have biased outcomes. In some cases, this has led to discrimination and perpetuated societal inequalities.

Trang’s research has also revealed that AI models can have different levels of accuracy for different groups of people. In one study, she found that a popular AI-powered facial recognition software had a higher error rate for darker-skinned individuals and women. This could have serious consequences, especially in law enforcement where facial recognition is being used to identify suspects.

The implications of these findings are concerning, but Trang believes that there are steps that can be taken to address these issues. One solution is to increase diversity in the data used to train AI models. This means including data from different demographics and backgrounds to ensure that the model is not biased towards a particular group.
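One way to act on that advice, before any training happens, is simply to measure how each demographic group is represented in the dataset. The sketch below is a minimal, hypothetical illustration — the field name and records are invented stand-ins, not from any real study cited here:

```python
# Hypothetical sketch: measuring demographic representation in a training set
# before fitting a model. The "sex" field and the records are illustrative.
from collections import Counter

def group_shares(records, key):
    """Return each group's fraction of the dataset under the given field."""
    counts = Counter(r[key] for r in records)
    n = len(records)
    return {group: count / n for group, count in counts.items()}

training_data = [
    {"sex": "male"}, {"sex": "male"}, {"sex": "male"},
    {"sex": "male"}, {"sex": "female"},
]
print(group_shares(training_data, "sex"))
# {'male': 0.8, 'female': 0.2} -> women outnumbered 4 to 1 in this sample
```

A check like this won’t fix a skewed dataset on its own, but it makes the skew visible before the model inherits it.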

Another solution is to regularly test and evaluate AI models for bias and accuracy. Trang suggests that companies and organizations using AI should have a diverse team of experts to review and analyze the performance of the models. This will help identify any biases and ensure that the models are accurately representing all users.
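The testing step Trang describes can be as simple as disaggregating a model’s accuracy by demographic group instead of reporting one overall number. The sketch below is a minimal, assumed illustration — the group labels, predictions, and outcomes are invented, not drawn from any real audit:

```python
# Hypothetical sketch: auditing a classifier's accuracy per demographic group.
# Group labels, true outcomes, and predictions are illustrative stand-ins.
from collections import defaultdict

def per_group_accuracy(groups, y_true, y_pred):
    """Return prediction accuracy within each demographic group."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for g, yt, yp in zip(groups, y_true, y_pred):
        total[g] += 1
        if yt == yp:
            correct[g] += 1
    return {g: correct[g] / total[g] for g in total}

groups = ["A", "A", "A", "B", "B", "B"]
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1]
print(per_group_accuracy(groups, y_true, y_pred))
# Group A: 3/3 correct; group B: 1/3 correct -> a gap an aggregate score hides
```

An overall accuracy of 67% here would look acceptable, while the per-group breakdown immediately exposes that group B is being failed — exactly the kind of disparity a review team needs surfaced.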

Trang’s research also highlights the need for transparency and accountability in the development and use of AI. Companies and organizations should be open about the data used to train their models and the algorithms they use. This will not only help identify any biases but also build trust with users.

Despite these challenges, Trang remains optimistic about the potential of AI to improve our lives. She believes that with proper measures in place, AI can be a powerful tool for positive change. “We need to be aware of the potential biases in AI and actively work towards addressing them,” she says.

AI models’ performance can vary significantly from one user to another, and those gaps can carry dangerous consequences. With the right safeguards in place, though, AI can be deployed ethically and accurately. As AI becomes further woven into daily life, prioritizing diversity, transparency, and accountability is the surest path to fair outcomes for all users. Trang’s research is a reminder to stay mindful of AI’s potential harms and to take proactive steps to mitigate them.
