AI algorithms show bias in screening depression among Black individuals, study finds


We already know that artificial intelligence can be biased; several recent cases have demonstrated this. But just how biased can technology be?

According to a new study: very. Researchers used artificial intelligence to analyze social media posts to see whether the system could detect indicators of depression. The results speak for themselves.

Biased AI

The study found that, while artificial intelligence (AI) can detect indicators of depression in white Americans' social media posts, it is far less successful for Black people: the models were more than three times less predictive when applied to Black people's Facebook posts than to white people's.

“Race seems to have been especially neglected in work on language-based assessment of mental illness,” wrote the authors of the study published in the Proceedings of the National Academy of Sciences (PNAS).

Research methodology: ‘I’ talk

For their study, the researchers used an "off-the-shelf" AI tool to analyze the language of 868 volunteers, an equal number of Black and white adults of similar ages and genders.

Participants were also asked to complete a depression screening questionnaire that is widely used by healthcare providers.

According to prior research on social media posts, people who frequently use first-person singular pronouns such as "I," "me," and "mine," as well as self-deprecating words or concepts, are more likely to suffer from depression.
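The pronoun-based signal described above can be illustrated with a minimal sketch. The study's actual pipeline is not described here, so the function name, pronoun list, and tokenization below are illustrative assumptions, not the researchers' method:

```python
import re

# Illustrative first-person singular pronoun set (an assumption, not the study's list)
FIRST_PERSON = {"i", "me", "my", "mine", "myself"}

def first_person_rate(post: str) -> float:
    """Fraction of words in a post that are first-person singular pronouns."""
    words = re.findall(r"[a-z']+", post.lower())
    if not words:
        return 0.0
    return sum(w in FIRST_PERSON for w in words) / len(words)

print(first_person_rate("I feel like I always let everyone down."))  # 0.25
```

A real language-based screening model would combine many such features; the study's point is that even this kind of well-replicated signal did not generalize across racial groups.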

Interestingly, the study found that the use of first-person pronouns and self-deprecating language was associated with depression only among the white participants.

Study co-author Sharath Chandra Guntuku said that they “were surprised that these language associations found in numerous prior studies didn’t apply across the board.”

While he acknowledged that social media data alone cannot diagnose depression, he said it could help assess depression risk for individuals or communities.
