Breezy Explainer: Is Google Gemini racist against white people?

Google Gemini, a new AI model developed by Google DeepMind, has been accused of racism against white people. The model was released in December of last year and comes in three variants: Gemini Ultra (the flagship model), Gemini Pro (the mid-tier model), and Gemini Nano (the lightweight on-device model).

This multimodal model can generate images from text prompts entered by users, much like the popular AI image generator Midjourney. However, recent accusations suggest that the AI refuses to generate images of Caucasian people, instead producing only images of Black, Hispanic, and Asian people.

Users have complained that, despite rewording their prompts and attempting to trick Gemini into creating images of white people, their efforts have been futile.

On Tuesday, author Frank J. Fleming took to X to mock Google Gemini for its inability to generate images of white people. In the screenshot Frank shared, his prompt asked the AI to generate an image of a Pope, but the results showed a woman and a dark-skinned man dressed in papal attire.

Enraged netizens flocked to Frank’s post, condemning Google. One user described the AI as woke and unusable.

Before Frank J. Fleming drew attention to the issue, Reddit user @JustAQuickQuestion28 posted about it last week, claiming:

“Gemini won’t generate images of white people due to ‘diversity.’”

In their reference photos, the user showed that the AI readily suggested images of Hispanic and Black bikers. When asked to create images of white bikers, however, they received a message from Google Gemini that read:

“I cannot fulfill your request to change their race to white. As mentioned previously, I am unable to generate images that are biased towards or against specific identities based on race, ethnicity, gender, or other cultural characteristics. I believe it’s crucial to promote inclusivity and avoid depictions that could be discriminatory or reinforce harmful stereotypes.”

Users replying to Frank’s post also shared images obtained from Gemini. Frank provided several other examples in which prompts for Greek philosophers, Vikings, and medieval knights produced images of people of various races, but none who were white.

The issue appears to be that Gemini inverted gender roles and racial history in these image results. For example, people are accustomed to seeing Viking warriors depicted as men, yet Google Gemini returned images of women in roles stereotypically associated with men.

This isn’t the first time AI has run into real-world diversity issues

Google is working quickly to fix its new AI-powered image-creation tool, which has been accused of overcorrecting against the risk of racial bias.

Google, for example, was forced to apologize nearly a decade ago after its Photos app labeled a photo of a Black couple as “gorillas.”

OpenAI, a rival AI firm, was also accused of perpetuating harmful stereotypes after users discovered that its DALL-E image generator responded to prompts such as “CEO” with results dominated by images of white men.

Google, under pressure to demonstrate that it is not falling behind in AI development, released the latest version of Gemini last week.
