
    Why Google’s AI tool was slammed for showing images of people of colour

By KahawaTungu Editor | March 11, 2024

    America’s founding fathers depicted as Black women and Ancient Greek warriors as Asian women and men – this was the world reimagined by Google’s generative AI tool, Gemini, in late February.

The launch of the new image generation feature sent social media platforms into a flurry of intrigue and confusion. When users entered prompts to create AI-generated images of people, Gemini largely showed them results featuring people of colour – whether appropriate or not.

X users shared laughs as they repeatedly tried, and failed, to get Gemini to generate images of white people. While some instances were deemed humorous online, others, such as images of brown people wearing World War II Nazi uniforms with swastikas on them, sparked outrage, prompting Google to temporarily disable the tool.

    America's Founding Fathers, Vikings, and the Pope according to Google AI: pic.twitter.com/lw4aIKLwkp

    — End Wokeness (@EndWokeness) February 21, 2024

    Here is more about Google Gemini and the recent controversy surrounding it.

    Table of Contents

    • What is Google Gemini?
    • What sort of images did Gemini generate?
    • How does Gemini work?
    • Does generative AI have a bias problem?
    • Is this why Gemini generated inappropriate images?
    • What was the reaction to the Gemini images?
    • What was Google’s response?
    • What else did Gemini get wrong?
    • Has Google suspended Gemini?
    • How has the controversy affected Google?

    What is Google Gemini?

    Google’s first contribution to the AI race was a chatbot named Bard.

Bard was announced by Google CEO Sundar Pichai on February 6, 2023, as a conversational AI programme, or “chatbot”, capable of simulating conversation with users. It was released to the public on March 21, 2023.

Given written prompts by the user, it could churn out essays or even code, hence the label “generative AI”.


Google said that Gemini would replace Bard, and both free and paid versions of Gemini were made available to the public through its website and smartphone application. Google announced that Gemini would work with different types of input and output, including text, images and videos.

It is Gemini’s image generation feature, however, that has attracted the most attention, because of the controversy surrounding it.

    What sort of images did Gemini generate?

    Images depicting women and people of colour during historical events or in positions historically held by white men were the most controversial. For example, one render displayed a pope who was seemingly a Black woman.

In the history of the Catholic Church, there have potentially been three Black popes, with the last Black pope’s service ending in 496 AD. There is no recorded evidence of a female pope in the Vatican’s official history, but a medieval legend suggests a young woman, Pope Joan, disguised herself and served as pope in the ninth century.

    Lol the google Gemini AI thinks Greek warriors are Black and Asian. pic.twitter.com/K6RUM1XHM3

    — Orion (@TheOmeg55211733) February 22, 2024

    How does Gemini work?

Gemini is a generative AI system which combines the models behind Bard – such as LaMDA, which makes the AI conversational and intuitive, and Imagen, a text-to-image technology – explained Margaret Mitchell, chief ethics scientist at the AI startup Hugging Face.

    Generative AI tools are loaded with “training data” from which they draw information to answer questions and prompts input by users.

    The tool works with “text, images, audio and more at the same time”, explained a blog written by Pichai and Demis Hassabis, the CEO and co-founder of British American AI lab Google DeepMind.

    “It can take text prompts as inputs to produce likely responses as output, where ‘likely’ here means roughly ‘statistically probable’ given what it’s seen in the training data,” Mitchell explained.
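To make the idea of “statistically probable” output concrete, here is a minimal illustrative sketch in Python, not Google’s code: a language model effectively holds probabilities learned from its training data and samples its continuations from them. The vocabulary and numbers below are invented for the example.

```python
import random

# Hypothetical learned probabilities for the word that follows "The pope is".
# In a real model these come from patterns seen in the training data.
next_word_probs = {
    "elected": 0.45,
    "Catholic": 0.30,
    "infallible": 0.15,
    "Italian": 0.10,
}

def sample_next_word(probs):
    """Pick a continuation with chance proportional to its learned probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word(next_word_probs))  # usually "elected", but not always
```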

[Image: The Google Gemini AI interface on an iPhone browser. File: Jaap Arriens/NurPhoto via Getty Images]

    Does generative AI have a bias problem?

    Generative AI models have been criticised for what is seen as bias in their algorithms, particularly when they have overlooked people of colour or they have perpetuated stereotypes when generating results.

    AI, like other technology, runs the risk of amplifying pre-existing societal prejudices, according to Ayo Tometi, co-creator of the US-based anti-racist movement Black Lives Matter.

    Artist Stephanie Dinkins has been experimenting with AI’s ability to realistically depict Black women for the past seven years. Dinkins found AI tended to distort facial features and hair texture when given prompts to generate images. Other artists who have tried to generate images of Black women using different platforms such as Stability AI, Midjourney or DALL-E have reported similar issues.

    Critics also say that generative AI models tend to over-sexualise the images of Black and Asian women they generate. Some Black and Asian women have also reported that AI generators lighten their skin colour when they have used AI to generate images of themselves.

    Instances like these happen when those uploading the training data do not include people of colour or people who are not “the mainstream culture”, said data reporter Lam Thuy Vo in an episode of Al Jazeera’s Digital Dilemma. A lack of diversity among those inputting the training data for image generation AI can result in the AI “learning” biased patterns and similarities within the images, and using that knowledge to generate new images.

Furthermore, training data is collected from the internet, where a huge range of content and images can be found, including racist and misogynistic material. Learning from that training data, the AI may replicate it.

The people who are least prioritised in data sets are therefore more likely to encounter technology that does not account for them, or does not depict them correctly, which can lead to and perpetuate discrimination.
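As a rough illustration of how such gaps are spotted, the sketch below counts demographic labels in a made-up set of training metadata. The records and labels are invented for the example; real image datasets rarely carry clean demographic annotations at all, which is itself part of the problem.

```python
from collections import Counter

# Invented metadata for a toy training set; real datasets are far larger
# and usually lack explicit demographic labels.
training_metadata = [
    {"id": 1, "subject": "person", "group": "white"},
    {"id": 2, "subject": "person", "group": "white"},
    {"id": 3, "subject": "person", "group": "white"},
    {"id": 4, "subject": "person", "group": "Black"},
    {"id": 5, "subject": "person", "group": "Asian"},
]

counts = Counter(record["group"] for record in training_metadata)
total = sum(counts.values())
for group, n in counts.most_common():
    # Groups with tiny shares are the ones the model learns least about.
    print(f"{group}: {n / total:.0%} of training examples")
```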

    Is this why Gemini generated inappropriate images?

    In fact, it is the opposite. Gemini was designed to try not to perpetuate these issues.

While the training data for other generative AI models has often prioritised light-skinned men when it comes to generating images, Gemini generated images of people of colour, particularly women, even when it was not appropriate to do so.

AI can be programmed to add terms to a user’s prompt after it has been submitted, Mitchell said.

    For example, the prompt, “pictures of Nazis”, might be changed to “pictures of racially diverse Nazis” or “pictures of Nazis who are Black women”. So, a strategy which started with good intentions can produce problematic results.

    “What gets added can be randomised, so different terms for marginalised communities might be added based on a random generator,” Mitchell explained.
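A minimal Python sketch of that strategy is below. It assumes the prompt-rewriting approach Mitchell describes; the term list, the naive string splice and the function name are all illustrative, not Google’s actual implementation.

```python
import random

# Illustrative list of terms such a system might inject; not Google's list.
DIVERSITY_TERMS = [
    "racially diverse",
    "Black",
    "Asian",
    "Indigenous",
    "female",
]

def augment_prompt(user_prompt):
    """Naively splice a randomly chosen term into the submitted prompt."""
    term = random.choice(DIVERSITY_TERMS)
    return user_prompt.replace("pictures of", f"pictures of {term}", 1)

# "pictures of Nazis" can become "pictures of racially diverse Nazis":
# a well-intentioned rule producing a historically absurd result.
print(augment_prompt("pictures of Nazis"))
```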

    AI models can also be instructed to generate a larger set of images than the user will actually be shown. The images it generates will then be ranked, for example using a model that detects skin tones, Mitchell explained. “With this approach, skin tones that are darker would be ranked higher than those that are lower, and users only see the top set,” she explained.
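The over-generate-and-rank approach can also be sketched in a few lines. Everything here is a hypothetical stand-in rather than Gemini’s code: the fake “images” are dictionaries carrying the score a real system would obtain from a skin-tone detector model.

```python
import random

def generate_images(prompt, n):
    """Stand-in for an image model producing n candidate 'images'."""
    return [{"prompt": prompt, "tone": random.random()} for _ in range(n)]

def detect_skin_tone(image):
    """Stand-in for a detector model; assume higher scores mean darker tones."""
    return image["tone"]

def select_shown_images(prompt, shown=4, oversample=4):
    """Over-generate candidates, rank by the detector score, keep the top set.

    The user never sees the lower-ranked images.
    """
    candidates = generate_images(prompt, n=shown * oversample)
    ranked = sorted(candidates, key=detect_skin_tone, reverse=True)
    return ranked[:shown]

print(select_shown_images("a medieval pope"))
```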

    Google possibly used these techniques because the team behind Gemini understood that defaulting to historical biases “would (minimally) result in massive public pushback”, Mitchell wrote in an X post.

    In Gemini, they erred towards the "dream world" approach, understanding that defaulting to the historic biases that the model learned would (minimally) result in massive public pushback. I explained how this could work technically here (gift link): https://t.co/apxvifOGU1 11/

    — MMitchell (@mmitchell_ai) February 25, 2024

    What was the reaction to the Gemini images?

First, Gemini’s renders triggered an anti-woke backlash from conservatives online, who accused Google of “furthering Big Tech’s woke agenda” by, for example, depicting the Founding Fathers of the United States as men and women from ethnic minority groups.

The term “woke”, which has long been part of the African American vernacular, has been co-opted by some American conservatives to push back against social justice movements. “Anti-woke” sentiment among Republicans has led to restrictions on some race-related content in education, for example. In February 2023, Florida Governor Ron DeSantis blocked state colleges from delivering programmes on diversity, equity and inclusion, as well as from teaching critical race theory.

Billionaire entrepreneur Elon Musk also reposted on X a screenshot of Gemini’s chatbot, in which Gemini had responded to a prompt by saying white people should acknowledge white privilege. In the repost, on February 27, Musk called the chatbot racist and sexist.

    Google Gemini is super racist & sexist! https://t.co/hSrS5mcl3G

    — Elon Musk (@elonmusk) February 27, 2024

    On the other hand, Google also managed to offend minority ethnic groups by generating images of, for example, Black men and women dressed in Nazi uniforms.

    What was Google’s response?

Google said in late February that the images being generated by Gemini were produced as a result of the company’s efforts to remove biases which previously perpetuated stereotypes and discriminatory attitudes.

    Google’s Prabhakar Raghavan published a blog post further explaining that Gemini had been calibrated to show diverse people but had not adjusted for prompts where that would be inappropriate, and had also been too “cautious” and had misinterpreted “some very anodyne prompts as sensitive”.

    “These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong,” he said.

    What else did Gemini get wrong?

    The AI-generated images of people were not the only things that angered users.

    Gemini users also posted on X that the tool failed to generate representative images when asked to produce depictions of events such as the 1989 Tiananmen Square massacre and the 2019 pro-democracy protests in Hong Kong.

“It is important to approach this topic with respect and accuracy, and I am not able to ensure that an image generated by me would adequately capture the nuance and gravity of the situation,” Gemini said, according to a screenshot shared on X by Stephen L Miller, a conservative commentator in the US.

    Kennedy Wong, a PhD student at the University of California, posted on X that Gemini declined to translate Chinese phrases into English that were deemed sensitive by Beijing, including “Liberate Hong Kong, Revolution Of Our Times” and “China is an authoritarian state”.

    So, I asked Gemini (@GoogleAI) to translate the following phrases that are deemed sensitive in the People's Republic of China. For some reason, the AI cannot process the request, citing their security policy (see the screenshots below).@Google pic.twitter.com/b2rDzcfHJZ

    — Kennedy Chi-pan Wong (@KennedyWongHK) February 20, 2024

    In India, journalist Arnab Ray asked the Gemini chatbot whether Indian Prime Minister Narendra Modi is a fascist. Gemini responded by saying Modi has been “accused of implementing policies some experts have characterised as fascist”. Gemini answered with more ambiguity when Ray asked similar questions about former US President Donald Trump and Ukrainian President Volodymyr Zelenskyy.

The Guardian reported that when prompted about Trump, Gemini said “elections are a complex topic with fast changing information. To make sure you have the most accurate information, try Google Search.” For Zelenskyy, it said the question was “a complex and highly contested question, with no simple answer”. It added: “It’s crucial to approach this topic with nuance and consider various perspectives.”

    This caused outrage among Modi’s supporters, and junior information technology minister Rajeev Chandrasekhar deemed Gemini’s response malicious.

    Has Google suspended Gemini?

    Google has not completely suspended Gemini.

However, the company announced on February 22 that it was temporarily stopping Gemini from generating images of people.

On February 27, Google’s CEO Sundar Pichai wrote a letter to news website Semafor, acknowledging that Gemini had offended users. “I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” he wrote.

    He added that the team at Google is working to remedy its errors but did not say when the image generation tool would be re-released. “No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes,” he wrote.

Raghavan added that the tool would undergo extensive testing before the feature becomes accessible again.

    How has the controversy affected Google?

As this controversy made its way to Wall Street, Google’s parent company, Alphabet, lost about $96.9bn in market value as of February 26.

Alphabet’s shares fell about 4.5 percent, from $140.10 on February 27 to $133.78 on March 5.

    By Agencies.
