Google came under criticism this week over the peculiar behavior of the image generator in its Gemini chatbot. Users noticed that when asked for historical images, the tool often depicted people of genders and ethnicities that were not historically accurate. The episode has raised questions about whether the company is adequately addressing the risk of racial bias in its artificial intelligence models. Google says it is working on adjustments and preparing to release a new version of the tool.

Jack Krawczyk, Director of Gemini Experiences, said that Gemini's AI image generation deliberately produces a wide range of people, which is generally positive because the tool is used around the world, but acknowledged that in this case it is missing the mark. The company is actively working to fix the problem.

The situation is not new for AI companies. Several years ago, Google apologized after its Photos app mistakenly labeled a photo of a Black couple as "gorillas." Rival OpenAI was accused of perpetuating stereotypes after users found that its image generator DALL-E disproportionately depicted white men in response to prompts about roles such as corporate executives.

According to the Associated Press, earlier studies suggest that AI-powered image generators can amplify the racial and gender stereotypes present in their training data, and that without filters they are more likely to depict lighter-skinned men when asked to generate an image of a person in a variety of contexts.
