Google explains Gemini's 'embarrassing' AI images of racially diverse Nazis

Google has issued an explanation for the “embarrassing and wrong” images generated by its Gemini AI tool. In a blog post on Friday, Google said its model produced “inaccurate historical” images due to tuning issues. The Verge and others caught Gemini generating images of racially diverse Nazis and US Founding Fathers earlier this week.

“First, our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” Prabhakar Raghavan, Google’s senior vice president, wrote in the post. “And second, over time, the model became way more cautious than we intended and refused to answer certain prompts entirely, wrongly interpreting some very anodyne prompts as sensitive.”

Gemini’s results for the prompt “generate a picture of a 19th-century US senator.”
Screenshot by Adi Robertson

These issues led Gemini AI to “overcompensate in some cases,” as in the images of racially diverse Nazis, and to become “over-conservative” in others, refusing to create specific images of “a Black person” or “a white person” when asked to do so.

In the blog post, Raghavan said Google is “sorry the feature didn’t work well.” He also said Google wants Gemini to “work well for everyone,” which means getting images of different types of people (including different races) when you ask for pictures of “football players” or “someone walking a dog.” But, he says:

However, if you prompt Gemini for images of a specific type of person, such as “a Black teacher in a classroom” or “a white veterinarian with a dog,” or people in particular cultural or historical contexts, you should absolutely get a response that accurately reflects what you ask for.

Raghavan says Google will continue to test Gemini AI’s image-generation capabilities and “work to improve them significantly” before turning the feature back on. “As we’ve said from the beginning, hallucinations are a known challenge with all LLMs [large language models]; there are instances where the AI just gets things wrong,” Raghavan notes. “This is something that we’re constantly working on improving.”
