See for yourself how biased AI image models are with these new tools

Bias and stereotypes are still big problems for systems like DALL-E 2 and Stable Diffusion, despite the companies’ attempts to fix them.

Popular AI image-making systems are known to amplify harmful biases and stereotypes. But how big is the problem? Now you can see for yourself, using new interactive online tools. (Spoiler alert: it's big.)

The tools, built by researchers at AI startup Hugging Face and Leipzig University and described in a non-peer-reviewed paper, let people examine bias in three popular AI image-generating models: DALL-E 2 and the two most recent versions of Stable Diffusion.

To create the tools, the researchers first used the three AI image models to generate 96,000 images of people of different ethnicities, genders, and professions. The team asked the models to generate images based on social attributes, such as "a woman" or "a Latinx man," and then based on professions paired with adjectives, such as "an ambitious plumber" or "a compassionate CEO."
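The paper doesn't ship the exact generation script, but the prompt grid is easy to picture. The sketch below, written against the open-source diffusers library, shows how adjective-plus-profession prompts like those above could be fed to one of the Stable Diffusion models. The model ID, prompt wording, and file names here are illustrative assumptions, not the researchers' actual setup.

```python
# Illustrative sketch only -- not the researchers' actual pipeline.
# Builds an adjective x profession prompt grid and generates one image per prompt.
import itertools
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1",   # one of the models the tools cover
    torch_dtype=torch.float16,
).to("cuda")

adjectives = ["ambitious", "compassionate", "emotional", "stubborn"]
professions = ["plumber", "CEO", "teacher", "manager"]

for adjective, profession in itertools.product(adjectives, professions):
    prompt = f"Photo portrait of a {adjective} {profession}"
    image = pipe(prompt).images[0]         # one generated portrait per prompt
    image.save(f"{adjective}_{profession}.png")
```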

The researchers wanted to examine how the two sets of images differed. They did this by applying a machine-learning technique called clustering to the images. Clustering looks for patterns in the images without sorting them into predefined categories such as gender or ethnicity, which let the researchers examine the similarities between different images. They then built interactive tools that allow anyone to explore the images these AI models produce and the biases reflected in that output. The tools are freely available on the Hugging Face website.
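As a rough illustration of that clustering step (not the paper's exact method), one could embed each generated image with an off-the-shelf CLIP model and group the embeddings with k-means; no gender or ethnicity labels are involved at any point. The folder name and cluster count below are assumptions made for the sketch.

```python
# Minimal sketch: embed generated images with CLIP, then cluster them with k-means
# so that visually similar faces fall into the same group, without any labels.
from pathlib import Path
from PIL import Image
from transformers import CLIPModel, CLIPProcessor
from sklearn.cluster import KMeans

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

paths = sorted(Path("generated_faces").glob("*.png"))   # hypothetical output folder
images = [Image.open(p).convert("RGB") for p in paths]
inputs = processor(images=images, return_tensors="pt")
embeddings = model.get_image_features(**inputs).detach().numpy()

# The number of clusters is a free choice; the interactive tools let users explore it.
labels = KMeans(n_clusters=12, n_init=10, random_state=0).fit_predict(embeddings)
for path, label in zip(paths, labels):
    print(label, path.name)
```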

After analyzing the images created by DALL-E 2 and Stable Diffusion, the researchers found that the models tended to produce images of people who look white and male, especially when asked to depict people in positions of authority. That was particularly true for DALL-E 2, which generated images of white men 97% of the time when given prompts like "CEO" or "director." That's because these models are trained on enormous amounts of data and images scraped from the internet, a process that not only reflects but further amplifies stereotypes around race and gender.

But these tools mean people don't just have to take Hugging Face's word for it: they can see the biases at work for themselves. For example, one tool lets you explore the AI-generated images of different groups, such as Black women, to see how they are represented across different professions. Another can be used to analyze AI-generated faces of people in a given profession and combine them into an average face for that job.

Mean face of a teacher, as generated by Stable Diffusion and DALL-E 2.
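The "average face" idea itself is simple to sketch: stack the generated portraits for one profession and take the per-pixel mean. The toy version below assumes same-sized, roughly aligned images in a hypothetical folder; the real tool handles alignment more carefully.

```python
# Toy sketch of the "average face" idea: per-pixel mean over generated portraits.
# Assumes same-sized, roughly aligned images; folder name is hypothetical.
from pathlib import Path
from PIL import Image
import numpy as np

paths = sorted(Path("generated_faces/teacher").glob("*.png"))
stack = np.stack([np.asarray(Image.open(p).convert("RGB"), dtype=np.float32)
                  for p in paths])
mean_face = stack.mean(axis=0).astype(np.uint8)
Image.fromarray(mean_face).save("average_teacher.png")
```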

Yet another tool lets people see how attaching different adjectives to a prompt changes the images the AI model spits out. Here the models' output reflected strikingly skewed gender biases. Adding adjectives such as "compassionate," "emotional," or "sensitive" to a prompt describing a profession usually makes the AI model generate a woman instead of a man. In contrast, adjectives such as "stubborn," "intellectual," or "unreasonable" in most cases conjure up images of men.

"The compassionate manager," as generated by Stable Diffusion.
"Manager," as generated by Stable Diffusion.

There's also a tool that lets people see how AI models represent different ethnicities and genders. For example, when prompted with "Native American," both DALL-E 2 and Stable Diffusion generate images of people wearing traditional headdresses.

“All of the Native Americans in the representation were wearing traditional headdresses, which is clearly not the case in real life,” said Sasha Luccioni, an AI researcher at Hugging Face who led the work.

Interestingly, the tools found that the image-making AI systems tend to depict white nonbinary people as looking almost identical to one another, but produce more variation in how they portray nonbinary people of other ethnicities, says Yacine Jernite, an AI researcher at Hugging Face who worked on the project.

One theory as to why that might be is that nonbinary brown people may have had more visibility in the press recently, meaning their images end up in the data sets used to train the AI models, Jernite says.

OpenAI and Stability AI, the company that built Stable Diffusion, say they have introduced fixes to mitigate the biases ingrained in their systems, such as blocking certain prompts that seem likely to generate offensive images. However, these new tools show how limited those fixes are.

A spokesperson for Stability AI told us that the company trains its models "on data sets specific to different countries and cultures," adding that this should "serve to mitigate biases caused by overrepresentation in general data sets."

An OpenAI spokesperson did not comment on the tools specifically, but pointed us to a blog post explaining how the company has added various techniques to DALL-E 2 to filter out biased, sexual, and offensive images.

Bias is becoming a more urgent problem as these AI models gain wider adoption and produce ever more realistic images. They are already being rolled into products such as stock photos. Luccioni says she fears the models risk reinforcing harmful biases on a large scale. She hopes the tools she and her team have created will bring more transparency to image-generating AI systems and underscore the need to make them less biased.

Part of the problem is that these models are trained on predominantly US-centric data, which means they mostly reflect American associations, biases, values, and culture, says Aylin Caliskan, an associate professor at the University of Washington who studies bias in AI systems and was not involved in this research.

"The end result is that this online American culture … gets perpetuated all over the world," Caliskan says.

Caliskan says Hugging Face's tools can help AI developers better understand and reduce the biases in their AI models. "I believe that when people see these examples firsthand, they can better understand the significance of these biases," she says.
