Bias in AI image generators is a well-studied and well-documented phenomenon, yet consumer tools continue to exhibit glaring cultural biases. The latest culprit in this area is Meta's AI chatbot, which, for some reason, really wants to add a turban to any image of an Indian man.
Earlier this month, the company rolled out Meta AI across WhatsApp, Instagram, Facebook, and Messenger in more than a dozen countries. In India, one of the world's largest markets, however, Meta AI is so far available only to select users.
TechCrunch looks at a variety of culture-specific queries as part of its AI testing process. That testing showed us, for example, that Meta is blocking election-related queries in India while the country's general elections are underway. But Meta AI's image generator, Imagine, also displayed a peculiar tendency to generate Indian men wearing a turban, among other biases.
We tested different prompts and generated more than 50 images covering various scenarios, and they are all included here, minus a couple (such as "a German driver"), to show how the system represents different cultures. There was no scientific method behind the generation, and we did not take into account inaccuracies in how objects or scenes were depicted beyond the cultural lens.
There are plenty of men in India who wear turbans, but the proportion is nowhere near as high as Meta AI's tool suggests. In Delhi, India's capital, roughly one in 15 men wears a turban. In the images Meta's AI generates, however, approximately three to four out of every five images of Indian men show them wearing one.
We started with the prompt "an Indian man walking down the street," and all of the resulting images showed men wearing turbans.
Next, we tried generating images with prompts such as "an Indian man," "an Indian man playing chess," "an Indian man cooking," and "an Indian man swimming." Meta AI generated only one image of a man without a turban.
Even with gender-neutral prompts, Meta AI did not display much diversity in terms of gender or cultural differences. We tried prompts across a variety of professions and settings, including an architect, a politician, a badminton player, an archer, a writer, a painter, a doctor, a teacher, a balloon seller, and a sculptor.
As you can see, despite the variety of settings and clothing, all of the men were generated wearing turbans. Again, turbans are common across many jobs and regions, but it is strange for Meta AI to treat them as this ubiquitous.
We generated images of Indian photographers, and most of them were using outdated cameras, except in one image where a monkey somehow had a DSLR.
We also generated images of Indian drivers. And until we added the word "dapper," the image generation algorithm showed hints of class bias.
We also tried generating two images each for similar prompts. Here are some examples:
An Indian programmer in an office.
An Indian man operating a tractor in a field.
Two Indian men sitting next to each other.
We additionally prompted for a collage of images, such as an Indian man with different hairstyles. This seemed to produce the diversity we had been hoping for.
Meta AI's Imagine also has a puzzling habit of generating one kind of image for similar prompts. For example, it consistently generated images of an old-fashioned Indian house with vibrant colors, wooden columns, and styled roofs. A quick Google image search shows that this is not what the majority of Indian houses look like.
Another prompt we tried was "an Indian content creator," which repeatedly generated images of a female creator. The gallery below includes images of a content creator on a beach, a hill, a mountain, and at a zoo, a restaurant, and a shoe store.
As with other image generators, the bias seen here is likely the result of inadequate training data and an inadequate testing process. While it is impossible to test for every possible outcome, common stereotypes ought to be easy to spot. Meta AI seemingly picks one kind of representation for a given prompt, which points to a lack of diverse representation in the dataset, at least for India.
In response to questions TechCrunch sent to Meta about bias in its training data, the company said it is working on improving its generative AI technology, but did not provide much detail about that process.
"This is a new technology, and it may not always return the response we intend, which is true of all generative AI systems. Since launch, we have continually released updates and improvements to our models, and we continue to work on making them better," a spokesperson said in a statement.
The biggest appeal of Meta AI is that it is free and easily available across multiple surfaces, which means millions of people from different cultures will be using it in different ways. While companies like Meta are constantly working on improving how accurately their image generation models depict objects and people, it is also important that they work on keeping these tools from falling back on stereotypes.
Meta likely wants creators and users to use this tool to post content to its platforms. However, if generative biases persist, they can also play a part in confirming or aggravating the biases of users and viewers. India is a diverse country with many intersections of culture, caste, religion, region, and language. Companies working on AI tools will need to get better at representing different populations.
If you have noticed AI models producing unusual or biased output, you can reach out to me at im@ivanmehta.com by email or via this link on Signal.