Meta announced the creation of an AI advisory board made up entirely of white men on Wednesday. What else could we expect? For decades, women and people of color have spoken out about being ignored and excluded from the world of artificial intelligence, despite their qualifications and the critical role they play in evolving the field.
Meta did not immediately respond to our request for comment on the diversity of its advisory board.
The new advisory board is distinct from Meta's actual board of directors and its Oversight Board, both of which have more diverse gender and racial representation. The AI advisory board is not shareholder-elected and has no fiduciary responsibility. Meta told Bloomberg that the board will provide “insights and recommendations on technology advancements, innovations and strategic growth opportunities.” It will meet “regularly.”
It is telling that the AI Advisory Board is made up entirely of businessmen and entrepreneurs, not ethicists or people with academic or deep research backgrounds. One might argue that current and former executives from Stripe, Shopify, and Microsoft are perfectly positioned to oversee Meta's AI product roadmap given the sheer number of products they've brought to market to date, but AI has proven time and time again to be different from other products. AI is a risky business, and the consequences of getting it wrong can be far-reaching, especially for marginalized groups.
In a recent interview with TechCrunch, Sarah Myers West, managing director of the AI Now Institute, a nonprofit that studies the societal impact of AI, said it's important to “critically examine” the institutions producing AI to make sure the public's needs are being served.
“This is an error-prone technology, and we know from independent research that those errors are not evenly distributed; they disproportionately harm communities that have long faced the brunt of discrimination,” she said. “We need to set a much higher bar.”
Women are far more likely to experience the dark side of AI than men. In 2019, Sensity AI found that 96% of AI deepfake videos online were non-consensual, sexually explicit videos. Since then, generative AI has become much more prevalent, and women remain targets of this violation.
In one high-profile incident in January, a non-consensual, pornographic deepfake of Taylor Swift went viral on X, with one of the most widespread posts garnering hundreds of thousands of likes and 45 million views. Social platforms like X have historically failed to protect women from such situations, but because Taylor Swift is one of the most influential women in the world, X stepped in by banning search terms such as “Taylor Swift ai” and “Taylor Swift deepfake.”
But if this happens to you and you're not a global pop sensation, you might be out of luck. There have been numerous reports of middle and high school students creating explicit deepfakes of their classmates. The technology has been around for a while, but it's now easier and more accessible than ever before. You don't have to be tech-savvy to download an app specifically advertised for “undressing” photos of women or swapping their faces into pornography. In fact, as NBC's Kat Tenbarge reported, Facebook and Instagram hosted ads for an app called Perky AI, which described itself as a tool for creating explicit images.
Two of the ads, which allegedly escaped Meta's detection until Tenbarge brought the issue to the company's attention, featured blurred photos of celebrities Sabrina Carpenter and Jenna Ortega and prompted users to have the app “undress” them. One of the ads used a photo of Ortega taken when she was just 16 years old.
The mistake of allowing Perky AI to advertise is not an isolated incident: Meta's Oversight Board recently opened an investigation into the company's failure to act on reports of AI-generated sexually explicit content.
It is essential that the voices of women and people of color are included in the innovation of artificial intelligence products. For too long, these marginalized groups have been excluded from the development of world-changing technologies and research, with dire consequences.
A clear example is the fact that women were excluded from clinical trials until the 1970s, meaning entire fields of research were developed without an understanding of how their findings affected women. Black people in particular have felt the effects of technology created without them in mind. For example, a 2019 study by the Georgia Institute of Technology found that self-driving cars are more likely to hit Black people because their sensors have a harder time detecting darker skin.
Algorithms trained on already discriminatory data will only regurgitate the same biases that humans trained them to employ. We already widely see AI systems perpetuating and amplifying racism in hiring, housing, and criminal justice. As Axios noted, because English is AI’s native language, voice assistants struggle to understand diverse accents and often flag work by non-native English speakers as AI-generated. Facial recognition systems are more likely to flag Black people as potential criminal suspects than White people.
Current AI development embodies the same existing power structures around class, race, gender, and Eurocentrism that we see elsewhere, and not enough leaders are addressing it. If anything, they are reinforcing it. Investors, founders, and technology leaders are so focused on moving fast and breaking things that they can't seem to grasp that generative AI, the hottest AI technology of the moment, may make those problems worse, not better. According to a McKinsey report, AI has the potential to automate about half of the jobs that don't require a four-year degree and pay more than $42,000 per year, jobs in which minority workers are overrepresented.
So, in this race to save the world with AI, it's natural to worry about how an all-white, all-male team at one of the world's most prominent technology companies can advise on products for everyone when only one narrow demographic is represented. It takes a massive effort to build technology that everyone, truly everyone, can use. In fact, the layers required to actually build safe and inclusive AI, from the research to an intersectional understanding of society, are so intricate that it is almost obvious this advisory board will not help Meta get it right. At least where Meta falls short, another startup could emerge.