Elon Musk's Grok rolled out a new AI image generation feature on Tuesday night that, like the chatbot itself, comes with very few safeguards. That means you can, say, generate a fake image of Donald Trump smoking marijuana on Joe Rogan's show and upload it straight to the X platform. But it isn't Elon Musk's AI company that's actually powering this madness. Rather, it's the startup Black Forest Labs that's behind the controversial feature.
The collaboration between the two was revealed on Tuesday, when xAI announced it was partnering with Black Forest Labs to use its FLUX.1 model to power Grok's image generator. Black Forest Labs, an AI image and video startup that launched on August 1, seems sympathetic to Musk's vision for Grok as an “anti-woke chatbot,” one without the strict guardrails found in OpenAI's DALL-E or Google's Imagen. The social media site is already awash in ridiculous images from the new feature.
Black Forest Labs is based in Germany and recently came out of stealth with $31 million in seed funding led by Andreessen Horowitz, according to a press release. Other notable investors include Y Combinator CEO Garry Tan and former Oculus CEO Brendan Iribe. The startup's co-founders, Robin Rombach, Patrick Esser, and Andreas Blattmann, are former researchers who helped create Stability AI's Stable Diffusion model.
According to Artificial Analysis, Black Forest Labs' FLUX.1 model outperforms AI image generators from Midjourney and OpenAI in terms of quality, at least as ranked by users in its image arena.
The startup has released its open-source AI image generation models on Hugging Face and GitHub “to make our models available to a wide range of users,” and the company says it also plans to create a text-to-video model soon.
Black Forest Labs did not immediately respond to TechCrunch's request for comment.
Hell, Grok has absolutely no filters for image generation. This is one of the most reckless and irresponsible AI implementations I've ever seen. pic.twitter.com/oiyRhW5jpF
— Alejandra Caraballo (@Esqueer_) August 14, 2024
In its launch announcement, the company said it aims to “increase confidence in the safety of these models,” but the flood of AI-generated images on Wednesday suggested the opposite. Many of the images users created with Grok and Black Forest Labs' tool, such as Pikachu holding an assault rifle, could not be reproduced with Google's or OpenAI's image generators, which refuse such prompts. There is also little doubt that copyrighted images were used to train the models.
That's the point
This lack of safeguards is likely a major reason Musk chose this collaborator. Musk has made clear that he believes guardrails actually make AI models less safe. “The danger of training AI to be woke – in other words, lie – is deadly,” Musk said in a 2022 tweet.
Anjney Midha, a director of Black Forest Labs, posted a comparison series on X of images produced on day one by Google Gemini and by Grok's FLUX collaboration. The thread highlights Google Gemini's well-documented issues with producing historically inaccurate images of people, specifically by inappropriately injecting racial diversity into them.
“I'm glad @ibab & team took this seriously and made the right choice,” Midha tweeted, noting that FLUX.1 appears to have avoided the issue (and referencing the account of xAI's lead researcher Igor Babuschkin).
After that gaffe, Google apologized and turned off Gemini's ability to generate images of people in February, a restriction that remains in place to this day.
A flood of misinformation
This general lack of safeguards could become a problem for Musk. The X platform already came under fire when explicit AI-generated deepfake images of Taylor Swift went viral there. Beyond that incident, Grok generates hallucinated headlines for X users on an almost weekly basis.
Last week, five secretaries of state urged X to stop spreading misinformation about Kamala Harris. Earlier this month, Musk reshared a video that used AI to clone Harris' voice, making it seem as if the vice president had called herself a “diversity hire.”
Musk seems intent on letting this kind of misinformation flourish on the platform: by allowing users to post Grok's AI images, which appear to carry no watermark whatsoever, directly to X, he has essentially opened everyone's feed to a flood of misinformation.