A mysterious new image generation model outperforms models from Midjourney, Black Forest Labs, and OpenAI in the crowdsourced Artificial Analysis benchmark.
The model, which goes by the name “red_panda,” beats the next highest-ranked model, Black Forest Labs' Flux1.1 Pro, by about 40 Elo points on the Artificial Analysis text-to-image leaderboard. Artificial Analysis uses Elo, a ranking system originally developed to calculate the relative skill level of chess players, to compare the performance of the different models it tests.
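For a sense of scale, in the standard Elo formulation a 40-point gap corresponds to roughly a 56% chance of winning any single head-to-head comparison. The snippet below is a minimal sketch using the standard chess-style expected-score formula; Artificial Analysis's exact parameters are not public, so treat the numbers as illustrative.

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A ~40-point lead, like red_panda's over Flux1.1 Pro, implies roughly a
# 56% chance of winning any single head-to-head matchup.
print(round(elo_expected_score(1040, 1000), 3))  # ~0.557
```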
Image allegedly generated by red_panda. Image credit: Deedy Das
Similar to Chatbot Arena, a community AI benchmark, Artificial Analysis ranks models through crowdsourcing. For image models, Artificial Analysis randomly selects two models and gives them the same prompt. The prompt and the two resulting images are then displayed, and the voter selects which image better reflects the prompt.
Image credit: Artificial Analysis
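To make the mechanism concrete, here is a minimal sketch of how such a crowdsourced leaderboard can be maintained: draw two models at random, collect one human preference vote, and apply a standard Elo update. The model names, starting ratings, and K-factor are hypothetical, and this is not Artificial Analysis's actual implementation.

```python
import random

K = 32  # update step size; the real leaderboard's K-factor is an assumption here


def expected(r_a: float, r_b: float) -> float:
    """Expected score of A against B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))


def record_vote(ratings: dict, winner: str, loser: str) -> None:
    """Update both models' ratings after one human preference vote."""
    exp_win = expected(ratings[winner], ratings[loser])
    ratings[winner] += K * (1 - exp_win)
    ratings[loser] -= K * (1 - exp_win)


# Hypothetical models, all starting from the same baseline rating.
ratings = {"model_a": 1000.0, "model_b": 1000.0, "model_c": 1000.0}

# One voting round: two models are drawn at random, both generate an image
# for the same prompt, and the voter picks the image that better reflects it.
a, b = random.sample(list(ratings), 2)
winner, loser = a, b  # stand-in for the human voter's choice
record_vote(ratings, winner, loser)
print(ratings)
```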
Admittedly, there is some degree of bias in this voting process. Most of Artificial Analysis's voters are AI enthusiasts, and their choices may not reflect the preferences of the broader community of generative AI users.
Bias aside, red_panda is also one of the best-performing models on the leaderboard in terms of generation speed. The model takes a median of about seven seconds to generate an image, more than 100 times faster than OpenAI's DALL-E 3.
Another image reportedly from red_panda. Image credit: Neuralithic
So where did red_panda come from? Which company made it? And when is it scheduled to be released? All good questions. It may not be long before we find out, as AI labs increasingly use community benchmarks to build anticipation ahead of announcements.