How many AI models is too many? Depending on your point of view, 10 a week may be a bit much. That's roughly the number we've seen over the past few days, and it's increasingly hard to say whether and how these models compare to one another, if it was ever possible to begin with. So what's the point?
We are at a strange time in the evolution of AI, though of course it has been pretty strange the whole time. Models large and small are proliferating, from niche developers to large, well-funded ones.
Let's run down this week's list, shall we? We've tried to condense what sets each model apart as much as possible.
LLaMa-3: Meta's latest “open” flagship large language model. (The term “open” is currently in dispute, but the project is widely used by the community regardless.)
Mistral 8×22: A “mixture of experts” model, on the large side, from the French company that has been backing away from the openness it once embraced.
Stable Diffusion 3 Turbo: An upgraded SD3 to go with the newly open Stability API. Borrowing “turbo” from OpenAI's model nomenclature is a little strange, but OK.
Adobe Acrobat AI Assistant: “Talk to your documents” from the 800-pound document gorilla. It is almost certainly a wrapper around ChatGPT, though.
Reka Core: A from-scratch multimodal model built by a small team formerly employed by Big AI, and at least nominally competitive with the big players.
Idefics2: A more open multimodal model, built on top of recent, smaller Mistral and Google models.
OLMo-1.7-7B: A larger version of AI2's LLM, among the most open out there, and a stepping stone to a future 70B-scale model.
Pile-T5: A version of the old reliable T5 model, fine-tuned on the code database the Pile. The same T5 you know and love, but with better coding capabilities.
Cohere Compass: An “embedding model” (don't worry if you don't know what that is) that focuses on incorporating multiple data types to cover more use cases.
Imagine Flash: Meta's latest image generation model, relying on a new distillation method to accelerate diffusion without unduly compromising quality.
Limitless: “A personalized AI powered by what you've seen, said, and heard. It's a web app, a Mac app, a Windows app, and a wearable.” 😬
That last one was announced as I was writing this, so make it 11. And to be clear, this is not all of the models released or previewed this week; it's just the ones we saw and discussed. Relax the criteria for inclusion a bit and you'd end up with dozens: some fine-tuned existing models, some combos like Idefics 2, and some experimental or niche ones. Not to mention this week's new tools for building (torchtune) and fighting against (Glaze 2.0) generative AI.
What are we to make of this never-ending avalanche? Next week may not bring another 10 or 20 releases, but it will surely bring at least 5 or 6 of the caliber listed above. I can't “review” them all. So how can we help our readers understand and keep up with all of this?
Well… the truth is, you don't need to keep up, and neither does almost anyone else. There has been a shift in the AI space: some models, such as ChatGPT and Gemini, have evolved into entire web platforms spanning multiple use cases and access points. Other large language models, such as LLaMa and OLMo, technically share the same basic architecture but don't actually fill the same role. They are intended to live in the background as a service or component, not in the foreground as a name brand.
There is some deliberate confusion between these two things, because the models' developers want to borrow a little of the fanfare associated with major AI platform releases such as GPT-4V and Gemini Ultra. Everyone wants their release to be considered important. And while it probably is important to somebody, that somebody is almost certainly not you.
Think of it in terms of another broad, diverse category, like cars. When they were first invented, you simply bought “a car.” A little later, you could choose between a big car, a small car, and a tractor. Nowadays, hundreds of cars are released every year, but you probably don't need to be aware of even one in ten of them, because nine out of ten are not a car you need, or even a car as you understand the term. We are moving from the big/small/tractor era of AI to the proliferation era, and even AI specialists can't keep up with and test all the models coming out.
The flip side of this story is that we were already at this stage long before ChatGPT and the other big models came along. Far fewer people were reading about this seven or eight years ago, but we covered it nevertheless because it was clearly a technology waiting for its breakout moment, and that moment eventually arrived. Papers, models, and research were constantly being published, and conferences like SIGGRAPH and NeurIPS were filled with machine learning engineers comparing notes and building on one another's work. Here's a story about visual understanding I wrote back in 2011.
That activity continues every day. But because AI has become big business, arguably the biggest in the technology space right now, these developments are given a bit of extra weight, since people are curious whether one of them might be as big a leap over ChatGPT as ChatGPT was over its predecessors.
The simple truth is that none of these models is going to be that kind of big step. OpenAI's advance was built on a fundamental change to machine learning architecture that every other company has since adopted and that has not been superseded. For now, all we can hope for is incremental improvement: a point or two better on a synthetic benchmark, or marginally more convincing language or imagery.
Does that mean none of these models matter? Of course they do. You don't get from version 2.0 to 3.0 without 2.1, 2.2, 2.2.1, and so on, and that is what researchers and engineers are diligently working toward. Sometimes those advances are meaningful, addressing serious shortcomings or exposing unexpected vulnerabilities. We try to cover the interesting ones, but that's only a fraction of the full number. We're actually working on a piece collecting all the models we think anyone interested in ML should know about, roughly a dozen in all.
Don't worry: when something big happens, you'll know, and not just because TechCrunch is covering it. It will be as obvious to you as it is to us.