OpenAI's Sora, which can generate videos and interactive 3D environments on the fly, is a remarkable demonstration of the cutting edge of GenAI and a true milestone.
But curiously, one of the innovations that led to this, an AI model architecture colloquially known as a diffusion transformer, arrived on the AI research scene several years ago.
The diffusion transformer also powers Stable Diffusion 3.0, AI startup Stability AI's latest image generator, and it's poised to transform the GenAI field by allowing GenAI models to scale beyond what was previously possible.
Saining Xie, a computer science professor at New York University, began the research project that spawned the diffusion transformer in June 2022. With William Peebles, his mentee who was interning at Meta's AI research lab at the time and who now co-leads Sora at OpenAI, Xie combined two concepts in machine learning, diffusion and the transformer, to create the diffusion transformer.
Most modern AI-powered media generators, including OpenAI's DALL-E 3, rely on a process called diffusion to output images, video, audio, music, 3D meshes, artwork, and more.
It's not the most intuitive idea, but essentially: noise is slowly added to a piece of media, say an image, until it's unrecognizable. This is repeated to build a data set of noisy media. When a diffusion model trains on this data, it learns how to gradually subtract the noise, moving step by step closer to a target output (e.g., a new image).
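The forward "noising" step described above can be sketched in a few lines of NumPy. This is a toy illustration, not any production implementation: the linear noise schedule, step count, and function name are all assumptions chosen for clarity.

```python
import numpy as np

def add_noise(image, t, num_steps=1000):
    """Forward diffusion: blend an image with Gaussian noise.

    At step t the image keeps sqrt(alpha_bar) of its signal and gains
    sqrt(1 - alpha_bar) of noise; by the final step it is essentially
    pure noise. (The linear schedule here is illustrative only.)
    """
    betas = np.linspace(1e-4, 0.02, num_steps)   # per-step noise amounts
    alpha_bar = np.cumprod(1.0 - betas)[t]       # cumulative signal kept
    noise = np.random.randn(*image.shape)
    noisy = np.sqrt(alpha_bar) * image + np.sqrt(1.0 - alpha_bar) * noise
    return noisy, noise  # the model trains to predict `noise` from `noisy`

image = np.ones((8, 8))                    # toy "image"
slightly_noisy, _ = add_noise(image, t=10)    # mostly signal
very_noisy, _ = add_noise(image, t=999)       # almost pure noise
```

Training then runs in reverse: the model sees `noisy` and learns to predict `noise`, so that at generation time it can start from pure noise and denoise its way to a new image.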
Diffusion models typically have a “backbone,” or engine of sorts, called a U-Net. The U-Net backbone learns to estimate the noise to be removed, and it does this well. But U-Nets are complex, with specially designed modules that can dramatically slow down the diffusion pipeline.
Fortunately, transformers can replace the U-Net, bringing a boost in efficiency and performance in the process.
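For a transformer to stand in for the U-Net, the noisy image first has to become a sequence of tokens, much as text becomes word tokens for a language model. A minimal sketch of that "patchify" step, with purely illustrative shapes and names:

```python
import numpy as np

def patchify(noisy_image, patch_size=4):
    """Split a 2D noisy image into flattened patches ("tokens").

    A diffusion transformer processes the noisy image as a sequence of
    patch tokens rather than as a 2D grid, which is what lets standard
    transformer machinery replace the U-Net backbone.
    """
    h, w = noisy_image.shape
    tokens = (noisy_image
              .reshape(h // patch_size, patch_size, w // patch_size, patch_size)
              .transpose(0, 2, 1, 3)                 # group patch rows/cols
              .reshape(-1, patch_size * patch_size))  # one row per patch
    return tokens  # shape: (num_patches, patch_size**2)

tokens = patchify(np.random.randn(32, 32))  # 64 tokens of 16 values each
```

From here the tokens flow through ordinary transformer layers, and the predicted noise is reassembled into an image by reversing the patch split.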
Transformers are the architecture of choice for complex reasoning tasks, powering models such as GPT-4, Gemini, and ChatGPT. Transformers have several unique characteristics, but by far their most distinctive feature is their “attention mechanism.” For every piece of input data (in the case of diffusion, image noise), the transformer weighs the relevance of every other input (other noise in the image) and draws on them to generate the output (an estimate of the image noise).
The attention mechanism not only makes transformers simpler than other model architectures, it makes the architecture parallelizable. In other words, ever-larger transformer models can be trained with significant but not unattainable increases in compute.
“Transformers' contribution to the adoption process is similar to upgrading an engine,” Xie told TechCrunch in an email interview. “The introduction of transformers has greatly improved scalability and efficiency. This is especially noticeable for models like Sora, which benefit from training on vast amounts of video data and are able to handle a wide range of model parameters. We leverage this to demonstrate the transformative potential of Transformers when applied at scale.”
So if the idea of the diffusion transformer has been around for a while, why did it take years before projects like Sora and Stable Diffusion began leveraging it? Xie thinks the importance of having a scalable backbone model didn't become apparent until relatively recently.
“The Sora team really went above and beyond in showing how much more you can do with this approach at scale,” he said. “They've pretty much made it clear that U-Nets are out and transformers are in for diffusion models going forward.”
Diffusion transformers should be a simple drop-in replacement for existing diffusion models, Xie said, whether those models generate images, video, audio, or another form of media. The current process of training diffusion transformers potentially introduces some inefficiencies and performance loss, but Xie believes this can be addressed over the long term.
“The main takeaway is pretty simple: forget U-Nets and switch to transformers, because they're faster, work better, and are more scalable,” he said. “I'm interested in integrating the domains of content understanding and creation within the framework of diffusion transformers. At the moment, these are like two different worlds: one for understanding and another for creation. I envision a future where these aspects are integrated, and I believe that achieving this integration requires standardizing the underlying architectures, with transformers being an ideal candidate for this purpose.”
If Sora and Stable Diffusion 3.0 are a preview of what to expect from diffusion transformers, we're in for one hell of an adventure.