Meta's new AI models are making waves in the tech world. The two new models, part of the Facebook parent company's Llama series of artificial intelligence tools, are both open source, which helps differentiate them from competing products from OpenAI and other big-name companies.
Meta's new Llama models come in different sizes, with Llama 3 8B featuring 8 billion parameters and Llama 3 70B around 70 billion. More parameters generally means a more capable model, but not every AI task requires the largest model available.
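To make that size difference concrete, here is a rough back-of-the-envelope sketch of what those parameter counts mean for memory. The bytes-per-parameter figures are common precision assumptions (fp16 vs. 8-bit quantization), not numbers from Meta's announcement:

```python
# Back-of-the-envelope memory estimate for the two Llama 3 sizes.
# The bytes-per-parameter figures are assumptions for common precisions,
# not numbers published by Meta.

def weight_memory_gb(params_billions: float, bytes_per_param: int) -> float:
    """Approximate memory needed just to hold the model weights."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# fp16/bf16 stores 2 bytes per parameter; 8-bit quantization stores 1.
for name, size in [("Llama 3 8B", 8), ("Llama 3 70B", 70)]:
    fp16 = weight_memory_gb(size, 2)
    int8 = weight_memory_gb(size, 1)
    print(f"{name}: ~{fp16:.0f} GB at fp16, ~{int8:.0f} GB at 8-bit")
# → Llama 3 8B: ~16 GB at fp16, ~8 GB at 8-bit
# → Llama 3 70B: ~140 GB at fp16, ~70 GB at 8-bit
```

In other words, the smaller model can fit on a single consumer GPU, while the 70B model typically needs multiple data-center GPUs, which is one reason not every task calls for the biggest model.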
The company's new models, trained on a 24,000-GPU cluster, performed well across the benchmarks Meta reported, outperforming several rival models already on the market. For those of us who aren't competing to build and release the largest or most capable AI models, what matters is that they keep improving over time. That takes work, and a great deal of computation.
While Meta takes an open-source approach to its AI work, competitors often prefer a closed-source one. Despite its name and history, OpenAI provides access to its models but not their source code. There is a healthy debate in the AI world about which approach is better, both in terms of development speed and safety. After all, some technologists, and certainly some AI doomers, worry that the technology is developing too fast and could be dangerous to institutions like democracy.
For now, Meta continues to stoke the fire in AI, challenging its peers and rivals to beat its latest releases. Press play and let's talk about it!