Web browser company Opera announced today that it will now allow users to download and run large language models (LLMs) locally on their computers. The feature is rolling out first to Opera One users who receive developer stream updates, and lets users choose from more than 150 models across over 50 families.
These include Meta's Llama, Google's Gemma, and Vicuna. The feature is available as part of Opera's AI Feature Drops program, which gives users early access to some of the browser's AI features.
The company said it uses the open-source Ollama framework in the browser to run these models on your machine. All available models are currently a subset of Ollama's library, but Opera is looking at including models from different sources in the future.
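To make the Ollama integration concrete, here is a minimal sketch of talking to a locally running Ollama server over its REST API (by default on `http://localhost:11434`). This illustrates Ollama generally, not Opera's internal wiring, and the model name `gemma:2b` is an assumption: substitute any model you have pulled locally (e.g. with `ollama pull gemma:2b`).

```python
import json
import urllib.request


def build_generate_request(model: str, prompt: str) -> dict:
    """Assemble the JSON body Ollama's /api/generate endpoint expects.

    stream=False asks for a single JSON response instead of a token stream.
    """
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str,
             host: str = "http://localhost:11434") -> str:
    """Send a one-shot prompt to a locally running Ollama server."""
    body = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Ollama returns the completed text under the "response" key.
        return json.load(resp)["response"]


# Example usage (requires a running Ollama server and a pulled model):
# print(generate("gemma:2b", "Why run an LLM locally?"))
```

Because everything runs against `localhost`, no prompt text ever leaves the machine, which is the core appeal of local LLMs over hosted chatbots.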
The company says each model variant takes up more than 2GB of space on your local system, so you should keep an eye on free space to avoid running out of storage. Notably, Opera isn't doing any work to save storage as you download models.
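Since Opera doesn't guard against filling your disk, a quick pre-download check is worth doing. The sketch below uses Python's standard library; the 2GB figure comes from Opera's own estimate, and the helper name is purely illustrative.

```python
import shutil

# Opera says each model variant takes up more than 2GB locally,
# so treat 2GB as a floor, not an exact size.
REQUIRED_GB = 2.0


def has_room_for_model(path: str = ".", required_gb: float = REQUIRED_GB) -> bool:
    """Return True if the filesystem holding `path` has at least
    `required_gb` gigabytes free."""
    free_gb = shutil.disk_usage(path).free / 1e9
    return free_gb >= required_gb


# Example usage:
# if not has_room_for_model("/"):
#     print("Free up space before downloading another model.")
```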
“Opera is the first to offer access to a variety of third-party local LLMs directly in the browser. We expect that their size may shrink as they become increasingly specialized for the task at hand,” Opera Vice President Jan Standal told TechCrunch in a statement.
This feature is useful if you plan to test different models locally, but if you'd rather save space, there are plenty of online tools for exploring different models, such as Quora's Poe and HuggingChat.
Opera has been experimenting with AI-powered features since last year. The company launched an assistant called Aria in the sidebar last May and brought it to iOS in August. In January, Opera announced that it is building an AI-powered browser with its own engine for iOS, after the EU's Digital Markets Act (DMA) required Apple to drop its mandatory WebKit engine requirement for mobile browsers.