OpenAI may be close to releasing an AI tool that can take control of your PC and perform actions on your behalf.
Tibor Blaho, a software engineer with a reputation for accurately leaking upcoming AI products, claims to have discovered evidence of OpenAI's long-rumored Operator tool. Publications like Bloomberg have previously reported on Operator, which is described as an “agent” system that can autonomously handle tasks like writing code and booking travel.
According to The Information, OpenAI is targeting January as the release month for Operator. Code discovered by Blaho this weekend lends credence to that report.
According to Blaho, OpenAI's ChatGPT client for macOS now has hidden options to define shortcuts for "Toggle Operator" and "Force Quit Operator." OpenAI has also added references to Operator on its website, though those references aren't yet publicly visible, Blaho said.
Confirmed – ChatGPT macOS desktop app has a hidden option to define shortcuts to “Toggle Operator” and “Force Quit Operator” in the desktop launcher https://t.co/rSFobi4iPN pic.twitter.com/j19YSlexAS
— Tibor Blaho (@btibor91) January 19, 2025
Blaho said OpenAI's site also includes as-yet-unpublished tables comparing Operator's performance to that of other computer-using AI systems. The tables may well be placeholders, but if the numbers are accurate, they suggest Operator isn't 100% reliable at certain tasks.
The OpenAI website already has references to Operator/OpenAI CUA (Computer Use Agent) – "Operator System Card Table," "Operator Research Eval Table," and "Operator Refusal Rate Table" – including comparisons with Claude 3.5 Sonnet computer use, Google Mariner, etc.

(Table preview… pic.twitter.com/OOBgC3ddkU
— Tibor Blaho (@btibor91) January 20, 2025
On OSWorld, a benchmark that tries to mimic a real computer environment, OpenAI's Computer Use Agent (CUA) – presumably the AI model powering Operator – scored 38.1%, beating Anthropic's computer-use model but falling well short of the 72.4% humans score. OpenAI CUA outperforms humans on WebVoyager, which evaluates an AI's ability to navigate and interact with websites. However, the leaked benchmarks show the model falls short of human-level scores on WebArena, another web-based benchmark.
If the leaks are to be believed, Operator also struggles with tasks humans perform easily. In a test that had Operator sign up with a cloud provider and launch a virtual machine, it succeeded only 60% of the time. Tasked with creating a Bitcoin wallet, Operator succeeded just 10% of the time.
OpenAI's imminent entry into the AI agent space comes as rivals, including the aforementioned Anthropic and Google, battle it out in this emerging field. AI agents may be risky and speculative, but tech giants are already touting them as the next big thing in AI. According to analytics firm Markets and Markets, the market for AI agents could be worth $47.1 billion by 2030.
Today's agents are fairly primitive, but some experts have raised concerns about their safety should the technology improve rapidly.
One of the leaked charts shows Operator performing well on selected safety evaluations, including tests that try to get the system to perform "illicit activities" and search for "sensitive personal data." Reportedly, safety testing is one reason for Operator's long development cycle. In a recent X post, OpenAI co-founder Wojciech Zaremba criticized Anthropic for releasing an agent that he claimed lacked safety mitigations.
"I can only imagine the negative reaction if OpenAI made a similar release," Zaremba wrote.
It's worth noting that OpenAI has been criticized by AI researchers, including former staffers, for neglecting safety practices in favor of quickly commercializing its technology.