As companies increasingly integrate artificial intelligence into their workflows and products, demand is growing for tools and platforms that make it easier to create, test, and deploy machine learning models. This category of platforms, commonly known as Machine Learning Operations (MLOps), is already somewhat crowded with startups like InfuseAI, Comet, Arrikto, Arize, Galileo, Tecton, and Diveplane, as well as offerings from incumbents like Google Cloud, Azure, and AWS.
Now, one Korean MLOps platform called VESSL AI is carving out a niche of its own by focusing on optimizing GPU costs with a hybrid infrastructure that combines on-premises and cloud environments. The startup has raised $12 million in a Series A funding round to accelerate the development of its infrastructure, aimed at companies that want to develop custom large language models (LLMs) and vertical AI agents.
The company already has 50 enterprise customers, including major companies such as Hyundai; LIG Nex1, a Korean aerospace and weapons manufacturer; TMAP Mobility, a mobility-as-a-service joint venture between Uber and South Korean telecommunications company SK Telecom; and tech startups Yanolja, Upstage, ScatterLab, and Wrtn.ai. The company also has strategic partnerships with Oracle and Google Cloud in the U.S. and more than 2,000 users, co-founder and CEO Jaeman Kuss An told TechCrunch.
An founded the startup in 2020 with Jihwan Jay Chun (CTO), Intae Ryuo (CPO), and Yongseon Sean Lee (technical director). The founders previously worked at Google, mobile gaming company PUBG, and several AI startups. A specific challenge An faced while developing machine learning models at a previous medical technology startup was the sheer amount of work involved in developing and operating machine learning tools.
The team found that a hybrid infrastructure model could make these processes more efficient and, most importantly, lower costs. The company's MLOps platform uses a multi-cloud strategy and spot instances to cut GPU costs by as much as 80%, An said, adding that this approach also mitigates GPU shortages and streamlines the training, deployment, and operation of AI models, including large-scale LLMs.
“VESSL AI's multi-cloud strategy allows us to use GPUs from various cloud service providers such as AWS, Google Cloud, and Lambda,” said An. “The system automatically selects the most cost-effective and efficient resources, significantly reducing costs for our customers.”
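The selection logic An describes can be illustrated with a minimal sketch. This is a hypothetical example, not VESSL AI's actual implementation: given current spot-price quotes from several providers, pick the cheapest offer that satisfies a job's GPU requirements. All names (`GpuOffer`, `cheapest_offer`, the prices) are invented for illustration.

```python
# Hypothetical sketch of cost-based GPU selection across clouds.
# Not VESSL AI's code; provider names and prices are illustrative only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GpuOffer:
    provider: str          # e.g. "aws", "gcp", "lambda"
    gpu_type: str          # e.g. "A100"
    gpu_count: int         # GPUs available in this offer
    price_per_hour: float  # current spot price in USD

def cheapest_offer(offers: list[GpuOffer], gpu_type: str,
                   min_gpus: int) -> Optional[GpuOffer]:
    """Return the lowest-priced offer matching the requested GPU type and count."""
    eligible = [o for o in offers
                if o.gpu_type == gpu_type and o.gpu_count >= min_gpus]
    if not eligible:
        return None
    return min(eligible, key=lambda o: o.price_per_hour)

# Example quotes (made up) for an 8-GPU A100 training job:
quotes = [
    GpuOffer("aws", "A100", 8, 32.8),
    GpuOffer("gcp", "A100", 8, 29.4),
    GpuOffer("lambda", "A100", 4, 10.2),  # cheapest, but too few GPUs
]
best = cheapest_offer(quotes, "A100", min_gpus=8)
```

In this toy run the 8-GPU "gcp" offer wins: the "lambda" quote is cheaper per hour but cannot satisfy the job's GPU count. A production scheduler would also weigh spot-interruption risk, data locality, and quota, which this sketch deliberately omits.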
VESSL's platform offers four main features: VESSL Run, which automates AI model training; VESSL Serve, which supports real-time deployment; VESSL Pipelines, which streamlines workflows by integrating model training and data preprocessing; and VESSL Cluster, which optimizes GPU resource usage in cluster environments.
Investors in the Series A round, which brings the company's total funding to $16.8 million, include A Ventures, Ubiquoss Investment, Mirae Asset Securities, Sirius Investment, SJ Investment Partners, Wooshin Venture Investment, and Shinhan Venture Investment. The startup has 35 staff across South Korea and its San Mateo office in the United States.