AWS, Amazon's cloud computing business, wants to be the go-to place for companies to host and fine-tune their custom generative AI models.
Today, AWS announced the launch of Custom Model Import (in preview), a new feature in Bedrock, AWS' suite of enterprise generative AI services, that lets organizations import their in-house generative AI models into Bedrock and access them there as fully managed APIs.
Once imported, company-specific models benefit from the same infrastructure as the other generative AI models in Bedrock's library (e.g. Meta's Llama 3, Anthropic's Claude 3), including tools to extend their knowledge through fine-tuning and safeguards to reduce bias.
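AWS hasn't published the final shape of the preview API, but based on the announcement, kicking off an import from the AWS SDK might look something like the minimal Python (boto3) sketch below. The job name, role, bucket, and exact parameters here are placeholder assumptions for illustration, not documented values.

```python
import boto3

# Control-plane Bedrock client, used for model management.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Start an import job pointing at fine-tuned model weights staged in S3.
# All names and ARNs below are placeholders.
job = bedrock.create_model_import_job(
    jobName="my-llama-import",
    importedModelName="my-fine-tuned-llama",
    roleArn="arn:aws:iam::123456789012:role/BedrockModelImportRole",
    modelDataSource={
        "s3DataSource": {"s3Uri": "s3://my-bucket/fine-tuned-llama/"}
    },
)

# Poll the job; once it completes, the imported model can be invoked
# through Bedrock like any other model in its library.
status = bedrock.get_model_import_job(jobIdentifier=job["jobArn"])
print(status["status"])
```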
“Some AWS customers fine-tune and build their own models outside of Bedrock using other tools,” Vasi Philomin, vice president of generative AI at AWS, told TechCrunch in an interview. “This Custom Model Import feature lets them bring those models into Bedrock, see them right next to all of the other models already on Bedrock, and use them with any workflow that's already on Bedrock.”
Importing a custom model
According to a recent poll from Cnvrg, Intel's AI-focused subsidiary, the majority of companies are tackling generative AI by building their own models and refining them for their applications. Yet those same companies say infrastructure, including cloud compute infrastructure, is their biggest barrier to deployment.
With Custom Model Import, AWS aims to keep pace with its cloud rivals and meet that demand. (Amazon CEO Andy Jassy foreshadowed as much in his recent annual letter to shareholders.)
Google's Vertex AI, an analogue to Bedrock, has for some time let customers upload generative AI models, tune them, and serve them through an API. Databricks, too, has long provided toolsets for hosting and fine-tuning custom models, including its own recently released DBRX.
When asked what makes Custom Model Import unique, Philomin claimed that it, and by extension Bedrock, offers broader and deeper model customization options than its competitors, adding that “tens of thousands” of customers currently use Bedrock.
“First, Bedrock gives customers a number of ways to handle serving their models,” Philomin said. “Second, we have a whole set of workflows around these models, and customers can now place their own right next to all of the other models we already offer. The key thing is that they can use the same workflow to experiment with multiple different models and actually take them to production from the same place.”
So what are those model customization options?
Philomin points to Guardrails, which lets Bedrock users configure filters, or at least attempt to filter, a model's outputs for things like hate speech, violence, and private personal or corporate information. (Generative AI models are notorious for going off the rails in problematic ways, including leaking sensitive information, and AWS' models are no exception.) He also highlighted Model Evaluation, a Bedrock tool customers can use to test how well one or more models perform against a given set of criteria.
Both Guardrails and Model Evaluation are now generally available after several months in preview.
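To make the Guardrails idea concrete, here's a minimal sketch of invoking a Bedrock-hosted model with a guardrail attached, assuming the boto3 bedrock-runtime client's guardrail parameters; the guardrail ARN and version are placeholders for ones you'd create yourself.

```python
import json
import boto3

# Data-plane Bedrock client, used for model invocation.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Invoke a model with a Guardrail filtering the request and response.
# The guardrail ARN and version below are placeholders.
response = runtime.invoke_model(
    modelId="meta.llama3-8b-instruct-v1:0",
    guardrailIdentifier="arn:aws:bedrock:us-east-1:123456789012:guardrail/abc123",
    guardrailVersion="1",
    body=json.dumps({
        "prompt": "Summarize our Q3 earnings call.",
        "max_gen_len": 512,
    }),
)
print(json.loads(response["body"].read()))
```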
It's worth noting that Custom Model Import currently supports only three model architectures (Hugging Face's Flan-T5, Meta's Llama, and Mistral's models), and that rivals such as Vertex AI and Microsoft's AI development tools on Azure offer more or less equivalent safety and evaluation capabilities (see Azure AI Content Safety, model evaluation in Vertex, and so on).
Unique to Bedrock, however, is AWS' Titan family of generative AI models. And with the release of Custom Model Import come some notable developments on that front.
Upgraded Titan models
Titan Image Generator, AWS' text-to-image model, is now generally available after launching in preview last November. As before, Titan Image Generator can create new images from a text description or customize existing ones, for example by replacing an image's background while preserving the subject of the image.
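For the curious, a text-to-image call looks roughly like the sketch below, assuming the GA release keeps the preview's request shape and the amazon.titan-image-generator-v1 model ID; the prompt and output path are just examples.

```python
import base64
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Ask Titan Image Generator for a single 1024x1024 image from a prompt.
response = runtime.invoke_model(
    modelId="amazon.titan-image-generator-v1",
    body=json.dumps({
        "taskType": "TEXT_IMAGE",
        "textToImageParams": {"text": "a lighthouse at dusk, oil painting"},
        "imageGenerationConfig": {
            "numberOfImages": 1,
            "width": 1024,
            "height": 1024,
        },
    }),
)

# The response body carries base64-encoded image data.
payload = json.loads(response["body"].read())
with open("lighthouse.png", "wb") as f:
    f.write(base64.b64decode(payload["images"][0]))
```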
Without going into detail, Philomin said the GA version of Titan Image Generator can generate more “creative” images than the preview could. (Your guess is as good as mine as to what that means.)
I asked Philomin if he could share more details about how the Titan Image Generator was trained.
At the time of the model's debut last November, AWS was vague about exactly which data it used to train Titan Image Generator. Few vendors readily release such information; they view training data as a competitive advantage and so keep it close to the chest.
Training data details are also a potential source of IP-related lawsuits, another disincentive to reveal much. Several cases making their way through the courts reject vendors' fair use defenses, arguing that text-to-image tools can replicate an artist's style without the artist's explicit permission and let users generate new works resembling the artist's originals without the artist being paid.
Philomin would only tell me that AWS uses a combination of first-party and licensed data.
“We have a combination of proprietary data sources, but we also license a lot of data,” he said. “We actually pay copyright holders license fees in order to use their data, and we have agreements with several of them.”
That's more detail than AWS gave in November. But I have a feeling Philomin's answer won't satisfy everyone, particularly the content creators and AI ethicists advocating for greater transparency around how generative AI models are trained.
In lieu of transparency, AWS says it will continue to offer an indemnification policy that covers customers in the event a Titan model like Titan Image Generator regurgitates (i.e. spits out a mirrored copy of) a potentially copyrighted training example. (Several rivals, including Microsoft and Google, offer similar policies covering their image generation models.)
To address another pressing ethical threat, deepfakes, AWS says images created with Titan Image Generator will, as during the preview, carry an invisible “tamper-proof” watermark. Philomin said the GA release makes the watermark more resistant to compression and other image edits and manipulations.
Moving into less controversial territory, I asked Philomin whether AWS, like Google, OpenAI, and others, is exploring video generation given the excitement around (and investment in) the technology. Philomin didn't say AWS isn't, but he wouldn't hint at anything more.
“Obviously, we're constantly looking at what new capabilities customers want, and video generation certainly comes up in conversations with customers,” Philomin said. “Stay tuned.”
In final Titan-related news, AWS has released the second generation of its Titan Embeddings model, Titan Text Embeddings V2, which converts text into numerical representations called embeddings to power search and personalization applications. The first-generation Embeddings model did the same, but AWS claims Titan Text Embeddings V2 is overall more efficient, cost-effective, and accurate.
“What the Embeddings V2 model does is reduce the overall storage [necessary to use the model] by up to 4x while retaining 97% of the accuracy,” Philomin claimed, “outperforming other comparable models.”
We'll see if real tests bear that out.
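For anyone who wants to run those tests themselves, an embeddings call might look like the sketch below, assuming the boto3 bedrock-runtime client and the amazon.titan-embed-text-v2:0 model ID; the reduced dimensions parameter is presumably where the storage savings Philomin describes would come from.

```python
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Embed a piece of text with Titan Text Embeddings V2, requesting a
# smaller vector than the default to save on storage.
response = runtime.invoke_model(
    modelId="amazon.titan-embed-text-v2:0",
    body=json.dumps({
        "inputText": "Custom Model Import is now in preview on Bedrock.",
        "dimensions": 256,  # smaller vectors mean less storage per document
        "normalize": True,
    }),
)

embedding = json.loads(response["body"].read())["embedding"]
print(len(embedding))  # 256
```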