There have been many attempts at open source AI-powered voice assistants (see Rhasspy, Mycroft, and Jasper, to name a few), all established with the goal of delivering a private, offline experience without compromising on functionality. But development has proven to be exceptionally slow. That's because, in addition to the usual challenges associated with open source projects, programming an assistant is extremely difficult. Technologies like Google Assistant, Siri, and Alexa have years, if not decades, of research and development behind them, and enormous infrastructure to boot.
But that hasn't deterred the folks at the Large-scale Artificial Intelligence Open Network (LAION), the German nonprofit responsible for maintaining some of the world's most popular AI training data sets. This month, LAION announced BUD-E, a new initiative that aims to build a "fully open" voice assistant capable of running on consumer hardware.
Why launch an entirely new voice assistant project when there are countless others in various states of abandonment? Wieland Brendel, a fellow at the ELLIS Institute and a contributor to BUD-E, believes that no open assistant exists with an architecture extensible enough to take full advantage of emerging GenAI technologies, especially large language models (LLMs) along the lines of OpenAI's ChatGPT.
"Most interactions with [assistants] rely on chat interfaces that are quite tedious to interact with, [and] conversations with these systems feel stilted and unnatural," Brendel told TechCrunch in an email interview. "Those systems are fine for conveying commands to control music or switch on lights, but they're not a basis for long, engaging conversations. The goal of BUD-E is to provide the foundation for a voice assistant that feels natural, mimics the speech patterns of human conversation, and remembers past conversations."
Brendel added that LAION also wants every component of BUD-E to eventually be integrable with apps and services license-free, even commercially, which isn't necessarily the case with other open assistant efforts.
BUD-E (a recursive acronym for "Buddy for Understanding and Digital Empathy"), a collaboration between the ELLIS Institute in Tübingen, the technology consultancy Collabora, and the Tübingen AI Center, has an ambitious road map. In a blog post, the LAION team laid out what they hope to accomplish in the months ahead, chiefly building "emotional intelligence" into BUD-E and ensuring that it can handle conversations involving multiple speakers at once.
"There's a huge need for a well-working natural voice assistant," Brendel said. "LAION has shown in the past that it excels at building communities, and the ELLIS Institute Tübingen and the Tübingen AI Center are committed to providing the resources to develop the assistant."
BUD-E is up and running. You can download and install it today from GitHub on an Ubuntu or Windows PC (macOS support is on the way). But it's very clearly in the early stages.
LAION assembled the MVP by stitching together several open models, including Microsoft's Phi-2 LLM, Columbia's StyleTTS2 for speech synthesis, and Nvidia's FastConformer for speech recognition. As such, the experience is a bit unoptimized. Getting BUD-E to respond to commands within about 500 milliseconds, in the range of commercial voice assistants like Google Assistant and Alexa, requires a beefy GPU like Nvidia's RTX 4090.
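As a rough illustration of why latency is the hard part, a voice-assistant turn of this kind chains three stages, speech recognition, LLM response generation, and speech synthesis, and the roughly 500-millisecond target applies to the whole chain. The sketch below is not BUD-E's actual code; every function here is a hypothetical stub standing in for the real models named above.

```python
import time

# Hypothetical stand-ins for the three stages of the MVP pipeline.
# None of these are real BUD-E APIs; each stage is stubbed for illustration.

def transcribe(audio: bytes) -> str:
    """Speech recognition stage (FastConformer in the MVP)."""
    return "turn on the lights"

def generate_reply(text: str, history: list) -> str:
    """Language model stage (Phi-2 in the MVP); history enables memory."""
    history.append(text)
    return "Okay, turning on the lights."

def synthesize(text: str) -> bytes:
    """Text-to-speech stage (StyleTTS2 in the MVP)."""
    return text.encode("utf-8")

def handle_turn(audio: bytes, history: list) -> tuple:
    """Run one user turn through all three stages and measure total latency."""
    start = time.perf_counter()
    text = transcribe(audio)
    reply = generate_reply(text, history)
    speech = synthesize(reply)
    latency_ms = (time.perf_counter() - start) * 1000
    return speech, latency_ms
```

Because the stages run back to back, every model in the chain has to be fast for the end-to-end budget to hold, which is why the current build leans on a high-end GPU.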
Collabora is working pro bono to adapt its open source speech recognition and text-to-speech models, WhisperLive and WhisperSpeech, to BUD-E.
Jakub Piotr Cłapa, an AI researcher at Collabora and a member of the BUD-E team, said in an email: "By building the text-to-speech and speech recognition solutions ourselves, we can customize them to an extent that isn't possible with closed models exposed through APIs. Collabora originally started working on [open assistants] in part because we had difficulty finding a good text-to-speech solution for an LLM-based voice agent for one of our customers. We decided to join forces with the broader open source community to make our models more widely accessible and useful."
In the near term, LAION says it will work to lower BUD-E's hardware requirements and reduce the assistant's latency. A longer-horizon effort is building a data set of dialogs to fine-tune BUD-E, along with a memory mechanism that lets BUD-E store information from previous conversations and a speech processing pipeline that can keep track of conversations among several people at once.
I asked the team whether accessibility was a priority, given that speech recognition systems historically haven't performed well with languages other than English and accents other than transatlantic ones. One Stanford study found that speech recognition systems from Amazon, IBM, Google, Microsoft, and Apple were almost twice as likely to mishear Black speakers as white speakers of the same age and gender.
Brendel said that LAION isn't ignoring accessibility, but that it's not the "immediate focus" of BUD-E.
“The initial focus is to really redefine the experience of how you interact with voice assistants before generalizing that experience to a wider variety of accents and languages,” Brendel said.
To that end, LAION has some pretty out-there ideas for BUD-E, ranging from an animated avatar to anthropomorphize the assistant to support for analyzing users' faces via webcam to account for their emotional state.
That last bit, the facial analysis, is, needless to say, a little ethically dicey. But Robert Kaczmarczyk, a LAION co-founder, stressed that LAION remains committed to safety.
"[We] adhere strictly to the safety and ethical guidelines formulated by the EU AI Act," he told TechCrunch via email, referring to the legal framework governing the sale and use of AI in the EU. The EU AI Act allows European Union member states to adopt more restrictive rules and safeguards for "high-risk" AI, including emotion classifiers.
"This commitment to transparency not only facilitates the early detection and correction of potential biases, but also serves the cause of scientific integrity," Kaczmarczyk added. "By providing access to our datasets, we enable the broader scientific community to participate in research that maintains the highest standards of reproducibility."
LAION's previous work hasn't been ethically spotless, and the organization is currently pursuing a separate, somewhat controversial project on emotion detection. But perhaps BUD-E will be different. We'll have to wait and see.