Researchers at Amazon have trained the largest text-to-speech model yet, which they claim exhibits "emergent" abilities that improve its capacity to speak even complex sentences naturally. The breakthrough could be just what the technology needs to escape the uncanny valley.
These models were always going to grow and improve, but the researchers specifically hoped to see the kind of leap in ability that was observed once language models passed a certain size. For reasons not yet understood, once LLMs grow beyond a certain point, they become far more robust and versatile, able to perform tasks they were not trained for.
That's not to say they're gaining sentience or anything; it's just that past a certain point, their performance on certain conversational AI tasks hockey-sticks upward. The team at Amazon AGI (it's no secret what they're aiming for) thought the same might happen as text-to-speech models grow, and their research suggests that this is indeed the case.
The new model is called Big Adaptive Streamable TTS with Emergent abilities, which they have contorted into the abbreviation BASE TTS. The largest version of the model was trained on 100,000 hours of public domain speech, 90% of it in English and the remainder in German, Dutch, and Spanish.
At 980 million parameters, BASE-large appears to be the largest model in this category. For comparison, the team also trained 400M- and 150M-parameter models on 10,000 and 1,000 hours of audio, respectively. The idea is that if one of these models exhibits emergent behaviors and another doesn't, you have a range for where those behaviors begin to appear.
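As a back-of-the-envelope illustration of that bracketing logic, here is a minimal sketch in Python. The parameter counts and training-hour figures come from the article; `shows_emergent_abilities` is a hypothetical placeholder (the actual study used expert listener evaluations), and the hardcoded result simply mirrors the article's finding that the medium model is where the jump appeared.

```python
# Sketch of the scale-ablation logic described above (not Amazon's code).
# Parameter counts and audio-hour figures are from the article;
# `shows_emergent_abilities` is a hypothetical stand-in for the paper's
# expert evaluation of model outputs.

CONFIGS = [
    ("BASE-small", 150_000_000, 1_000),
    ("BASE-medium", 400_000_000, 10_000),
    ("BASE-large", 980_000_000, 100_000),
]

def shows_emergent_abilities(model_name: str) -> bool:
    """Assumed result for illustration: per the article, the medium
    and large models showed the jump and the small one did not."""
    return model_name != "BASE-small"

# If one scale passes and the next-smaller one fails, the inflection
# point for emergent behavior lies somewhere between the two.
previous = None
for name, params, hours in CONFIGS:
    ok = shows_emergent_abilities(name)
    print(f"{name}: {params / 1e6:.0f}M params, {hours:,} h audio, emergent={ok}")
    if previous is not None and not previous[1] and ok:
        print(f"  -> inflection point lies between {previous[0]} and {name}")
    previous = (name, ok)
```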
As it turned out, the medium-sized model showed the leap in capability the team was looking for: not necessarily in ordinary speech quality (it is rated better, but only by a couple of points), but in the set of emergent abilities they observed and measured. Here are some examples of the tricky texts mentioned in the paper (a rough test-harness sketch follows below):
- Compound nouns: The Beckhams decided to rent a charming stone-built quaint countryside holiday cottage.
- Emotions: "Oh my gosh! Are we really going to the Maldives? That's unbelievable!" Jennie squealed, bouncing on her toes with uncontained glee.
- Foreign words: "Mr. Henri, renowned for his mise en place, orchestrated a seven-course meal, each dish a pièce de résistance."
- Paralinguistics (i.e. readable non-words): "Shh, Lucy, shhh, we mustn't wake your baby brother," Tom whispered, as they tiptoed past the nursery.
- Punctuations: She received an odd text from her brother: "Emergency @ home; call ASAP! Mom & Dad are worried… #familymatters."
- Questions: But the Brexit question remains: after all the trials and tribulations, will the ministers find the answers in time?
- Syntactic complexities: The movie that De Moya who was recently awarded the lifetime achievement award starred in 2022 was a box office hit, despite the mixed reviews.
"These sentences are designed to contain challenging tasks: parsing garden-path sentences, placing phrasal stress on long-winded compound nouns, producing emotional or whispered speech, or producing the correct phonemes for foreign words like 'qi' or punctuations like '@', none of which BASE TTS is explicitly trained to perform," the authors write.
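To make the setup concrete, here is a minimal sketch of how such an evaluation harness might be organized. The category names and example sentences come from the paper, but `synthesize_speech` and the review step are assumed placeholders; Amazon has not released the model or its code.

```python
# Hypothetical sketch of an emergent-abilities test harness (not the paper's code).
# Categories and sentences are from the BASE TTS paper; `synthesize_speech`
# is an assumed placeholder for whatever TTS model is under evaluation.

TRICKY_TEXTS = {
    "compound_nouns": "The Beckhams decided to rent a charming stone-built "
                      "quaint countryside holiday cottage.",
    "foreign_words": "Mr. Henri, renowned for his mise en place, orchestrated "
                     "a seven-course meal, each dish a pièce de résistance.",
    "punctuations": "She received an odd text from her brother: "
                    "'Emergency @ home; call ASAP! Mom & Dad are worried… #familymatters.'",
    # ... plus emotions, paralinguistics, questions, and syntactic complexities
}

def synthesize_speech(text: str) -> bytes:
    """Placeholder for the model under test; returns raw audio."""
    raise NotImplementedError

def run_eval() -> None:
    for category, sentence in TRICKY_TEXTS.items():
        audio = synthesize_speech(sentence)
        # In the paper, linguistic experts score each clip per category
        # (correct phonemes, phrasal stress, whispering, and so on);
        # automated metrics alone would miss most of these behaviors.
        print(f"{category}: {len(audio)} bytes of audio to review")
```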
Features like these typically trip up text-to-speech engines, which mispronounce, skip words, use odd intonation, or make other gaffes. BASE TTS still had its problems, but it performed far better than contemporaries such as Tortoise and VALL-E.
There are plenty of examples of these difficult texts being spoken quite naturally on the site the researchers created for the model. Of course, these were chosen by the researchers, so they are necessarily cherry-picked, but they are impressive nonetheless.
Because the three BASE TTS models share an architecture, it seems clear that the model's size and the extent of its training data account for its ability to handle some of the complexities above. Bear in mind that this is still an experimental model and process, not a commercial release. Later research will have to identify the inflection point for emergent abilities and how to train and deploy the resulting models efficiently.
Notably, the model is "streamable," as the name suggests: it doesn't have to generate an entire sentence at once, but can produce audio moment by moment at a relatively low bitrate. The team also tried packaging speech metadata, such as emotionality and prosody, into a separate low-bandwidth stream that could accompany the vanilla audio.
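As a rough illustration of what that split might look like in practice, here is a sketch of incremental synthesis with a metadata side-channel. The chunk sizes, field names, and generator interface are all assumptions for illustration; the paper does not describe a public API.

```python
# Illustrative sketch of streaming TTS with a separate metadata side-channel.
# Chunking, field names, and the generator API are assumed for illustration.

from dataclasses import dataclass
from typing import Iterator, Tuple

@dataclass
class ProsodyFrame:
    """Low-bandwidth metadata accompanying each audio chunk."""
    emotion: str        # e.g. "neutral", "excited", "whisper"
    pitch_shift: float  # relative pitch adjustment
    rate: float         # speaking-rate multiplier

def stream_tts(text: str) -> Iterator[Tuple[bytes, ProsodyFrame]]:
    """Yield (audio_chunk, metadata) pairs incrementally, so playback
    can begin before the whole sentence has been generated."""
    for word in text.split():
        audio_chunk = b"\x00" * 320  # placeholder: ~20 ms of 8 kHz 16-bit audio
        meta = ProsodyFrame(emotion="neutral", pitch_shift=0.0, rate=1.0)
        yield audio_chunk, meta

# Playback loop: consume chunks as they arrive instead of waiting
# for the full utterance to be synthesized.
for chunk, meta in stream_tts("Shh, Lucy, shhh, we mustn't wake your baby brother."):
    pass  # hand `chunk` to an audio device; use `meta` to adjust rendering
```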
It looks like text-to-speech models could have a breakout moment in 2024, just in time for the election. There's no denying the usefulness of this technology, though, especially for accessibility. The team notes that it declined to publish the model's source and other data, citing the risk that malicious parties could take advantage of it. The cat will come out of that bag eventually, though.