Deepgram has made a name for itself as one of the go-to startups for voice recognition. Today, the well-funded company announced the launch of Aura, its new real-time text-to-speech API. Aura combines highly realistic voice models with a low-latency API to let developers build real-time, conversational AI agents. Backed by large language models (LLMs), these agents can then stand in for customer service agents in call centers and other customer-facing situations.
As Deepgram co-founder and CEO Scott Stephenson told me, it has long been possible to get access to great voice models, but these were expensive and took a long time to compute. Low-latency models, meanwhile, tend to sound robotic. Deepgram's Aura combines human-like voice models that render extremely fast (typically in well under half a second) and, as Stephenson noted repeatedly, does so at a low price.
"Everyone now is like: 'hey, we need real-time voice AI bots that can hear what's being said, that can understand and generate a response, and then speak back,'" he said. In his view, it takes a combination of accuracy (which he described as table stakes for a service like this), low latency and acceptable cost to make a product like this worthwhile for businesses, especially when combined with the relatively high cost of accessing LLMs.
Deepgram argues that Aura's pricing, at $0.015 per 1,000 characters, currently beats nearly all of its competitors. That's not far off Google's pricing for its WaveNet voices ($0.016 per 1,000 characters) or Amazon Polly's Neural voices at the same $0.016 per 1,000 characters, but, granted, it is cheaper. Amazon's highest tier, though, is significantly more expensive.
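To put those per-character rates in perspective, here is a quick back-of-the-envelope comparison using the prices quoted above; the monthly character volume is a made-up example figure, not anything from Deepgram.

```python
# TTS cost comparison using the per-1,000-character rates quoted above.
RATES_PER_1K_CHARS = {
    "Deepgram Aura": 0.015,
    "Google WaveNet": 0.016,
    "Amazon Polly Neural": 0.016,
}

# Hypothetical workload: 10 million characters of synthesized speech per month.
monthly_chars = 10_000_000

for provider, rate in RATES_PER_1K_CHARS.items():
    cost = monthly_chars / 1_000 * rate
    print(f"{provider}: ${cost:,.2f}/month")
```

At that volume the $0.001 difference per 1,000 characters works out to $10 a month, so the pricing edge matters mostly at much larger scale.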
"You have to hit a really good price point across all [segments], but then you also have to have amazing latencies, speed, and then amazing accuracy as well. So it's a really hard thing to hit," Stephenson said about Deepgram's general approach to building its product. "But this is what we focused on from the start, and this is why we built for four years before we launched anything, because we were building the underlying infrastructure to make that real."
Aura offers around a dozen voice models at this point, all of which were trained on a dataset Deepgram created together with voice actors. The Aura model, just like all of the company's other models, was trained in-house. Here's what that sounds like:
You can try a demo of Aura here. I've been testing it for a bit, and even though you'll occasionally come across some odd pronunciations, the speed is really what stands out, in addition to Deepgram's existing high-quality speech-to-text model. To highlight the speed at which it generates responses, Deepgram displays the time it took the model to start speaking (typically less than 0.3 seconds) and how long it took the LLM to finish generating its response (which is typically just under a second).
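That "time to start speaking" figure can be measured the same way on the client side: start a timer when the request goes out and stop it when the first audio chunk arrives. A minimal sketch of the idea, with the streaming response simulated locally (a real client would iterate over an HTTP response stream instead of this stand-in generator):

```python
import time
from typing import Iterable, Iterator


def time_to_first_chunk(chunks: Iterable[bytes]) -> tuple[float, bytes]:
    """Return (seconds until the first chunk arrived, that first chunk)."""
    start = time.perf_counter()
    first = next(iter(chunks))
    return time.perf_counter() - start, first


def simulated_tts_stream(delay_s: float = 0.05) -> Iterator[bytes]:
    # Stand-in for a streaming TTS response; the sleep mimics model
    # plus network latency before the first audio bytes arrive.
    time.sleep(delay_s)
    yield b"\x00" * 320  # fake chunk of PCM audio
    yield b"\x00" * 320


latency, chunk = time_to_first_chunk(simulated_tts_stream())
print(f"time to first audio: {latency * 1000:.0f} ms")
```

Time-to-first-chunk is the number that matters for conversational agents, since playback can begin as soon as the first audio bytes land rather than after the full response is synthesized.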