|Photo by: David Nagle via Wikimedia Commons|
Alphabet Inc.-owned Google has launched the next-generation Google Assistant at I/O 2019, the tech giant's annual developer conference. It features an on-device speech recognition model that eliminates the need to upload voice samples to cloud systems.
TechRepublic reports that Google is moving voice recognition technology from the cloud onto the edge with the new Google Assistant. The new smart assistant features a compact machine learning library: the tech giant said it has shrunk the underlying speech models from 100 gigabytes of data to less than half a gigabyte.
"The souped-up digital helper requires hefty computing power for a phone, so it will only be available on high-end devices," tech media website CNET notes.
CNET adds that Google will launch the new Google Assistant on the upcoming premium version of its flagship Pixel phone, which is expected in the fall.
The search giant is also expanding its Edge Machine Learning capabilities with betas of the On-device Translation API, an Object Detection and Tracking API, and AutoML Vision Edge—all of which were also revealed at the I/O 2019 conference.
While the technology behind the new Google Assistant is not yet available to third-party developers, that doesn't mean they can't take advantage of on-device voice recognition.
According to TechRepublic, French software firm Snips produces the Snips platform. The platform is free for non-commercial use and requires an order of magnitude less processing power, as evidenced by its ability to run on a Raspberry Pi 3. The Snips platform also doesn't need an internet connection to run, except for certain integrations.
"The main differentiator of the Snips platform is that it focuses on all the components required to build high-quality voice interfaces: Wake word detection, Speech Recognition, and Natural Language Understanding," said Joseph Dureau, the platform's CTO.
He said none of these voice processing algorithms are found in Google's ML Kit, and that Snips' data generation solutions make it possible to generate great volumes of "diverse and high-quality training data for any voice interface use case."
"It enables developers to train their assistants with [a] very high performance before their actual launch, helping them to overcome the cold start problem," Dureau added.
The potential for developers to use this technology in their own applications could ease some of the worries of individuals having second thoughts about adopting voice-activated smart assistants.