The business that monetizes the voice ecosystem will lead the voice-first economy. During Google’s I/O developer conference May 7–9, Google previewed a major development in its fight with Amazon to be that leader: the launch of a faster Google Assistant, described as “game changing” by Gartner Research Director Werner Goertz. A Google Assistant that responds more effectively to voice commands is certain to make Google a more appealing utility as people continue to use the voice interface to accomplish everyday tasks. And offering a utility remains Google’s chief strength as a brand. The only question is whether Google will move quickly enough to capitalize on its advantage by making the next-generation Google Assistant widely available.
Google Launches On-Device Speech Recognition
Google announced that Google Assistant, its voice assistant, is getting faster with on-device machine learning. In other words, Google devices running Android will offer a voice interface directly on the device rather than relying on the cloud.
This news might have come as a surprise to people who assume that their conversations with voice assistants are managed solely on their devices. In reality, the software that powers Google Assistant resides in the cloud. Similarly, when you use the Apple Siri voice assistant on your iPhone, Apple relies on the cloud. And so does Amazon when you use the Alexa voice assistant on an Echo smart speaker. By moving the voice assistant software from the cloud to your phone, Google says Google Assistant will deliver answers to voice requests up to 10 times faster. According to Manuel Bronstein, Google’s vice president of product for Google Assistant:
Running on-device, the next generation Assistant can process and understand your requests as you make them, and deliver the answers up to 10 times faster. You can multitask across apps — so creating a calendar invite, finding and sharing a photo with your friends, or dictating an email is faster than ever before. And with Continued Conversation, you can make several requests in a row without having to say “Hey Google” each time.
He also wrote, “This breakthrough enabled us to create a next generation Assistant that processes speech on-device at nearly zero latency, with transcription that happens in real-time, even when you have no network connection.”
The next-generation Google Assistant will become available on Google Pixel phones later in 2019; Google has not yet announced availability beyond the Pixel. Now that Google has taken the wraps off the improved product, it needs to act quickly to make the faster Google Assistant widespread across the Android world while it has first-mover advantage.
Why a Faster Google Assistant Matters
Making voice faster and more responsive is crucial if Google is to lead. Years ago, Google became synonymous with the entire search category because the Google search engine offered (and still offers) a utility: users could type queries and get useful, reasonably accurate answers quickly. Fast-forward to 2019. Google still dominates traditional search. But Google does not lead the voice-first experience as it does traditional search. For example, in the United States, Amazon owns 63 percent of the market for voice-activated smart speakers (although its share is declining). Globally, Amazon and Google are neck and neck in this category.
Google has a strong motivation to overtake Amazon: the number of digital voice assistants in use is expected to more than triple, from 2.5 billion today to 8 billion by 2023. With on-device voice:
Google Can Make Voice More Reliable
Google Assistant has been evaluated as more reliable than Alexa based on the accuracy of its responses. By making Google Assistant faster, Google makes its voice technology even more reliable, building on an existing strength. At Google I/O, Google demonstrated vividly just how useful the faster Google Assistant can be:
As Andy Boxall noted in Digital Trends, “Speed is everything, because with it comes convenience. Without it, there’s only frustration. You can reply to messages now using dictation, but you have to go through a series of steps first, and Assistant can’t always help. Using voice is faster, provided the software is accurate and responsive enough. Google Assistant 2.0 looks like it will achieve this goal, and using our phones for something more than only basic, often-repeated tasks may be about to become a quicker, less screen-intensive process.”
With speed and reliability comes trust. As consumers see just how useful voice can be, they’re going to move beyond using voice for simple things such as checking the weather and take on more complicated tasks such as making purchases — and businesses are eager for that day to come.
Google Can Make Voice a Better Mobile Experience
Google Assistant is available on one billion devices, up from 500 million in May 2018. Why? One big reason: mobile phones powered by Google’s Android operating system include Google Assistant by default. Android has acted as a Trojan horse, putting Google Assistant on mobile phones everywhere. As Manuel Bronstein told The Verge, “The largest footprint right now is on phones. On Android devices, we have a very, very large footprint.” And here Amazon can’t touch Google, whose real rival for leadership of voice on mobile phones is Apple.
Now, mobility means more than using our phones, as evidenced by Amazon, Apple, and Google fighting to embed their voice assistants in automobiles. To that end, at I/O, Google also introduced driving mode, which makes any Android-powered phone using Google Assistant more valuable for driving. As Google announced,
In the car, the Assistant offers a hands-free way to get things done while you’re on the road. Earlier this year we brought the Assistant to navigation in Google Maps, and in the next few weeks, you’ll be able to get help with the Assistant using your voice when you’re driving with Waze.
Today we’re previewing the next evolution of our mobile driving experience with the Assistant’s new driving mode. We want to make sure drivers are able to do everything they need with just voice, so we’ve designed a voice-forward dashboard that brings your most relevant activities — like navigation, messaging, calling and media — front and center. It includes suggestions tailored to you, so if you have a dinner reservation on your calendar, you’ll see directions to the restaurant. Or if you started a podcast at home, you can resume right where you left off from your car. If a call comes in, the Assistant will tell you who’s calling and ask if you want to answer, so you can pick up or decline with just your voice. Assistant’s driving mode will launch automatically when your phone is connected to your car’s bluetooth or just say, “Hey Google, let’s drive,” to get started. Driving mode will be available this summer on Android phones with the Google Assistant.
Now, consider how a faster Google Assistant could help while you’re driving and using your voice for wayfinding, making restaurant reservations, and communicating. It’s easy to see how faster replies matter even more when you’re driving, especially through an unfamiliar area or a city with complicated routes.
“Insanely Powerful, But Can’t Be Used”
As noted, the faster Google Assistant will first launch on Google’s new Pixel phones, which are reportedly the fastest-growing smartphones in the United States. Google has not yet said when availability will extend beyond the Pixel. But Google will need to make the faster on-device Google Assistant available on any Android-powered device to make a real difference. As Yahoo! News wryly noted in a recent headline, “New Google Assistant is insanely powerful, but can’t be used.” It’s hard to believe Google would restrict an on-device Google Assistant to Pixel phones; Google cannot afford to do so. The opportunity is too great, and the stakes are too high, for Google to play conservatively.
What’s the monetary payoff for Google in making Google Assistant faster and smarter? As I discussed earlier this year, being the backbone of voice protects the company’s online advertising business, which accounts for more than 70 percent of Google’s revenue. Google needs to keep giving people reasons to use products such as the Google search engine, Google Maps, and Google Chrome. That’s why in 2018 Google launched Google Duplex, an AI-powered bot that mimics the human voice to book reservations and perform other tasks with businesses. (Google Duplex launched on Pixel phones and is now available on the web.) By keeping people in Google’s ecosystem, Google can continue to deliver audiences to advertisers and learn from audience behavior.
As Amazon’s own advertising products take flight, and with Amazon stealing consumer search traffic from Google, Google is under tremendous pressure to protect and extend its reach in the home and on the go. As we move toward a voice-first world, is Google moving quickly enough?