Google announced a slate of new behind-the-scenes improvements coming to Google Assistant today to help it better understand natural human language.
While smart speakers are undeniably helpful, users often have to phrase requests in very specific ways to be understood. That takes time to get used to and, for some people, makes the devices aggravating to use.
Google is addressing this with three new Assistant features.
Teach Google your friends' names
The first feature lets users teach Assistant how to pronounce the names of their contacts in case the virtual assistant doesn't get them right. This makes it easier to speak to Google naturally when you ask it to call or text that contact.
The option lives in the edit section of the contacts app, and it's only available in English.
Smarter timers

Since timers are such a popular feature on Google Assistant devices, Google is making them more natural to set. If you stumble over your words or set multiple timers at once, the Assistant should get better at recognizing what you mean.
To make this happen, the team rebuilt parts of Assistant with BERT, the language model Google used to improve Search in 2018. BERT's key advantage is that it processes words in relation to one another rather than one by one, in order.
So far, this is only going to apply to alarms and timers, but Google says it’s working to bring the functionality to more aspects of the virtual assistant.
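To see why that matters for a stumbled-over timer request, here is a toy sketch (an illustration only, not Google's code) of the difference between the context a strictly left-to-right model sees at a given word and the context a BERT-style bidirectional model sees:

```python
# Toy illustration: a left-to-right model reading "five" in a corrected
# utterance only sees what came before it; a bidirectional model sees the
# whole utterance, including the "minutes" that follows.

def left_to_right_context(tokens, i):
    """Context visible to a purely sequential model at position i."""
    return tokens[:i]

def bidirectional_context(tokens, i):
    """Context visible to a BERT-style bidirectional model at position i."""
    return tokens[:i] + tokens[i + 1:]

# A stumbled request: the user corrects "ten" to "five" mid-sentence.
utterance = ["set", "a", "timer", "for", "uh", "ten", "no", "five", "minutes"]
i = utterance.index("five")

print(left_to_right_context(utterance, i))
# ['set', 'a', 'timer', 'for', 'uh', 'ten', 'no']
print(bidirectional_context(utterance, i))
# ['set', 'a', 'timer', 'for', 'uh', 'ten', 'no', 'minutes']
```

Seeing both sides of each word is what lets a BERT-style model work out that "five minutes", not "ten", is the duration the user actually wants.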
More natural conversations with Google on mobile
The final addition to Assistant lives on your phone. Using the BERT model, Google Assistant can take what's on your screen into account and respond to it naturally. In Google's example, someone asks about Miami, then asks about nice beaches.

Because Assistant is using BERT, it knows the previous question referenced Miami, so it expects the follow-up to relate to the same location.
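Conceptually, that kind of context carryover can be sketched as a rewrite step that attaches the last-mentioned place to a follow-up question. This is a toy illustration under invented names (`KNOWN_PLACES`, `Conversation`), not Google's actual implementation:

```python
# Toy sketch of conversational context carryover: remember the last place
# the user mentioned and fold it into follow-up questions that name none.

KNOWN_PLACES = {"Miami", "Orlando"}

class Conversation:
    def __init__(self):
        self.last_place = None  # most recently mentioned location

    def resolve(self, query):
        mentioned = [p for p in KNOWN_PLACES if p in query]
        if mentioned:
            # The query names a place explicitly; remember it.
            self.last_place = mentioned[0]
            return query
        if self.last_place:
            # No place named: assume the follow-up refers to the last one.
            return f"{query.rstrip('?')} in {self.last_place}?"
        return query

c = Conversation()
c.resolve("Tell me about Miami")
print(c.resolve("Are there any nice beaches?"))
# Are there any nice beaches in Miami?
```

A real system resolves references statistically with the model rather than with a hand-written lookup like this, but the effect is the same: the follow-up inherits the location from the earlier turn.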