Google Developer Days 2017 was held in Krakow on 5 and 6 September, where Google announced the latest news for Google Assistant developers. This post aggregates all that information for you and explains how to build your perfect Action.
If you trigger the Assistant today - be it on Home or on your phone - and ask it a question, the answer is built from the global knowledge Google has of the world. This means Google favours the path most likely to give any user the correct answer. Starting - likely - next year, Google will leverage your previous queries when building subsequent answers, making earlier context more likely to carry over into the answers that follow.
Google Assistant on Android already does this, albeit partially, when you fire it up from any screen - let's say, a music player - and ask it questions like:
when did it come out?
This is possible because the currently playing song, artist, and album provide the context.
If you have ever written a Prolog program, you may have used the assert meta-predicate to teach your program new rules dynamically. Google will roll out a very similar feature that will accept instructions such as:
when it's raining I can not go to work walking
These kinds of instructions effectively teach your personal Assistant facts about your lifestyle that it could not infer on its own. Technically, it will generate new internal rules so that it can answer queries such as:
can I walk to work tomorrow morning?
Small as it may seem, I think this is the biggest takeaway of all the Google Assistant news: users will now be able to extend the power of their Assistant without any hard-coded path from Google (as is the case for the weather) or from developers (as is currently the case with custom Actions for the Assistant).
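To make the idea concrete, here is a purely illustrative sketch of how a user-taught rule could be stored and queried. Nothing here reflects Google's actual internals; `teach` and `canWalkToWork` are hypothetical names, and the rule store is just an array:

```javascript
// Purely illustrative: a tiny rule store mimicking user-taught rules.
const rules = [];

// "when it's raining I can not go to work walking" becomes a rule
// pairing a condition on known facts with a conclusion.
function teach(condition, conclusion) {
  rules.push({ condition, conclusion });
}

// Answer a query by checking the taught rules against current facts.
function canWalkToWork(facts) {
  for (const rule of rules) {
    if (rule.condition(facts) && rule.conclusion === 'cannot-walk-to-work') {
      return false;
    }
  }
  return true;
}

teach((facts) => facts.weather === 'raining', 'cannot-walk-to-work');

console.log(canWalkToWork({ weather: 'raining' })); // false
console.log(canWalkToWork({ weather: 'sunny' }));   // true
```

In a real system the facts (tomorrow morning's forecast) would come from the Assistant's existing knowledge, while the rules come from the user - which is exactly what makes each Assistant unique.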
Since the Google Assistant is planned to be available on Google Home, Android, Android Wear, Android TV, and Android Auto, any Action you, as a developer, create for it should support different means of interaction: taps on screen, speech, text, and visuals. The Assistant ecosystem supports this through a multitude of features.
Detailed information with dos and don’ts is available at The Conversational UI and Why it Matters.
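As a sketch of what "speech, text and visuals" looks like in practice, the snippet below builds a response that pairs a spoken answer with a visual card. The field names follow the Actions on Google conversation webhook format (a RichResponse holding a SimpleResponse and a BasicCard); `buildRichResponse` itself is a hypothetical helper for illustration:

```javascript
// Build a webhook response combining speech with a visual card.
// Screenless surfaces (e.g. Google Home) use only the simpleResponse;
// surfaces with a display can also render the basicCard.
function buildRichResponse(speech, text, cardTitle, cardText) {
  return {
    richResponse: {
      items: [
        { simpleResponse: { textToSpeech: speech, displayText: text } },
        { basicCard: { title: cardTitle, formattedText: cardText } },
      ],
    },
  };
}

const response = buildRichResponse(
  'Here is your order summary',
  'Order summary',
  'Porcini mushroom pizza',
  'One large pizza, on its way.'
);
```

Designing the spoken and displayed variants separately is what lets a single Action feel natural on both a speaker and a phone screen.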
The Assistant SDK allows Actions to request permissions in the middle of a conversation, if the permission wasn't granted before. This is much cleaner than asking up front, as it gives the user context for the request: "Assistant is asking me for my location because I just asked it to deliver me a pizza".
The best part about this type of permission request is that developers can provide the reason for it as part of the request (which cannot happen on Android, for instance):
const permission = app.SupportedPermissions.DEVICE_PRECISE_LOCATION;
app.askForPermission('To deliver your porcini mushroom pizza', permission);
This will result in Assistant asking for something like:
To deliver your porcini mushroom pizza, I'll need your exact location. Is that ok?
You can read more about this in the Helpers for Actions guide.
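Under the hood, a call like askForPermission asks the Assistant to resolve the built-in actions.intent.PERMISSION intent, with the developer's reason carried in an optContext field. The sketch below hand-builds that payload; the field names follow the Actions on Google v2 conversation API, but `buildPermissionRequest` is a hypothetical helper, not part of the SDK:

```javascript
// Build the expected-input payload for a mid-conversation permission
// request: the built-in PERMISSION intent plus the reason (optContext)
// the Assistant reads out before asking for consent.
function buildPermissionRequest(optContext, permissions) {
  return {
    possibleIntents: [
      {
        intent: 'actions.intent.PERMISSION',
        inputValueData: {
          '@type': 'type.googleapis.com/google.actions.v2.PermissionValueSpec',
          optContext,
          permissions,
        },
      },
    ],
  };
}

const request = buildPermissionRequest(
  'To deliver your porcini mushroom pizza',
  ['DEVICE_PRECISE_LOCATION']
);
```

The optContext string is exactly what becomes the "To deliver your porcini mushroom pizza, ..." prefix in the Assistant's spoken prompt.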
Last but not least, Google Assistant also supports user identity. By simply building your own OAuth 2 identity server (or configuring one of the many open source alternatives available), you can allow users to link their accounts to your service.
The obvious reason for this is to authorise payments, either through the Google platform or through external providers such as Stripe or Braintree.
You can read this detailed step-by-step guide on how to add money transactions to your Actions.
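At the heart of account linking sits a standard OAuth 2 token endpoint: Google redeems the authorization code your server issued and stores the resulting tokens against the user. The sketch below shows only the response shape, which follows RFC 6749; `exchangeAuthorizationCode` and the in-memory code store are stand-ins for your real identity server:

```javascript
// Minimal sketch of an OAuth 2 token exchange for account linking.
// A real server would verify client credentials and persist tokens.
const issuedCodes = new Map([['auth-code-123', 'user-42']]);

function exchangeAuthorizationCode(code) {
  const userId = issuedCodes.get(code);
  if (!userId) {
    // RFC 6749 defines this error payload for an invalid grant.
    return { error: 'invalid_grant' };
  }
  return {
    access_token: `access-${userId}`,   // opaque token tied to the user
    token_type: 'Bearer',
    expires_in: 3600,                   // lifetime in seconds
    refresh_token: `refresh-${userId}`, // lets Google renew silently
  };
}
```

Once linking succeeds, every webhook request from the Assistant carries the access token, so your Action can charge the right Stripe or Braintree customer.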
All Actions available for Google Assistant can be found in the Google Assistant Discovery, where they are listed according to each user's inferred preferences and can be linked to from anywhere, since the Assistant platform is available on all kinds of devices.
There's no better way of understanding the Google Assistant platform than getting hands-on and developing a very simple Action.
You can watch the talk "Developing Conversational Assistant Apps Using Actions on Google" to get an in-depth understanding of the developer features we discussed in this blog post.
If you are more interested in the upcoming user-based features for Google Assistant, then you should watch the GDD Day 2 Keynote.
This YouTube playlist contains all the videos from Google Developer Days 2017, if you're interested in more Google-related products.
Finally, if you're interested in learning more about designing Conversational Interfaces, you can check out Alex's blog post, "Building natural dialogues for your voice assistant".