Google Debuts Actions SDK for Google Assistant

Google this week made more tools available to developers for customizing app interactions with Google Assistant. With the release of Actions Builder and the Actions SDK, Google says developers can build their own conversational Actions for Assistant faster than ever. Here’s what you need to know.

Google says there are more than 500 million active users of Google Assistant across some 90 countries. Google allows developers to tie their own voice-enabled apps and services to Assistant in ways that benefit users, which is why it continues to improve the way conversational Actions are built.

Actions Builder, the first of the new tools, is a web-based IDE that allows developers to develop, test, and deploy Actions straight from the Actions console. Google says the refreshed graphical interface lets developers visualize conversation flows, manage natural language understanding training, and debug their Actions.

The Actions SDK takes the web-based tools of Actions Builder and brings them to local development. It offers a file-based representation of Actions projects, which lets developers create natural language understanding training data, manage conversation flows, and import training data in bulk. Google says it updated the accompanying gactions CLI so developers can build Actions entirely with code.
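To give a sense of that code-first workflow, here is a minimal fulfillment sketch using the @assistant/conversation Node.js library that accompanies the Actions SDK. The handler name and prompt text are illustrative, and deployment on Cloud Functions for Firebase is an assumption, not something the announcement prescribes:

```ts
// Minimal conversational Action fulfillment sketch.
// Assumes @assistant/conversation and firebase-functions are installed.
import { conversation } from '@assistant/conversation';
import * as functions from 'firebase-functions';

const app = conversation();

// Handlers are matched by name against the scene definitions kept in
// the file-based Actions project ("greeting" is a hypothetical handler).
app.handle('greeting', (conv) => {
  conv.add('Welcome! What would you like to do?');
});

// Expose the app as an HTTPS webhook endpoint.
export const ActionsOnGoogleFulfillment = functions.https.onRequest(app);
```

A project structured this way can live entirely in version control and be synced to the Actions console with the gactions CLI.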

Together, these two tools should cover the bases developers need when creating Actions for Google Assistant. Moreover, Google says developers can switch from one environment to the other to suit changing workflow needs. Codelabs, samples, and documentation are available to assist developers.

Google has more in store for developers and Assistant. It also added functionality to Home Storage and updated the Media API and Continuous Match Mode. 

Home Storage is a brand new feature that provides communal storage for Assistant devices connected to the home graph. Developers can save content shared by every user of a device, allowing for things such as saving the last played point in a game for the whole household.
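As a rough sketch of how that could look in fulfillment code, home storage is exposed through conv.home.params in the @assistant/conversation library; the handler names and the lastLevel key below are hypothetical:

```ts
// Sketch of reading and writing home storage in a webhook handler.
import { conversation } from '@assistant/conversation';

const app = conversation();

app.handle('resume_game', (conv) => {
  // conv.home.params is shared by every user on devices in the home graph,
  // so any member of the household sees the same saved value.
  const lastLevel = conv.home.params.lastLevel ?? 1;
  conv.add(`Welcome back! Resuming the household game at level ${lastLevel}.`);
});

app.handle('save_progress', (conv) => {
  // Writing a value here persists it for the whole household,
  // not just the current speaker.
  conv.home.params.lastLevel = 7;
  conv.add('Progress saved for everyone in the home.');
});
```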

The updated Media API now supports longer-form media sessions, which means users can resume playback of supported content across devices. For example, people can pick up a song or video where they left off, or start playback from a specific point.
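A hedged sketch of what resuming from an offset might look like with the library's Media helper; the handler name, URL, and offset value are placeholders rather than details from the announcement:

```ts
// Sketch of a longer-form media response that starts playback
// at a saved offset instead of from the beginning.
import { conversation, Media } from '@assistant/conversation';

const app = conversation();

app.handle('play_episode', (conv) => {
  conv.add(new Media({
    mediaType: 'AUDIO',
    startOffset: '120s', // resume playback two minutes in (placeholder)
    mediaObjects: [{
      name: 'Episode 12',
      description: 'Placeholder long-form audio track',
      url: 'https://example.com/episode-12.mp3',
    }],
  }));
});
```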

Last, Continuous Match Mode lets Assistant respond immediately to users’ speech. This is meant to facilitate more fluid experiences, as Assistant can recognize words and phrases defined in advance by developers. To support this, the device’s microphone stays open temporarily so users can speak when they are ready, without sitting through additional prompts from Assistant or the app.

Author: EricZeman