Google Simplifies ML Kit SDK, Adds APIs

Google has made a standalone version of its ML Kit SDK available to developers, allowing them to build apps that run machine learning directly on devices. The big change decouples ML Kit from Firebase, making it possible to build ML Kit apps without a Firebase project, and adds two new early access APIs.

Google debuted ML Kit at I/O two years ago. Since then, some 25,000 applications on both Android and iOS have come to depend on ML Kit's features. Google believes the changes revealed this week will simplify the process of coding ML Kit apps.

The first release of ML Kit depended heavily on Firebase. Google says many developers asked for more flexibility, which is the primary reason it is decoupling ML Kit from Firebase. On-device APIs in the new ML Kit SDK no longer require a Firebase project, though the two can still be used together if desired.

ML Kit’s APIs are meant to assist developers in the Vision and Natural Language domains: scanning barcodes, recognizing text, tracking and classifying objects in real time, translating text, and the like. The SDK is now fully focused on on-device machine learning, which Google says brings three benefits. It is fast: with no network latency, it can run inference on a stream of images or video multiple times per second. It works offline: all the APIs remain functional regardless of network connectivity. And privacy is still top of mind: because processing happens locally, user data never needs to be sent to a remote server.
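To make that concrete, here is a minimal sketch of an on-device call using the standalone SDK's barcode scanning API. The class names below follow ML Kit's published surface, but treat the specifics as illustrative rather than authoritative.

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.barcode.BarcodeScanning
import com.google.mlkit.vision.common.InputImage

// Everything runs on-device: no Firebase project and no network round trip.
fun scanBarcodes(bitmap: Bitmap) {
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)
    val scanner = BarcodeScanning.getClient()

    scanner.process(image)
        .addOnSuccessListener { barcodes ->
            // Each result carries the raw value plus structured fields.
            for (barcode in barcodes) {
                println(barcode.rawValue)
            }
        }
        .addOnFailureListener { e -> e.printStackTrace() }
}
```

Because the model lives on the device, the same call succeeds with no connectivity at all.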

First step? Google suggests developers migrate from the Firebase on-device APIs to the standalone ML Kit SDK; instructions are available here. Once migrated, developers will find several new capabilities.
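In practice, the migration is largely a matter of swapping Gradle dependencies (and the matching package names in code). Here is a sketch in the Kotlin Gradle DSL; the artifact coordinates reflect the Firebase ML Vision and standalone ML Kit libraries, but the version numbers are illustrative, so check the migration guide for current releases.

```kotlin
// app/build.gradle.kts
dependencies {
    // Before: ML Kit for Firebase (required a Firebase project).
    // implementation("com.google.firebase:firebase-ml-vision:24.0.3")

    // After: standalone ML Kit, no Firebase project required.
    implementation("com.google.mlkit:barcode-scanning:16.0.0")
}
```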

For example, developers can shrink their app footprint via Google Play Services. The Face detection/contour APIs can now be delivered through Play Services rather than compiled into the APK, so apps pick up the functionality without bundling the model themselves.
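In Gradle terms, the choice looks roughly like this. The unbundled artifact name follows the play-services-mlkit-* pattern Google uses for Play Services-delivered models; confirm the exact coordinates (and versions) against the official docs.

```kotlin
// app/build.gradle.kts
dependencies {
    // Bundled: the face model ships inside the APK (bigger download,
    // but available immediately on install).
    // implementation("com.google.mlkit:face-detection:16.0.0")

    // Unbundled: the model is downloaded and managed by Google Play
    // Services, keeping the APK footprint small.
    implementation("com.google.android.gms:play-services-mlkit-face-detection:16.0.0")
}
```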

Google also added Android Jetpack Lifecycle support to all the APIs. Developers can use addObserver to automatically manage ML Kit API teardown as an app goes through events such as screen rotation. This simplifies CameraX integration, which Google says developers should also consider adopting in their ML apps.
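A minimal sketch of that pattern, assuming (as the announcement implies) that ML Kit's client objects implement LifecycleObserver so a single addObserver call handles cleanup:

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import com.google.mlkit.vision.barcode.BarcodeScanning

class ScanActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        val scanner = BarcodeScanning.getClient()
        // The scanner observes this Activity's lifecycle and closes itself
        // when the Activity is destroyed, e.g. on screen rotation, so no
        // manual close() call in onDestroy() is needed.
        lifecycle.addObserver(scanner)
    }
}
```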

Last, two new APIs are part of an early access program. The first is Entity Extraction, which detects entities in text, such as addresses and phone numbers, and makes them actionable. The second is Pose Detection, a low-latency API that tracks 33 skeletal points, including hands and feet. Details are available here.
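Since both APIs were in early access at the time of writing, their surfaces may change. With that caveat, here is a hypothetical sketch of Entity Extraction; the class and method names (EntityExtraction, EntityExtractorOptions, downloadModelIfNeeded, annotate) are assumptions based on ML Kit's naming conventions.

```kotlin
import com.google.mlkit.nl.entityextraction.EntityExtraction
import com.google.mlkit.nl.entityextraction.EntityExtractionParams
import com.google.mlkit.nl.entityextraction.EntityExtractorOptions

fun extractEntities(text: String) {
    val extractor = EntityExtraction.getClient(
        EntityExtractorOptions.Builder(EntityExtractorOptions.ENGLISH).build()
    )

    // The language model is fetched once, then inference runs on-device.
    extractor.downloadModelIfNeeded()
        .onSuccessTask {
            extractor.annotate(EntityExtractionParams.Builder(text).build())
        }
        .addOnSuccessListener { annotations ->
            // Each annotation marks a span (an address, a phone number, ...)
            // that the app can turn into an action such as dialing or mapping.
            for (annotation in annotations) {
                println("${annotation.annotatedText} -> ${annotation.entities}")
            }
        }
}
```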

Google says all the ML Kit resources are available on a refreshed website where samples, support documentation, and community channels are easily accessed.

Author: EricZeman