Google is rolling out the TensorFlow Object Detection API, a new framework that lets developers and researchers identify and recognize objects within images. Google has emphasized efficiency and simplicity in the API, making it easy for users to pick up.
The models included in the detection API range from heavy-duty Inception-based convolutional neural networks to streamlined models designed to run on less sophisticated machines, including a MobileNets single shot detector optimized to run in real time on a smartphone. These MobileNets models can handle tasks like object detection, facial recognition, and landmark recognition.
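Single shot detectors like the MobileNets model produce many overlapping candidate boxes per object, which are then pruned with non-maximum suppression. As a rough illustration of that post-processing step (a minimal pure-Python sketch, not code from the API itself; the box format and threshold are assumptions), it works like this:

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, scores, iou_threshold=0.5):
    # Visit detections from most to least confident, keep a box only
    # if it does not overlap an already-kept box too heavily.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_threshold for j in keep):
            keep.append(i)
    return keep

# Two near-duplicate detections of one object, plus a distinct one:
kept = non_max_suppression(
    [(0, 0, 10, 10), (1, 1, 10, 10), (50, 50, 60, 60)],
    [0.9, 0.8, 0.7],
)
# The lower-scoring duplicate is suppressed; indices 0 and 2 survive.
```

This greedy pruning is what lets a real-time detector report one clean box per object instead of a cloud of overlapping candidates.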
Google, Facebook, and Apple have all been pouring resources into these mobile models. Earlier, Facebook announced its Caffe2Go framework for building models that can run on smartphones; the first big implementation of this was Facebook's Style Transfer. At WWDC, Apple pushed out Core ML, its attempt to make it easier to run machine learning models on iOS devices.
But even with this much output from its rivals, Google's public cloud offerings give it a unique position relative to both Facebook and Apple, and the company is no newcomer to delivering computer vision services at scale, as its Cloud Vision API shows.