Google Releases MobileNets, Mobile-First Computer Vision Models for TensorFlow
Google has announced MobileNets, a family of mobile-first computer vision models for TensorFlow, designed to run on low-power, low-speed platforms such as mobile devices. A phone that can recognize what its camera is seeing could open the door to a range of new use cases across the app ecosystem.
Visual recognition isn’t new, but most existing applications that rely on complex computer vision send data to the cloud for processing. That round trip is inconvenient and raises privacy concerns for many users. Google plans to change that approach with offline, on-device machine learning. Properly implemented, running on mobile hardware should reduce both latency and battery consumption.
“MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings, and segmentation similar to how other popular large-scale models, such as Inception, are used.”
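The small footprint comes largely from the depthwise separable convolutions described in the MobileNets paper (a detail not in the announcement quote itself). A quick back-of-the-envelope calculation, a sketch rather than anything from Google's release, shows the parameter savings over a standard convolution:

```python
# Parameter count of a standard KxK convolution versus a depthwise
# separable one (depthwise KxK followed by pointwise 1x1), the building
# block MobileNets use in place of standard convolutions.
def standard_conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # One KxK filter per input channel, then a 1x1 channel-mixing conv.
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 256, 256)        # 589,824 parameters
sep = depthwise_separable_params(3, 256, 256)  # 67,840 parameters
print(f"{std} vs {sep}: {std / sep:.1f}x fewer parameters")
```

For a typical 3x3 layer with 256 input and output channels, the separable version needs roughly 8.7x fewer parameters, which is where the "small, low-latency" characterization comes from.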
In case you are not familiar with how this is useful: an application implementing such computer vision techniques could process an image directly on the device to determine its content. For instance, it could name the flower in a photo you have just taken in your garden. The announcement of MobileNets is not surprising given Google’s growing focus on bringing machine learning to mobile devices.
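As a rough illustration of that flower-naming scenario, the sketch below builds a MobileNet through the Keras applications wrapper. Note the assumptions: the announcement itself ships TensorFlow checkpoints rather than this API, and `weights=None` is used here so the architecture is built with random weights and no download is needed; a real classifier would pass `weights="imagenet"` and decode the predictions.

```python
import numpy as np
import tensorflow as tf

# Build the MobileNet architecture. weights=None means random weights
# (no network access needed); use weights="imagenet" for real predictions.
model = tf.keras.applications.MobileNet(weights=None)
print(f"{model.count_params():,} parameters")  # roughly 4.2M -- small by design

# Run a dummy 224x224 RGB image through the network, as an app would
# with a photo from the camera.
image = np.random.rand(1, 224, 224, 3).astype("float32")
probs = model.predict(image)
print(probs.shape)  # one probability per ImageNet class
```

With pretrained weights, `tf.keras.applications.mobilenet.decode_predictions(probs)` would map the output vector back to human-readable labels such as flower species.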
With MobileNets, developers have more tools for building AI-powered mobile apps. Running these tasks directly on the device also benefits users substantially: a common concern is data leaving one’s phone, and on-device computer vision keeps it local.
Right now, several big companies are working on bringing machine learning to their apps. Both Apple and Google have dropped hints that they are designing processors to better handle machine learning workloads, and Qualcomm has likewise been optimizing its current and future processors for on-device machine learning.