Build TensorFlow Lite model with Firebase AutoML Vision Edge

Train first image classification model with Firebase ML Kit

For more than a year now, Firebase – a backend platform for mobile and web development – has had the ML Kit SDK in its portfolio. Thanks to this feature, it is much easier to implement machine learning solutions in mobile apps, regardless of the ML skills we have. With APIs like Text Recognition or Image Labeling, we can add these functionalities to our app with a couple of lines of code.
ML Kit also provides a simple way of plugging in custom machine learning solutions – we provide a TensorFlow Lite model, and Firebase is responsible for deploying it into our app: multiplatform (Android and iOS), offline or online (the model can be bundled with the app or downloaded on demand at runtime), with simplified code for implementing an interpreter.

Continue reading

Testing TensorFlow Lite image classification model

Make sure that your ML model works correctly on mobile app (part 1)

Looking for how to automatically test TensorFlow Lite model on a mobile device? Check the 2nd part of this article.

Building TensorFlow Lite models and deploying them in mobile applications is getting simpler over time. But even with easier-to-implement libraries and APIs, there are still at least three major steps to accomplish:

  1. Build a TensorFlow model,
  2. Convert it to a TensorFlow Lite model,
  3. Implement it in the mobile app.
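The second of those steps can be sketched in a few lines of Python. Below is a minimal example, assuming a small placeholder Keras model stands in for your trained classifier:

```python
import tensorflow as tf

# A tiny stand-in model; in practice this is your trained classifier.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Step 2: convert the TensorFlow model to a TensorFlow Lite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Save the .tflite file that will later be shipped with the mobile app.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```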

There is a set of information that needs to be passed between those steps – model input/output shape, value formats, etc. Even if you know them (e.g. thanks to the visualization techniques and tools described in this blog post), there is another problem many software engineers struggle with.

Why does a model implemented in a mobile app work differently than its counterpart in a Python environment?

Software engineer

In this post, we will try to visualize the differences between TensorFlow, TensorFlow Lite, and quantized TensorFlow Lite (with post-training quantization) models. This should help us with early model debugging when something goes really wrong.
Here, we will focus only on the TensorFlow side. It's worth remembering that this doesn't cover the correctness of the mobile app implementation (e.g. bitmap preprocessing and data transformation). That will be described in one of the future posts.
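The comparison can be sketched entirely in Python: run the same input through the TensorFlow model, the converted TensorFlow Lite model, and a dynamic-range-quantized variant, then look at the output differences. A minimal sketch, using a small placeholder model instead of a real classifier:

```python
import numpy as np
import tensorflow as tf

# Small stand-in model; in practice this is your trained network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

x = np.random.rand(1, 8).astype(np.float32)
tf_out = model(x).numpy()  # reference: plain TensorFlow output

# Plain TensorFlow Lite conversion.
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Post-training (dynamic range) quantization.
quant_converter = tf.lite.TFLiteConverter.from_keras_model(model)
quant_converter.optimizations = [tf.lite.Optimize.DEFAULT]
quant_bytes = quant_converter.convert()

def run_tflite(model_bytes, x):
    """Run one input through a TFLite model and return its output."""
    interpreter = tf.lite.Interpreter(model_content=model_bytes)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])

lite_out = run_tflite(tflite_bytes, x)
quant_out = run_tflite(quant_bytes, x)

# The float TFLite model should match TF closely; quantization adds some error.
print("TF vs TFLite:    ", np.max(np.abs(tf_out - lite_out)))
print("TF vs quantized: ", np.max(np.abs(tf_out - quant_out)))
```

Plotting these per-class differences for a batch of real inputs gives a quick visual sanity check before any mobile code is involved.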

Continue reading

Traffic signs classification with retrained MobileNet model

TensorFlow Lite classification model for GTSRB dataset

This post is a part of a series about building Machine Learning solutions in mobile apps. In the previous article, we started by building a simple MNIST classification model on top of TensorFlow Lite. That post is also a good place to start if you are looking for some hints on how to set up your very first environment (local with Docker or remote with Colaboratory).

Let’s continue with the basics. If you have spent some time exploring the Internet for Machine Learning <-> mobile solutions, you have surely found the “TensorFlow for Poets” code labs. If not, those are the places where you should start your journey toward building more complex vision intelligence for apps.

Those code labs are focused on building a very first working solution that can be launched directly on your mobile device. Here, we’ll build something very similar, with some additional explanation that can help you understand TensorFlow Lite a little bit better.

MobileNet

So what are the code labs and this article about? They all show how to build a convolutional neural network that is optimized for mobile devices, with little effort required to define the structure of the Machine Learning model. Instead of building it from scratch, we’ll use a technique called Transfer Learning and retrain MobileNet for our needs.
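In Keras, the Transfer Learning idea can be sketched in a few lines: load MobileNet without its ImageNet classification head, freeze it as a feature extractor, and train only a new head for your own classes (here, 43 classes as in the GTSRB traffic sign dataset):

```python
import tensorflow as tf

# Load MobileNet without its classification head, as a frozen feature extractor.
# (weights="imagenet" downloads the pretrained weights on first use.)
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3),
    include_top=False,
    weights="imagenet",
    pooling="avg",
)
base.trainable = False  # keep the pretrained features fixed

# Attach a new head for our own classes (43 GTSRB traffic sign classes).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(43, activation="softmax"),
])
model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_images, train_labels, epochs=...)  # trains only the new head
```

Because only the small final layer is trained, retraining takes minutes rather than days, even without a GPU.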

MobileNet itself is a lightweight neural network used for vision applications on mobile devices. For more technical details and a great visual explanation, please take a look at Matthijs Hollemans’s blog post: Google’s MobileNets on the iPhone (it says “iPhone” 😱, but the first part of the post is fully dedicated to the MobileNet architecture). And if you want even more technical detail, the paper titled MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications will be your friend.

Continue reading