TensorFlow Lite classification on Android (with support for TF2.0)

Adding the first Machine Learning model into your mobile app

*** Edit, 23.04.2019 ***

TensorFlow 2.0 experimental support
In the repository, you can find a Jupyter Notebook with the code running on TensorFlow 2.0 alpha, with support for a GPU environment (up to 3 times faster training). As this is not yet a stable version, the code may break at any moment. The notebook was created for the Colaboratory environment only, and requires some changes to work in the Docker environment described in this blog post.
The notebook is available here.

— — —

This is my new series about using Machine Learning solutions in mobile apps. Unlike the majority of articles, it won't say much about building layers, training processes, fine-tuning, playing with Google TPUs, or data science in general.
Instead, we'll focus on understanding how to plug models into apps; how to use, debug, and optimize them; and how to cooperate effectively with data scientists and AI engineers.

MNIST

You have surely seen countless examples of how to implement an MNIST classifier. Still, for the sake of the series' completeness, I decided to implement it one more time. It may not be very challenging from the ML perspective, but it's still a good example of how to work with TensorFlow Lite models in a mobile app.

In this blog post, we'll create a simple Machine Learning model that recognizes a handwritten digit in an image. The model will be converted to TensorFlow Lite and plugged into an Android application, step by step.

The environment

If your primary area of focus is mobile engineering, it's pretty likely you don't have a Python environment with all the libraries required to start working with TensorFlow. And even if you aren't going to build machine learning models in the future, there are still some useful tools you'll need that require at least a basic set of libraries: TensorFlow, NumPy (efficient work with arrays and matrices), Matplotlib (plotting data previews), and others (PyYAML, h5py).

Moreover, some solutions (e.g., the SSD model for real-time object detection) still require compiling TensorFlow from source. There is no other option: you need all the required libs available on your workstation.

If you'd rather not start by installing all the required tools on your operating system, there are at least two options for a quick start.

Docker

If you are familiar with containerization, there are official Docker images with TensorFlow available to pull. And even if you have never used Docker before, installing and configuring the environment shouldn't take more than 20 minutes.

See more: https://www.tensorflow.org/install/docker

To not copy/paste Docker and TensorFlow documentations, here are steps required just for launching our project:

  1. Clone this post's repository and, via the terminal, navigate to its main directory.
  2. Run $ docker run -p 8888:8888 -v $(pwd):/notebooks -it tensorflow/tensorflow:latest-py3, which uses the latest version of TensorFlow with Python 3.
  3. Open http://127.0.0.1:8888/?token={token} in your browser (copy and paste the URL with the token from the terminal).
Docker for TensorFlow

If everything is fine, navigate in your web browser to notebooks/MNIST.ipynb. You should see a Jupyter notebook with all the steps required to build your first machine learning model and convert it to TensorFlow Lite format.

Do you want to check if everything works correctly on your computer? Click on Cell->Run All.

Colab

Another option for a machine learning environment is Google's Colab: a free, online environment for Jupyter notebooks. While Docker runs scripts on your hardware, Colab notebooks are stored on your Google Drive and run in the cloud (on CPUs, and on GPUs if you're fortunate enough). Read more about Colab.

See more: https://colab.research.google.com

To run the MNIST notebook from this post, go to the project repository and click the "Open in Colab" button.

ML model preparation

For the first Machine Learning model, we’ll use the example from TensorFlow’s Getting Started.

Getting started with TensorFlow — https://www.tensorflow.org/tutorials/

Our notebook extends this example with more detailed descriptions and additional steps that convert the model into TensorFlow Lite format.

Some high-level steps:

  1. Import and prepare the data, build and train the model,
  2. Evaluate the model on test data,
  3. Save the model,
  4. Convert the saved model to TensorFlow Lite format,
  5. Evaluate the TensorFlow Lite model.

In the notebook, I also marked the most interesting pieces of information that can be helpful when plugging the model into a mobile app's source code. Just look for (⚠️📲👀).

When you run all notebook cells sequentially, you should end up with mnist_model.tflite. This file should be put into the assets/ directory of our Android app.

For more details, check our MNIST notebook.

TensorFlow Lite model in Android app

Now we'll plug the TensorFlow Lite model into an Android app, which:

  1. Takes a photo,
  2. Preprocesses the bitmap to meet the model's input requirements,
  3. Classifies the bitmap with a label from 0 to 9.

The source code of the project is available on GitHub. For the camera feature, we'll use the CameraKit library to keep it as simple as possible. Unfortunately, if you are a Pixel user, you can encounter an issue where a click on the "Take photo" button results in no action (more about the issue here). If it happens to you, relaunch the app until it starts working…

See more: https://github.com/frogermcs/MNIST-TFLite

Image preprocessing

The MNIST dataset consists of thousands of images of handwritten digits. Each of them is 28×28 pixels, black and white, with inverted colors (a white digit on a black background). This means that all the images we feed into our model need to be in the same format.

And this is what the ImageUtils class was built for:
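The project's exact ImageUtils code isn't embedded here, but the core transformation it performs can be sketched in plain Kotlin. The function name and the IntArray-based signature are illustrative; on Android the pixels would come from Bitmap.getPixels() after scaling the bitmap to 28×28:

```kotlin
// Sketch of MNIST-style preprocessing: ARGB pixels -> inverted grayscale floats in 0..1.
// Plain Kotlin is used here so the logic can be shown without Android's Bitmap class.
fun toMnistInput(pixels: IntArray): FloatArray = FloatArray(pixels.size) { i ->
    val p = pixels[i]
    val r = (p shr 16) and 0xFF
    val g = (p shr 8) and 0xFF
    val b = p and 0xFF
    val gray = (r + g + b) / 3.0f    // simple average grayscale
    (255.0f - gray) / 255.0f         // invert: white digit on black background
}
```

A white pixel (paper) maps to 0.0 and a black pixel (ink) maps to 1.0, matching the inverted-colors convention of the MNIST dataset.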

Classification

Now that we have a bitmap in the desired shape, we can do the classification. First, load the model. In our example, we put mnist_model.tflite (the result of the MNIST.ipynb notebook) into the assets/ directory. Additionally, in app/build.gradle you need to add:
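The exact snippet from the repository isn't reproduced here, but the standard way to tell AAPT not to compress the model is an entry like:

```groovy
android {
    aaptOptions {
        noCompress "tflite"
    }
}
```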

Thanks to this, AAPT won't compress TFLite models. This is important because TensorFlow Lite models are memory-mapped, and memory mapping doesn't work on a compressed file.

Next, let’s load our model:
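The project's loading code isn't embedded here; memory-mapping a .tflite file typically looks like the sketch below. On Android you would open the file via assets.openFd("mnist_model.tflite"); a plain file path is used here so the mapping itself can be shown without Android classes:

```kotlin
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// Memory-map the model file so TensorFlow Lite can read it without copying it
// into the app's heap. The mapping stays valid after the channel is closed.
fun loadModelFile(path: String): MappedByteBuffer {
    FileInputStream(path).channel.use { channel ->
        return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size())
    }
}

// The mapped buffer is then passed to the TensorFlow Lite runtime, e.g.:
// val interpreter = Interpreter(loadModelFile(...))  // org.tensorflow.lite.Interpreter
```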

When we have an instance of Interpreter, we need to convert the preprocessed bitmap into a ByteBuffer (the input of the TensorFlow Lite model), and then decipher the classification results:

Here’s how we convert a bitmap into ByteBuffer:
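The embedded code isn't reproduced here; a minimal sketch of such a conversion might look like this. It assumes the bitmap was already scaled, grayscaled, and inverted by ImageUtils, and that `pixels` holds the ARGB ints returned by Bitmap.getPixels():

```kotlin
import java.nio.ByteBuffer
import java.nio.ByteOrder

// Sketch: pack 28x28 grayscale pixels into the float buffer the model expects
// (28 * 28 pixels * 4 bytes per float = 3136 bytes).
fun convertToByteBuffer(pixels: IntArray): ByteBuffer {
    val buffer = ByteBuffer.allocateDirect(4 * pixels.size)
        .order(ByteOrder.nativeOrder())
    for (p in pixels) {
        val value = p and 0xFF            // grayscale, so any channel works
        buffer.putFloat(value / 255.0f)   // scale 0..255 down to 0..1
    }
    return buffer
}
```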

As can be seen in MNIST.ipynb, the model's input size is 3,136 bytes: 28×28 (image size) × 4 (float size in bytes). We build the byte buffer pixel by pixel, converting each value from the 0–255 range to 0–1 (the same normalization we did in our notebook).
Averaging the red, green, and blue channels isn't needed here, since the input bitmap is already grayscale.


More information about our MNIST model can be found in MnistModelConfig.

All data presented here comes from MNIST.ipynb.

As the output of the classification, we get up to 3 labels (MAX_CLASSIFICATION_RESULTS) together with their probabilities, as returned by our model.
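Extracting those top results from the model's raw output can be sketched in a few lines. The function name is illustrative; `probabilities` stands for the 10-element FloatArray filled by Interpreter.run(), where the index of each entry is the digit label itself:

```kotlin
// Sketch: pick the top-k digits (k = 3, as in MAX_CLASSIFICATION_RESULTS)
// from the model's output, sorted by descending probability.
fun topResults(probabilities: FloatArray, k: Int = 3): List<Pair<Int, Float>> =
    probabilities.withIndex()
        .sortedByDescending { it.value }
        .take(k)
        .map { it.index to it.value }
```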

And that's it! We've built a mobile app that uses a TensorFlow Lite model. Here's how it works in practice:

The source code of the project can be found on Github: https://github.com/frogermcs/MNIST-TFLite

You can also run the MNIST.ipynb notebook on Google's Colaboratory:

TensorFlow 2.0 alpha support

In the repository, there is also a version of the notebook running on TensorFlow 2.0 alpha. Click below to run it on Google’s Colaboratory.

Thanks for reading! 🙂
Please share your feedback below. 👇
