TensorFlow to CoreML conversion and model inspection

Converting TF models to CoreML, an iOS-friendly format

Core ML

While TensorFlow Lite seems to be a natural choice for Android software engineers, on iOS the answer isn’t necessarily the same. In 2017, when iOS 11 was released, Apple announced Core ML, a new framework that speeds up AI-related operations.
If you are new to machine learning on mobile, Core ML will simplify things a lot when adding a model to your app (literally a drag-and-drop setup). It also comes with some domain-specific frameworks – Vision (computer vision algorithms for face, rectangle, or text detection, image classification, etc.) and Natural Language.
Core ML and Vision make it possible to run inference with custom machine learning models. And those models may come from machine learning frameworks like TensorFlow.
In this article, we will see how to convert a TensorFlow model to the Core ML format and how to compare the two models side by side.

TensorFlow 2.0 support

At the moment, the Core ML converter doesn’t support TensorFlow 2.0 models. This will change in the future – the converter is committed to supporting the latest stable version of TensorFlow. There is an issue on GitHub worth following.

Prerequisites

To convert a TensorFlow model into Core ML, we’ll need more than a Colaboratory environment. Here’s the list of things that need to be available on the machine you work on:

  • TensorFlow v1.* (at the moment of writing this article, TensorFlow 2.0 is not yet supported by the Core ML converter)
  • Python 3.6 with some additional libraries: NumPy, Jupyter Notebook, Pillow, requests – you will find them all in the Jupyter Notebook with the model inspection code.
    (Python 3.7 is not yet supported by coremltools – there is an issue to track.)
  • TF-CoreML – the TensorFlow to Core ML converter, and its dependency, coremltools – for conversion and validation.
  • macOS – not only for iOS app development but also to be able to launch coremltools.

When all requirements are met, we’ll:

  1. Convert MobileNet v2 from the TensorFlow to the Core ML format,
  2. Inspect the Core ML model (and compare it with the TF one).

After that, our model will be ready to be implemented in an iOS app.

TensorFlow to CoreML conversion

This section can be done in a Colaboratory environment.
Let’s create a new notebook, and start with installing tfcoreml and importing the appropriate libraries:

Now let’s download and unpack the MobileNet v2 model and its labels:
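The download cell isn’t shown here; a sketch using the publicly hosted MobileNet v2 checkpoint package (the URL and file names are assumptions and may need adjusting) could be:

```python
import os
import tarfile
import urllib.request

# Assumed location of the MobileNet v2 1.0/224 package (frozen graph,
# checkpoints and a *_info.txt description file).
MODEL_URL = ("https://storage.googleapis.com/mobilenet_v2/"
             "checkpoints/mobilenet_v2_1.0_224.tgz")
ARCHIVE = "mobilenet_v2_1.0_224.tgz"
MODEL_DIR = "mobilenet_v2"

# Download the archive once, then extract it next to the notebook.
if not os.path.exists(ARCHIVE):
    urllib.request.urlretrieve(MODEL_URL, ARCHIVE)

os.makedirs(MODEL_DIR, exist_ok=True)
with tarfile.open(ARCHIVE) as tar:
    tar.extractall(MODEL_DIR)

print(sorted(os.listdir(MODEL_DIR)))
```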

Now we need to learn more about the model that we are going to convert to Core ML. Fortunately, the package with the MobileNet v2 network contains a txt file with this information.
Alternatively, you can use the Netron application to preview the *.pb file.
Read more about visualizing models.

And now the final step – converting the frozen TensorFlow graph into Core ML with the tf-coreml tool.

What is interesting about the Core ML model is that it has the labels built in and does some preprocessing on the data before inference is run.
Here are some of the more interesting tfcoreml params:

  • image_input_names – if it’s set, those inputs will be treated as images by Core ML,
  • image_scale – when image_input_names is set, we can also define a scale that will be applied to the input data. To convert RGB channel values from the range [0, 255] to [0, 1], we use IMAGE_INPUT_SCALE = 1.0/255.0,
  • red_bias (and green_, blue_, gray_) – not used here, but we can also add a bias to the scaled image. So, e.g., if a model requires data in the range [-1, 1], we would use IMAGE_INPUT_SCALE = 2.0/255.0 and all biases set to -1,
  • class_labels – a file containing the model’s labels,
  • output_feature_names – the name of the output layer.
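As a quick sanity check of the scale-and-bias arithmetic above:

```python
# With IMAGE_INPUT_SCALE = 2.0/255.0 and all biases set to -1, the channel
# extremes map onto the [-1, 1] range such a model expects:
scale = 2.0 / 255.0
bias = -1.0

assert 0 * scale + bias == -1.0               # darkest value  -> -1
assert abs(255 * scale + bias - 1.0) < 1e-9   # brightest value -> +1
```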

For more, take a look at the tfcoreml repository and the example Jupyter Notebooks that show some of its use cases.

When the conversion is done, you should see a new file: mobilenet_v2_1.0_224.mlmodel.
In the console output, you should see some information about the newly created model, similar to this:

Similar information can be seen in Netron:

Preview of CoreML *.mlmodel file representing MobileNet v2 model.

As you may notice, the input and output names were changed a bit: from input:0 to input__0 and from MobilenetV2/Predictions/Reshape_1:0 to MobilenetV2__Predictions__Reshape_1__0. This is because the names in the generated file need to be valid Swift or Objective-C identifiers.

The Colab notebook with the TensorFlow to Core ML conversion is available here:

Model inspection

Before we put the Core ML model into a mobile app, it’s good to check that it works the same way as the original TensorFlow model. To do this, we’ll create a notebook that this time needs to be run directly on macOS (this is a coremltools requirement).
Assuming that you have all the needed Python libs on your machine, to create a new notebook, first run the Jupyter Notebook environment in the Terminal:

$ jupyter notebook

And then in your web browser open the link that was printed as a result of this operation (e.g. http://localhost:8888/?token=1234abcd).

Now we’re going to create a Python 3 notebook where we will run inference on both models – Core ML and TensorFlow – and compare the results.
First, start with all required imports:

Now we’ll load two images from Wikipedia:
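The download cell isn’t reproduced here; a small helper might look like the sketch below (the concrete Wikipedia URLs are left out – any publicly hosted dog and laptop photos will do):

```python
from io import BytesIO

import requests
from PIL import Image

def load_image(url, size=(224, 224)):
    """Download an image over HTTP and resize it to the model's input size."""
    response = requests.get(url)
    response.raise_for_status()
    return Image.open(BytesIO(response.content)).convert("RGB").resize(size)

# Usage (URLs omitted – substitute real image links):
# img_dog = load_image("<Wikipedia URL of a Golden Retriever photo>")
# img_laptop = load_image("<Wikipedia URL of a laptop photo>")
```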

Preview:

We will try to classify two randomly selected images from the internet.

Now let’s load our CoreML model:

As you can see, the specification shows what we saw during the export operation. Now let’s do some data preprocessing:
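Since Core ML accepts a PIL image directly, a sketch of the preprocessing can be as small as:

```python
from PIL import Image

def to_coreml_input(pil_image, size=(224, 224)):
    """Core ML takes a PIL image as-is; only resizing to 224x224 is needed."""
    return pil_image.resize(size)
```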

And there is nothing more we need to do here. The data transformation (from the range [0, 255] to [0, 1]) is done automatically by the Core ML model.

Let’s run the prediction:
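The prediction cell isn’t reproduced in this excerpt; a sketch using the renamed input/output features shown earlier (`useCPUOnly` forces CPU execution) could look like:

```python
def predict_coreml(model, pil_image, cpu_only=True):
    """Run Core ML inference; prediction works only on macOS (coremltools)."""
    out = model.predict({"input__0": pil_image}, useCPUOnly=cpu_only)
    # Because class_labels was set at conversion time, the output contains the
    # top label plus a label -> probability dictionary.
    return out["classLabel"], out["MobilenetV2__Predictions__Reshape_1__0"]

# top_label, probs = predict_coreml(coreml_model, img_dog, cpu_only=True)
# print(top_label)
```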

And the output is:

As you can see, we tried different values for the useCPUOnly flag. For the Golden Retriever we forced Core ML to use the CPU, while for the laptop image we let the operating system determine what hardware should be used. While there was no big difference between those two, we’ll see something interesting when comparing the results with TensorFlow.

Now let’s do a similar prediction with the TensorFlow model.
First, load labels:
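The label-loading cell isn’t shown here; a sketch (the labels file name inside the package is an assumption) could be:

```python
def load_labels(path):
    """Read one label per line; the line index corresponds to the class id."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

# labels = load_labels("labels.txt")  # assumed file name
```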

And do some data preprocessing:

For the TensorFlow inference process, we need to do a bit more preparation. The input data needs to be not only of the proper size (224x224x3), but also in the proper format (float values in the range [0, 1]), and the input tensor needs to have a 4-dimensional shape.
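The requirements above can be sketched as:

```python
import numpy as np

def preprocess_tf(pil_image):
    """Resize to 224x224, scale RGB to [0, 1] floats, add a batch dimension."""
    img = pil_image.resize((224, 224))
    arr = np.asarray(img, dtype=np.float32) / 255.0   # [0, 255] -> [0, 1]
    return np.expand_dims(arr, axis=0)                # shape (1, 224, 224, 3)
```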

Then, let’s load TensorFlow model:
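The loading cell isn’t reproduced here; with the TF 1.x API a frozen graph can be imported like this (the *.pb path is an assumption):

```python
import tensorflow as tf  # 1.x API

def load_frozen_graph(pb_path):
    """Parse a frozen GraphDef and import it into a fresh graph."""
    with tf.gfile.GFile(pb_path, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        tf.import_graph_def(graph_def, name="")
    return graph

# graph = load_frozen_graph("mobilenet_v2/mobilenet_v2_1.0_224_frozen.pb")
```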

And finally, prediction:
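A sketch of the inference call – the tensor names (input:0, MobilenetV2/Predictions/Reshape_1:0) are those reported by the model description earlier:

```python
import tensorflow as tf  # 1.x API

def predict_tf(graph, input_batch):
    """Run the frozen MobileNet v2 graph on a (1, 224, 224, 3) float batch."""
    with tf.Session(graph=graph) as sess:
        probs = sess.run("MobilenetV2/Predictions/Reshape_1:0",
                         feed_dict={"input:0": input_batch})
    return probs[0]

# Hypothetical usage, combining the helpers sketched above:
# probs = predict_tf(graph, preprocess_tf(img_dog))
# top3 = probs.argsort()[-3:][::-1]
# print([(labels[i], float(probs[i])) for i in top3])
```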

Here are outputs:

The results are pretty similar when Core ML runs on the CPU (the TF version we used here runs inference on the CPU too), but a bit more different when the GPU was used.
Where does this difference come from? It isn’t necessarily the hardware itself. High-level functions are implemented in somewhat different ways for the two architectures – CPU and GPU. Usually, those differences are very subtle, like here. But if you find results that are very far from each other, it’s pretty likely a bug.

At this point the outcome is satisfying, and we can assume that the Core ML model works correctly. The last thing to do is to implement it in an iOS application – and this will be a part of another article in the future.

Source code, references

Source code for this blog post is available on GitHub (a Colab notebook for the conversion process and a Jupyter Notebook for the TensorFlow and Core ML comparison): https://github.com/frogermcs/TF-to-CoreML/

The notebook with the TF to Core ML conversion can be run by clicking the button below:


A preview of the Jupyter Notebook with the TF and Core ML comparison can be seen here (this notebook can only be run on macOS): https://github.com/frogermcs/TF-to-CoreML/blob/master/notebooks/Core_ML_inspection.ipynb

There is also a great blog post by Matthijs Hollemans about converting a MobileNet SSD model to Core ML: MobileNetV2 + SSDLite with Core ML.


Thanks for reading! 🙂
Please share your feedback below. 👇
