Make sure that your ML model works correctly in a mobile app (part 1)
Looking for a way to automatically test a TensorFlow Lite model on a mobile device? Check the second part of this article.
Building TensorFlow Lite models and deploying them in mobile applications is getting simpler over time. But even with easier-to-use libraries and APIs, there are still at least three major steps to accomplish:
- Build a TensorFlow model,
- Convert it to a TensorFlow Lite model,
- Implement it in the mobile app.
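The first two steps can be sketched in a few lines. This is a minimal illustration, assuming TensorFlow 2.x; the toy architecture is hypothetical and stands in for your real network:

```python
import tensorflow as tf

# 1. Build a (toy) TensorFlow model -- the architecture here is
#    only illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])

# 2. Convert it to a TensorFlow Lite model (a FlatBuffer).
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

# 3. The resulting file is what gets bundled into the mobile app.
with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)
```

The third step (loading `model.tflite` with the Android/iOS TFLite runtime) happens on the app side and is out of scope here.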
There is a set of information that needs to be passed between those steps: the model's input/output shapes, value formats, etc. Even if you know them (e.g. thanks to the visualization techniques and tools described in this blog post), there is another problem that many software engineers struggle with:
> Why does the model implemented in a mobile app work differently than its counterpart in a Python environment?
> — Software engineer
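Before chasing such discrepancies, it helps to check the I/O metadata mentioned above directly on the converted model. A sketch, assuming TensorFlow 2.x; the tiny model is hypothetical and converted in memory just to keep the example self-contained:

```python
import tensorflow as tf

# A hypothetical image-like model, converted in memory.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Load the converted model with the TFLite interpreter and read
# the input/output shapes and dtypes the mobile app must match.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
print("input  shape:", inp["shape"], "dtype:", inp["dtype"])
print("output shape:", out["shape"], "dtype:", out["dtype"])
```

If the shapes or dtypes printed here do not match what the mobile code feeds in, the outputs will differ no matter how correct the model itself is.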
In this post, we will try to visualize the differences between TensorFlow, TensorFlow Lite, and quantized TensorFlow Lite (post-training quantization) models. This should help with debugging models early, when something goes really wrong.
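The simplest form of such a comparison is numeric: run the same input through the TensorFlow model, the float TFLite model, and the post-training-quantized TFLite model, and look at the differences. A sketch, assuming TensorFlow 2.x and a hypothetical toy network in place of your real one:

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy model; in practice this is your real network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(2),
])

x = np.random.rand(1, 8).astype(np.float32)
tf_out = model(x).numpy()  # reference: plain TensorFlow output

def tflite_predict(model_bytes, x):
    """Run a single input through a converted TFLite model."""
    interpreter = tf.lite.Interpreter(model_content=model_bytes)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    return interpreter.get_tensor(out["index"])

# Plain float TFLite conversion.
float_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Post-training (dynamic-range) quantization.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quant_bytes = converter.convert()

float_diff = np.abs(tf_out - tflite_predict(float_bytes, x)).max()
quant_diff = np.abs(tf_out - tflite_predict(quant_bytes, x)).max()
print(f"TF vs float TFLite:     {float_diff:.6f}")
print(f"TF vs quantized TFLite: {quant_diff:.6f}")
```

The float conversion should match TensorFlow almost exactly, while post-training quantization typically introduces a small but visible error; comparing the two on the same inputs is what the rest of this post builds on.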
Here, we will focus only on the TensorFlow side. It's worth remembering that this doesn't cover the correctness of the mobile app implementation (e.g. bitmap preprocessing and data transformation). That will be described in one of the future posts.