Train your first image classification model with Firebase ML Kit
For more than a year now, Firebase – a backend platform for mobile and web development – has had the ML Kit SDK in its portfolio. Thanks to this feature, it is much easier to implement machine learning solutions in mobile apps, regardless of our ML skills. With APIs like Text Recognition or Image Labeling, we can add those functionalities to our app with a couple of lines of code. ML Kit also provides a simple way to plug in custom machine learning solutions – we provide a TensorFlow Lite model, and Firebase is responsible for deploying it into our app: multiplatform (Android and iOS), offline or online (the model can be bundled with the app or downloaded on demand at runtime), with simplified code for implementing an interpreter.
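To make the deployment flow concrete, here is a minimal sketch, in Python, of the two model-side steps ML Kit builds on: converting a trained model to TensorFlow Lite (the artifact you upload to Firebase or bundle with the app) and running it through an interpreter, which is what the on-device SDK wraps for you. The tiny dense model here is a stand-in assumption; the article's model would be a real image classifier.

```python
import numpy as np
import tensorflow as tf

# Stand-in model; in practice this would be a trained image classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])

# Convert to TensorFlow Lite -- this .tflite blob is what gets
# uploaded to Firebase or bundled with the app.
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# What the on-device interpreter does, expressed with the Python API.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
output_detail = interpreter.get_output_details()[0]

interpreter.set_tensor(input_detail["index"], np.ones((1, 4), dtype=np.float32))
interpreter.invoke()
probabilities = interpreter.get_tensor(output_detail["index"])
print(probabilities.shape)  # (1, 3)
```

On Android, ML Kit hides the interpreter plumbing behind its own API, so the app code reduces to configuring input/output shapes and calling run.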
Making sure that your ML model works correctly on mobile app (part 2)
This is the second article about testing machine learning models created for mobile. In the previous post – Testing TensorFlow Lite image classification model – we built a notebook that exports a TensorFlow model to TensorFlow Lite and compares the two side by side. But because the conversion process is mostly automatic, there are few places where something can break. We can find differences between quantized and non-quantized models, or ensure that TensorFlow Lite behaves similarly to TensorFlow, but the real issues can come up somewhere else – in the client-side implementation. In this article, I will suggest some solutions for testing a TensorFlow Lite model with Android instrumentation tests.
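The side-by-side comparison from the notebook can be reduced to a couple of simple metrics. Below is a minimal sketch of that idea; the two arrays are stand-ins for real data – in practice they would be predictions from the TensorFlow and TensorFlow Lite models over the same batch of test images.

```python
import numpy as np

def compare_predictions(tf_outputs: np.ndarray, tflite_outputs: np.ndarray):
    """Compare two sets of softmax outputs, one row per test image."""
    # Top-1 agreement: fraction of images where both models pick the same class.
    agreement = np.mean(
        np.argmax(tf_outputs, axis=1) == np.argmax(tflite_outputs, axis=1)
    )
    # Largest element-wise difference between the raw scores.
    max_diff = np.max(np.abs(tf_outputs - tflite_outputs))
    return agreement, max_diff

# Stand-in predictions for three images and two classes.
tf_out = np.array([[0.1, 0.9], [0.8, 0.2], [0.4, 0.6]])
lite_out = np.array([[0.12, 0.88], [0.79, 0.21], [0.55, 0.45]])

agreement, max_diff = compare_predictions(tf_out, lite_out)
print(agreement)  # 2 of 3 predictions agree
print(max_diff)
```

An Android instrumentation test can apply the same idea on-device: run the bundled .tflite model against fixed inputs and assert that the outputs match reference values exported from the notebook.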
Converting TF models to CoreML, an iOS-friendly format
While TensorFlow Lite seems to be a natural choice for Android software engineers, on iOS it doesn't necessarily have to be. In 2017, when iOS 11 was released, Apple announced Core ML, a new framework that speeds up AI-related operations. If you are new to machine learning on mobile, Core ML will simplify things a lot when adding a model to your app (literally a drag-and-drop setup). It also comes with some domain-specific frameworks – Vision (computer vision algorithms for face, rectangle, or text detection, image classification, etc.) and Natural Language. Core ML and Vision make it possible to run inference with a custom machine learning model, and those models may come from machine learning frameworks like TensorFlow. In this article, we will see how to convert a TensorFlow model to the Core ML format and how to compare the models side by side.