TensorFlow Lite Learning Notes

This article contains my learning notes for the Introduction to TensorFlow Lite course on Udacity.

There is another course on Coursera, Device-based Models with TensorFlow Lite. I spent a bit over ten minutes skimming through it and found the two courses largely the same. The main difference is that the Coursera course walks students through the Android and iOS app code, so it goes into somewhat more detail.

[Device-based Models with TensorFlow Lite course GitHub repo](https://github.com/lmoroney/dlaicourse/tree/master/TensorFlow%20Deployment/Course%202%20-%20TensorFlow%20Lite)

The Introduction to TensorFlow Lite course is divided into the following parts:

Introduction to TensorFlow Lite

TF Lite on Android

TF Lite on iOS with Swift

TF Lite on IoT

That covers almost every kind of device.

In NVIDIA Jetson Nano Learning Notes (5): Real-time Image Classification System (PiCamera + OpenCV + TensorFlow Lite + Firebase), I trained a custom TF Lite model with Google Cloud AutoML Vision and used it for image classification.

In NVIDIA Jetson Nano Learning Notes (6): Real-time Object Detection System (PiCamera + OpenCV + TensorFlow Lite), I used Google's official pre-trained model for object detection.

Having worked through those practical applications, I came back to build a deeper understanding of TensorFlow Lite itself.


You can refer to the table on that page for the detailed differences.

Simply put, a quantized model runs faster than a float model.

The simplest form of post-training quantization quantizes weights from floating point to 8-bits of precision. This technique is enabled as an option in the TensorFlow Lite converter. At inference, weights are converted from 8-bits of precision to floating point and computed using floating-point kernels. This conversion is done once and cached to reduce latency.

To further improve latency, hybrid operators dynamically quantize activations to 8-bits and perform computations with 8-bit weights and activations. This optimization provides latencies close to fully fixed-point inference. However, the outputs are still stored using floating point, so that the speedup with hybrid ops is less than a full fixed-point computation.

If you don't intend to quantize your model, you'll end up with a floating point model. Also, remember that the converter will do its best to quantize all the operations (ops), but your model may still end up with a few floating point ops.

It is important to note that even though post-training quantization works really well, quantization-aware training generally results in a model with higher accuracy because it makes the model more tolerant to lower precision values. Therefore, quantization-aware training should be used in cases where the loss of accuracy brought by post-training quantization is beyond acceptable thresholds.
import pathlib
import tensorflow as tf

# Export the trained Keras model as a SavedModel
export_dir = 'saved_model/1'
tf.saved_model.save(model, export_dir)

# Convert the SavedModel to TF Lite format and write it to disk
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()
tflite_model_file = pathlib.Path('model.tflite')
tflite_model_file.write_bytes(tflite_model)
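The conversion above produces a plain float model. As a minimal sketch of the post-training quantization option described earlier, you can set the optimizations flag on that same converter before calling convert() (the output file name below is just a placeholder):

# Enable post-training (dynamic range) quantization on the converter above
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite_model = converter.convert()

# Placeholder output path for the quantized model
pathlib.Path('model_quantized.tflite').write_bytes(quantized_tflite_model)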
import tensorflow as tf
import tensorflow_hub as hub

# MobileNetV2 feature extractor from TensorFlow Hub
MODULE_HANDLE = "https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4"

# IMAGE_SIZE, FV_SIZE, do_fine_tuning and num_classes are defined earlier in the Colab
feature_extractor = hub.KerasLayer(MODULE_HANDLE,
                                   input_shape=IMAGE_SIZE + (3,),
                                   output_shape=[FV_SIZE],
                                   trainable=do_fine_tuning)

# Attach a classification head for our own classes
model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(num_classes)
])
model.summary()
In this part of the lesson you are going to deploy a TF Lite model in an Android app that classifies images of cats and dogs. To get things up and running you will need to download the Cats vs Dogs app and the TF Lite model we created in the Transfer Learning Colab. Once you have downloaded the app, you can run it in Android Studio. For simplicity, the app already contains the above pre-trained .tflite model and the .txt file containing the class labels in the assets folder, so you can start using the app right away. If you want to use another TF Lite model, download the TF Lite model and labels to your computer and then drag and drop the .tflite file containing your model and the .txt file containing the labels into the assets folder of the app: ~/cats_vs_dogs/app/src/main/assets/. You can then run your app in Android Studio.
In this part of the lesson you are going to deploy a TF Lite model in an Android app that continuously classifies whatever it sees from your device's back camera. To get things up and running you will need to download the Image Classification app and the quantized MobileNet model and labels. Once you have downloaded the app, you can run it in Android Studio. For simplicity, the app already contains the above pre-trained .tflite model and the .txt file containing the class labels in the assets folder, so you can start using the app right away. If you want to use another TF Lite model, download the TF Lite model and labels to your computer and then drag and drop the .tflite file containing your model and the .txt file containing the labels into the assets folder of the app: ~/image_classification/app/src/main/assets/. You can then run your app in Android Studio.
In this part of the lesson you are going to deploy a TF Lite model in an Android app that continuously detects objects (bounding boxes and classes) in the frames seen by your device's back camera. To get things up and running you will need to download the Object Detection app and the MobileNet SSD model and labels. Once you have downloaded the app, you can run it in Android Studio. For simplicity, the app already contains the above pre-trained .tflite model and the .txt file containing the class labels in the assets folder, so you can start using the app right away. If you want to use another TF Lite model, download the TF Lite model and labels to your computer and then drag and drop the .tflite file containing your model and the .txt file containing the labels into the assets folder of the app: ~/object_detection/app/src/main/assets/. You can then run your app in Android Studio.
In this part of the lesson you are going to deploy a TF Lite model in an Android app that recognizes speech commands. To get things up and running you will need to download the Speech Recognition app and the convolutional model. Once you have downloaded the app, you can run it in Android Studio. For simplicity, the app already contains the above pre-trained .tflite model and the .txt file containing the class labels in the assets folder, so you can start using the app right away. If you want to use another TF Lite model, download the TF Lite model and labels to your computer and then drag and drop the .tflite file containing your model and the .txt file containing the labels into the assets folder of the app: ~/speech_commands/app/src/main/assets/. You can then run your app in Android Studio.

Android Exercise: Rock, Paper, or Scissors

In this exercise you will first train your own model that classifies hand gestures into rock, paper, or scissors. Then you will deploy your model in an app. You can download the app in the link below: Rock, Paper, Scissors. Once you have downloaded the app, you can run it in Android Studio. For simplicity, the app already contains the above pre-trained .tflite model and the .txt file containing the class labels in the assets folder, so you can start using the app right away. If you want to use another TF Lite model, download the TF Lite model and labels to your computer and then drag and drop the .tflite file containing your model and the .txt file containing the labels into the assets folder of the app: ~/rock_paper_scissors/app/src/main/assets/. You can then run your app in Android Studio.
In this part of the lesson you are going to deploy a TF Lite model in an Android app that classifies images of the Fashion MNIST dataset. To get things up and running you will need to download the Fashion MNIST app and the TF Lite model we created in the Fashion MNIST Colab. Once you have downloaded the app, you can run it in Android Studio. For simplicity, the app already contains the above pre-trained .tflite model and the .txt file containing the class labels in the assets folder, so you can start using the app right away. If you want to use another TF Lite model, download the TF Lite model and labels to your computer and then drag and drop the .tflite file containing your model and the .txt file containing the labels into the assets folder of the app: ~/fashion_mnist/app/src/main/assets/. You can then run your app in Android Studio.

From my own observation, a trained model does not come with a labels.txt file. We should therefore first check the dataset source for the complete list of class names, and then write them into labels.txt:

class_names = ['rock', 'paper', 'scissors']

with open('labels.txt', 'w') as f:
    f.write('\n'.join(class_names))

TF Lite on iOS lesson materials: https://video.udacity-data.com/topher/2019/September/5d8e8cb6_lesson-4-ios-apps/lesson-4-ios-apps.zip

Since I don't currently have a Mac, I'm skipping this unit for now and will come back to it when a project calls for it.

The IoT section

The course introduces the following three devices, but it mainly focuses on setting up the Raspberry Pi:

Raspberry Pi Learning Notes index
https://coral.ai/
Besides the devices above, TF Lite models can also run on devices such as the Arduino.

You can refer to my earlier articles; the only difference is that the installation there was done on a Jetson Nano:

NVIDIA Jetson Nano Learning Notes (4): Installing and Running the Official TensorFlow Lite Model Examples
NVIDIA Jetson Nano Learning Notes (6): Real-time Object Detection System (PiCamera + OpenCV + TensorFlow Lite)

COCO model output format:

index[0]: locations, a float list with values between 0 and 1 (normalized bounding-box coordinates)
index[1]: classes, a list of integer class IDs output as floats
index[2]: scores, floats between 0 and 1 giving the probability that the class was detected
index[3]: number of detections, the total number of detected objects, as a float
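As a minimal sketch of how those four outputs are read in Python with tf.lite.Interpreter (the model file name and the input preprocessing below are assumptions, not part of the course code; the index order follows the list above):

import numpy as np
import tensorflow as tf

# Load a TF Lite SSD model (file name is a placeholder)
interpreter = tf.lite.Interpreter(model_path='detect.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# `frame` stands in for a preprocessed camera frame matching the model's input shape and dtype
frame = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], frame)
interpreter.invoke()

boxes = interpreter.get_tensor(output_details[0]['index'])[0]    # index[0]: locations in [0, 1]
classes = interpreter.get_tensor(output_details[1]['index'])[0]  # index[1]: class IDs as floats
scores = interpreter.get_tensor(output_details[2]['index'])[0]   # index[2]: scores in [0, 1]
count = int(interpreter.get_tensor(output_details[3]['index'])[0])  # index[3]: number of detections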
To get everything up and running on a Raspberry Pi you first need to download the Cats and Dogs application and the TF Lite model we created in the Transfer Learning Colab.
pip install -r requirements.txt
python3 classify.py --filename input.jpg --model_path converted_model.tflite
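classify.py is provided with the course repo; roughly, a script like it loads the TF Lite model, preprocesses the image, and prints the top label. A sketch under the assumption of a float-input image classification model and a labels.txt file next to the script (the argument names follow the command above; the preprocessing details are assumptions):

import argparse
import numpy as np
import tensorflow as tf
from PIL import Image

parser = argparse.ArgumentParser()
parser.add_argument('--filename', required=True)
parser.add_argument('--model_path', required=True)
args = parser.parse_args()

# Load the TF Lite model and read its expected input size
interpreter = tf.lite.Interpreter(model_path=args.model_path)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, height, width, _ = input_details[0]['shape']

# Resize the image and scale pixels to [0, 1] (assumed preprocessing)
image = Image.open(args.filename).convert('RGB').resize((width, height))
input_data = np.expand_dims(np.array(image, dtype=np.float32) / 255.0, axis=0)

interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
probabilities = interpreter.get_tensor(output_details[0]['index'])[0]

# Map the top prediction back to a class name using labels.txt
labels = open('labels.txt').read().splitlines()
print(labels[int(np.argmax(probabilities))])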
In this exercise you will first train your own model that classifies hand gestures into rock, paper, or scissors. Then you will deploy your model in an application. You can download the application in the link below: Rock, Paper, Scissors (Solution: Rock, Paper, or Scissors).

pip install -r requirements.txt
python3 classify.py --filename "hand.png" --model_path "rock_paper_scissors.tflite"
