How to use the TFjs converter

You can use tfjs-converter to quantize the weights of ML models, e.g. models from TF Hub.

TFjs converter on Windows 10

Python 3.6.8 is installed (run as Administrator, with the pip and PATH options checked) on a dedicated HP laptop with Windows 10.
Then "pip install tensorflowjs" ("tensorflowjs_converter --version" reports 1.4.0),
and the example from Q4 of the tfjs-converter FAQ is run as one long line in Windows PowerShell:

tensorflowjs_converter --input_format=tf_hub --quantization_bytes=1 'https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/classification/1' web_model

I get an 8-bit quantized model (~4 MB).
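
To double-check that the weights really were quantized, you can inspect the weight manifest that the converter writes into web_model/model.json. A minimal sketch, assuming the standard tfjs graph-model manifest layout where each weight entry carries an optional "quantization" record:

import json

with open('web_model/model.json') as f:
    model_json = json.load(f)

# With --quantization_bytes=1 each weight should carry a
# "quantization" record (dtype uint8 plus min/scale).
for group in model_json['weightsManifest']:
    for weight in group['weights']:
        q = weight.get('quantization')
        print(weight['name'], weight['dtype'],
              q['dtype'] if q else 'not quantized')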

Quantization-aware models: Unsupported Ops error

Unfortunately, I can't convert the "quantops" modules from TF Hub:
tensorflowjs_converter --input_format=tf_hub --quantization_bytes=1 'https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/quantops/classification/3' web_model

ValueError: Unsupported Ops in the model before optimization
FakeQuantWithMinMaxArgs

Can we remove TF FakeQuantWithMinMaxArgs ops on Colab?
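
One possible workaround, as an untested sketch: take the frozen graph from the tarball listed below and splice the fake-quant nodes out with the Graph Transform Tool, which is still shipped with TF 1.x (on Colab you could select it via "%tensorflow_version 1.x"). The file name and the 'input' / 'MobilenetV1/Predictions/Reshape_1' tensor names are the usual ones for the MobileNet v1 frozen graphs and should be verified against the actual model. FakeQuantWithMinMaxArgs keeps min/max as attributes, so it has a single data input and remove_nodes can rewire around it:

# Untested sketch, TF 1.x: strip FakeQuantWithMinMaxArgs nodes from the
# frozen graph so tfjs-converter no longer sees unsupported ops.
import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('mobilenet_v1_1.0_224_quant_frozen.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# remove_nodes splices out each matching node, reconnecting its input
# to its consumers; strip_unused_nodes cleans up what is left over.
stripped = TransformGraph(
    graph_def,
    ['input'],
    ['MobilenetV1/Predictions/Reshape_1'],
    ['remove_nodes(op=FakeQuantWithMinMaxArgs)', 'strip_unused_nodes'])

with tf.io.gfile.GFile('mobilenet_v1_quant_stripped.pb', 'wb') as f:
    f.write(stripped.SerializeToString())

The stripped graph would still need to be wrapped back into a SavedModel (or fed to an older converter version that accepts frozen graphs) before tfjs-converter can consume it, and accuracy may drift since the simulated rounding is gone.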

https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet_v1.md
https://www.tensorflow.org/tutorials/images/transfer_learning_with_hub
https://www.tensorflow.org/tutorials/keras/save_and_load
https://www.tensorflow.org/hub/tf2_saved_model
https://www.tensorflow.org/guide/saved_model

To begin with, how to get a .summary() of the "quantops" MobileNet_v1? :)

import tensorflow as tf
import tensorflow_hub as hub

classifier_url = "https://tfhub.dev/google/imagenet/mobilenet_v1_100_224/quantops/classification/3" #@param {type:"string"}

IMAGE_SHAPE = (224, 224)
# hub.KerasLayer pulls the whole TF Hub module in as one Keras layer
classifier = tf.keras.Sequential([
    hub.KerasLayer(classifier_url, input_shape=IMAGE_SHAPE+(3,))
])
classifier.summary()

Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
keras_layer_1 (KerasLayer)   (None, 1001)              4255001   
=================================================================
Total params: 4,255,001
Trainable params: 0
Non-trainable params: 4,255,001
_________________________________________________________________
No layers? (hub.KerasLayer wraps the entire TF Hub module as a single opaque layer, so summary() can only show that one row.)

File 'mobilenet_v1_1.tar'

17 020 468 mobilenet_v1_1.0_224_quant.ckpt.data-00000-of-00001
    14 644 mobilenet_v1_1.0_224_quant.ckpt.index
 5 143 394 mobilenet_v1_1.0_224_quant.ckpt.meta
 4 276 352 mobilenet_v1_1.0_224_quant.tflite
   885 850 mobilenet_v1_1.0_224_quant_eval.pbtxt
17 173 742 mobilenet_v1_1.0_224_quant_frozen.pb
        89 mobilenet_v1_1.0_224_quant_info.txt
Are there layers somewhere?
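
The frozen graph does contain all the per-layer ops. A quick way to see them, assuming the tarball has been extracted next to the notebook:

import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('mobilenet_v1_1.0_224_quant_frozen.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Prints every node: Conv2D, DepthwiseConv2dNative, Relu6,
# the FakeQuant ops, etc. -- the "layers" live here, not in Keras.
for node in graph_def.node:
    print(node.op, node.name)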

More

"Quantops" data (e.g. imagenet/mobilenet_v1_100_160/quantops/classification) are meant for use in models whose weights will be quantized to uint8. The trained weights of this module are shipped as float32 numbers, but its graph has been augmented by tf.contrib.quantize with extra ops that simulate the effect of quantization already during training, so that the model can adjust to it.

The TF-Lite hosted ("optimized") models page has a few more models trained with quantization-aware training in .pb format. Unfortunately, the SSD model is so far available only in .tflite format.
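
A .tflite file can't be fed to tfjs-converter, but it can at least be inspected from Python with the TF-Lite interpreter. A sketch, assuming the .tflite file from the tarball above:

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='mobilenet_v1_1.0_224_quant.tflite')
interpreter.allocate_tensors()

# Shows the uint8 input/output tensors with their (scale, zero_point) quantization.
print(interpreter.get_input_details())
print(interpreter.get_output_details())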


TFjs notes     updated 10 Dec 2019