Weight quantization test


Pav = (1/N) Σ Pi,   Disp = (1/N) Σ (Pi − Pav)²,   Error = (Disp/N)^½.
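The statistics above can be sketched in a few lines of plain JavaScript (no TF.js needed); the function name `stats` and the input array are assumptions for illustration:

```javascript
// Mean (Pav), dispersion (Disp) and error of the mean (Error)
// for an array of collected class probabilities.
function stats(probs) {
  const n = probs.length;
  // Pav = (1/N) Σ Pi
  const pav = probs.reduce((s, p) => s + p, 0) / n;
  // Disp = (1/N) Σ (Pi − Pav)²
  const disp = probs.reduce((s, p) => s + (p - pav) ** 2, 0) / n;
  // Error = (Disp/N)^½
  const error = Math.sqrt(disp / n);
  return { pav, disp, error };
}
```

For example, `stats([0.4, 0.5, 0.6])` returns pav = 0.5 with a small error of the mean.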

"Daisy" weight quantization demo. Press "iterate" to collect 1000 daisy-class (ImageNet class 985) probabilities. See the console for more details. A local copy of the original layers model from 'https://storage.googleapis.com/tfjs-models/tfjs/mobilenet_v1_1.0_224/model.json' is used.

Numbers

CNN        Pav
______________
relu6      0.476
relu6 2B   0.474
relu6 1B   0.237
relu       0.264
relu  2B   0.263
relu  1B   0.128

(2B / 1B: 2-byte / 1-byte quantized weights)
We see inference degradation for 1-byte quantization!

I had to change 'relu6' → 'relu' by hand in the 'model.json' file because of the "ValueError: Unknown activation function:relu6" raised by tensorflowjs_converter. After model quantization, 'relu6' can be restored (a layers-to-layers model conversion is used).
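For reference, the layers-to-layers quantization step looks roughly like the command below. This is a sketch, not the exact command used here: the flag names follow the tensorflowjs 1.x converter (newer releases replaced `--quantization_bytes` with `--quantize_uint8` / `--quantize_uint16` / `--quantize_float16`), and the input/output paths are placeholders:

```shell
# 2-byte weight quantization of a tfjs layers model
# (paths are hypothetical; run after patching relu6 -> relu in model.json)
tensorflowjs_converter \
  --input_format=tfjs_layers_model \
  --output_format=tfjs_layers_model \
  --quantization_bytes=2 \
  ./mobilenet_v1/model.json ./mobilenet_v1_2b
```

For the 1-byte rows in the table, `--quantization_bytes=1` would be used instead.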

In fact, the Qualcomm team uses 'relu' too; they report similar results for 'relu6' and 'relu' after weight equalization.


TFjs notes, updated 6 Dec 2019