MobileNet V1 consists of a regular 3×3 convolution as the very first layer,
followed by 13 depthwise separable convolution blocks (shown below).
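The efficiency gain from replacing regular convolutions with depthwise separable blocks can be seen from the parameter counts alone. A minimal sketch, using hypothetical mid-network channel sizes (256 in, 256 out) rather than the exact MobileNet V1 configuration:

```python
# Parameter counts (biases ignored) for a regular k x k convolution versus
# a depthwise separable block (k x k depthwise + 1x1 pointwise convolution).
def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # depthwise: one k x k filter per input channel; pointwise: 1x1 conv
    return k * k * c_in + c_in * c_out

# Illustrative sizes, not the exact MobileNet V1 layer shapes.
regular = conv_params(3, 256, 256)                    # 589824
separable = depthwise_separable_params(3, 256, 256)   # 67840
print(regular, separable, round(regular / separable, 1))
```

For these sizes the separable block uses roughly 8.7× fewer weights, which is why the architecture stays small enough for in-browser inference.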
Channel weights and biases are plotted per output channel, similar to the figures in Data-Free Quantization.
You can use e.g. the "JSON Viewer Awesome" Chrome extension to explore the model.json file.
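The same inspection can be scripted. A minimal sketch of walking a TensorFlow.js weights manifest, using a hand-written miniature manifest inline (a real file would be read with `json.load`); the field names (`weightsManifest`, `weights`, `name`, `shape`, `dtype`) follow the TF.js model.json format, and the tensor names and shapes below are illustrative assumptions:

```python
import json

# Miniature stand-in for a TF.js model.json weights manifest.
model_json = json.loads("""
{
  "weightsManifest": [
    {
      "paths": ["group1-shard1of1.bin"],
      "weights": [
        {"name": "conv1/kernel", "shape": [3, 3, 3, 32], "dtype": "float32"},
        {"name": "conv1/bias",   "shape": [32],          "dtype": "float32"}
      ]
    }
  ]
}
""")

# List every weight tensor with its shape and dtype.
for group in model_json["weightsManifest"]:
    for w in group["weights"]:
        print(w["name"], w["shape"], w["dtype"])
```

This is handy for checking which tensors were quantized and how large each shard is.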
Reduced MobileNet_1 LayersModel and GraphModel daisy test.
Similar to MobileNet V2, "the weight distributions differ so strongly between output channels that the same set of quantization parameters cannot be used to quantize the full weight tensor effectively".
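Why per-tensor parameters fail here can be shown with a toy example. A pure-Python sketch of 8-bit affine quantization applied to two "channels" with very different ranges (the numbers are illustrative, not real MobileNet weights):

```python
# Quantize values onto a uniform 8-bit grid over [lo, hi], then dequantize.
def quantize(values, lo, hi, levels=256):
    scale = (hi - lo) / (levels - 1)
    return [lo + round((v - lo) / scale) * scale for v in values]

def max_error(values, dequantized):
    return max(abs(a - b) for a, b in zip(values, dequantized))

ch_small = [0.01, -0.02, 0.015]   # narrow-range output channel
ch_large = [5.0, -4.0, 3.0]       # wide-range output channel

# Per-tensor: one [lo, hi] range must cover both channels, so the
# narrow-range channel is quantized with a far-too-coarse step.
err_per_tensor = max_error(ch_small, quantize(ch_small, -4.0, 5.0))

# Per-channel: the narrow-range channel gets its own tight range.
err_per_channel = max_error(ch_small, quantize(ch_small, -0.02, 0.015))

print(err_per_tensor, err_per_channel)
```

With a shared range the quantization step is (5 − (−4))/255 ≈ 0.035, larger than the narrow channel's entire range, so per-channel (or channel-equalized) parameters are needed to keep the error acceptable.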