What did MobileNet learn? Interactive image classification by MobileNet
Open a new browser window (or tab) and copy / paste (Ctrl+V) images from
image.google.com,
unsplash.com, ...
You can also copy images from your PC (and flip, grayscale them, and much more), e.g. with Photos or Paint.
Drag the mouse (from the top to the right) to mark a new region for inference.
Use the A, S, D, W keys to translate the bounding box and J, K, L, I to enlarge it (hold the Shift key
to move 10x faster). This region of the source image (see its position,
width, and height in the console) is scaled to a 224x224 image and shown on the right.
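The key handling can be sketched as a pure function on the box state. This is a minimal illustration, not the demo's actual code; the exact key-to-edge mapping and field names are assumptions:

```javascript
// Sketch: update a bounding box { x, y, w, h } (source-image pixels)
// from a pressed key; `step` would be 10x larger while Shift is held.
function updateBox(box, key, step = 1) {
  const b = { ...box };
  switch (key) {
    case 'a': b.x -= step; break;  // translate left
    case 'd': b.x += step; break;  // translate right
    case 'w': b.y -= step; break;  // translate up
    case 's': b.y += step; break;  // translate down
    case 'j': b.w -= step; break;  // shrink width
    case 'l': b.w += step; break;  // enlarge width
    case 'i': b.h -= step; break;  // shrink height
    case 'k': b.h += step; break;  // enlarge height
  }
  return b;
}
```

The resulting box is then cropped from the source image and drawn into a 224x224 canvas for inference.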
This demo is based on slightly modified
TFjs MobileNet sources
(i.e. the code and licensing are by the TFjs team, the errors are mine).
TFhub models are used. They are trained on the ImageNet-2012
1000 classNames.
This application is tested in Chrome and Firefox Nightly.
Thanks to Kai Ninomiya (Google) for the hint about rendering a rectangle on the top canvas.
Translation/scale invariance
For this drawing, the probability of "tabby cat" changes a lot
when I use different bounding boxes that include the whole kitty
(different translations, scales, and deformations). The order of the Top-5 classes is much more stable.
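Extracting the Top-5 classes from the model's probability vector can be sketched as follows (a minimal illustration; the demo's actual post-processing may differ):

```javascript
// Sketch: pick the top-k (class index, probability) pairs from a
// probability vector, as done after MobileNet's softmax output.
function topK(probs, k = 5) {
  return probs
    .map((p, i) => ({ classId: i, prob: p }))  // pair each prob with its class index
    .sort((a, b) => b.prob - a.prob)           // sort by probability, descending
    .slice(0, k);                              // keep the k best
}
```

Even when individual probabilities fluctuate with translation and scale, the set and order of these top entries tends to change less.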
Big Data. How to compare the performance of different neural networks?
It is curious to test MobileNet inference on many images.
The results are funny but rather entangled and unpredictable.
There are many small nets -
SqueezeNet, MobileNet v1/v2/v3, EfficientNet - and the popular, bigger ResNet50.
For net optimization, we can try float16 precision or weight quantization.
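Uniform 8-bit quantization can be sketched like this. This assumes a simple affine min/max scheme, which is not necessarily the exact scheme TFjs or TF-Lite uses:

```javascript
// Sketch: affine 8-bit quantization. Map floats in [min, max] to 0..255
// and store the (min, scale) pair needed to recover approximate values.
function quantize(weights) {
  const min = Math.min(...weights);
  const max = Math.max(...weights);
  const scale = (max - min) / 255 || 1;  // guard against all-equal weights
  const q = Uint8Array.from(weights, w => Math.round((w - min) / scale));
  return { q, min, scale };
}

function dequantize({ q, min, scale }) {
  return Array.from(q, v => v * scale + min);
}
```

The round trip loses at most half a quantization step per weight, which is why small nets often tolerate it well.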
Therefore, to get quantitative comparison results for different CNNs,
we need a good dataset.
Since MobileNet is trained on the ImageNet-2012 data,
we could use its validation dataset (~6 GB, 50x1000 images), as the TF-Lite team does.
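Given per-image probability vectors and ground-truth labels from such a validation set, the standard top-1 / top-5 accuracy metrics can be sketched as (hypothetical data shapes, not the TF-Lite harness):

```javascript
// Sketch: top-1 / top-5 accuracy over a labelled validation set.
// predictions[n] is a probability vector, labels[n] the true class index.
function accuracy(predictions, labels) {
  let top1 = 0, top5 = 0;
  predictions.forEach((probs, n) => {
    const ranked = probs
      .map((p, i) => i)
      .sort((a, b) => probs[b] - probs[a]);  // class indices, best first
    if (ranked[0] === labels[n]) top1++;
    if (ranked.slice(0, 5).includes(labels[n])) top5++;
  });
  return { top1: top1 / labels.length, top5: top5 / labels.length };
}
```

Running this over the same 50x1000 validation images for each net gives directly comparable numbers.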
"Dogs vs. Cats" transfer learning
Let us export the trained top-layer weights from Google Colab into a TFjs application
(Transfer learning
with a pretrained ConvNet TF tutorial).
Next, we can try to use a frozen model with 8-bit quantized weights and fine-tune it.
It would be interesting to see whether we can train the top layers in the browser (training dataset - 500 MB,
validation dataset - 45 MB :)