...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
export BASEDIR=~/armnn-pi
export PATH=$BASEDIR/boost.build/bin:$PATH
export PATH=$BASEDIR/protobuf-host/bin:$PATH
export LD_LIBRARY_PATH=$BASEDIR/protobuf-host/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$BASEDIR/armnn/build:$LD_LIBRARY_PATH
export ARMNN_INCLUDE=$BASEDIR/armnn/include
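These exports only last for the current shell session. A minimal sketch, using the same paths, that appends them to ~/.bashrc so they are restored on every login:

```shell
# Append the Arm NN environment setup to ~/.bashrc so it persists across
# sessions. Paths are the ones used throughout this document.
cat >> ~/.bashrc <<'EOF'
export BASEDIR=~/armnn-pi
export PATH=$BASEDIR/boost.build/bin:$BASEDIR/protobuf-host/bin:$PATH
export LD_LIBRARY_PATH=$BASEDIR/protobuf-host/lib:$BASEDIR/armnn/build:$LD_LIBRARY_PATH
export ARMNN_INCLUDE=$BASEDIR/armnn/include
EOF
```

Open a new shell (or run `source ~/.bashrc`) for the settings to take effect.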
...
Arm NN SDK provides a set of tests which also serve as demos showing what Arm NN does and how to use it. They load neural network models in various formats (Caffe, TensorFlow, TensorFlow Lite, ONNX), run inference on specified input data, and output the inference result.
The Arm NN SDK can be built on the SP7021 with Raspberry Pi OS.
Please see the attached file for the detailed build process.
1. Arm Compute Library
The library source is in $BASEDIR/ComputeLibrary.
The built library is in $BASEDIR/ComputeLibrary/build.
1.1 Running a DNN with random weights and inputs
The Arm Compute Library comes with examples for the most common DNN architectures, such as AlexNet, MobileNet, ResNet, Inception v3, Inception v4, SqueezeNet, etc.
The source code of all available examples is in $BASEDIR/ComputeLibrary/examples.
The built example binaries are in $BASEDIR/ComputeLibrary/build/examples.
Each model architecture can be tested with graph_[dnn_model] application. For example, to run the MobileNet v2 DNN model with random weights, run the example application without any argument:
cd $BASEDIR/ComputeLibrary/build/examples
export LD_LIBRARY_PATH=$BASEDIR/ComputeLibrary/build:$LD_LIBRARY_PATH
./graph_mobilenet_v2
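Since each architecture ships as its own graph_[dnn_model] binary, the whole example set can be smoke-tested in one pass. A sketch, assuming the build layout above (the helper name is ours, not part of the SDK):

```shell
# Run each Arm Compute Library graph example once with random weights
# (no arguments) and report pass/fail per binary.
smoke_test_graph_examples() {
    local dir="${1:-$BASEDIR/ComputeLibrary/build/examples}" ex
    for ex in "$dir"/graph_*; do
        [ -x "$ex" ] || continue
        if "$ex" > /dev/null 2>&1; then
            echo "PASS: $(basename "$ex")"
        else
            echo "FAIL: $(basename "$ex")"
        fi
    done
}
```

Remember to export LD_LIBRARY_PATH as shown above before calling `smoke_test_graph_examples`.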
pi@raspberrypi:~/ArmnnTests$ ls -l ~/armnn-pi/armnn/build/tests/ | grep "Caff"
...
Download the model files:
curl -L -o deploy.prototxt https://raw.githubusercontent.com/BVLC/caffe/master/models/bvlc_alexnet/deploy.prototxt
curl -L -o bvlc_alexnet.caffemodel http://dl.caffe.berkeleyvision.org/bvlc_alexnet.caffemodel
cp deploy.prototxt bvlc_alexnet_1.prototxt
nano bvlc_alexnet_1.prototxt
Change the batch size to 1.
Original content:
name: "AlexNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 10 dim: 3 dim: 227 dim: 227 } }
Modified content:
name: "AlexNet"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 227 dim: 227 } }
Run the following Python script to transform the network:
python3
import caffe
net = caffe.Net('deploy.prototxt', 'bvlc_alexnet.caffemodel', caffe.TEST)
new_net = caffe.Net('bvlc_alexnet_1.prototxt', 'bvlc_alexnet.caffemodel', caffe.TEST)
new_net.save('bvlc_alexnet_1.caffemodel')
Copy bvlc_alexnet_1.caffemodel from the Linux host to ~/ArmnnTests/models on the SP7021.
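The batch-size edit above can also be scripted instead of done in nano. A sketch using sed; the helper name is ours, and it assumes the Input layer's shape line has the layout shown above:

```shell
# Copy a Caffe deploy file and rewrite the batch (first) dimension of the
# Input layer's shape to 1.
# Usage: set_batch_size_one <in.prototxt> <out.prototxt>
set_batch_size_one() {
    cp "$1" "$2"
    sed -i 's/shape: { dim: [0-9]* /shape: { dim: 1 /' "$2"
}
```

For AlexNet this would be `set_batch_size_one deploy.prototxt bvlc_alexnet_1.prototxt`; verify the result with `grep input_param bvlc_alexnet_1.prototxt`.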
...
2.1.2 CaffeInception_BN-Armnn
- Use a Linux host with py-caffe installed.
Download the model files:
cd ~/ArmnnTests
curl -L -o deploy.prototxt https://raw.githubusercontent.com/pertusa/InceptionBN-21K-for-Caffe/master/deploy.prototxt
curl -L -o Inception21k.caffemodel http://www.dlsi.ua.es/~pertusa/deep/Inception21k.caffemodel
cp deploy.prototxt Inception-BN-batchsize1.prototxt
nano Inception-BN-batchsize1.prototxt
Change the batch size to 1.
Original content
name: "Inception21k"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 10 dim: 3 dim: 224 dim: 224 } }
Modified content:
name: "Inception21k"
layer {
  name: "data"
  type: "Input"
  top: "data"
  input_param { shape: { dim: 1 dim: 3 dim: 224 dim: 224 } }
Run the following Python script to transform the network:
python3
import caffe
net = caffe.Net('deploy.prototxt', 'Inception21k.caffemodel', caffe.TEST)
new_net = caffe.Net('Inception-BN-batchsize1.prototxt', 'Inception21k.caffemodel', caffe.TEST)
new_net.save('Inception-BN-batchsize1.caffemodel')
Copy Inception-BN-batchsize1.caffemodel to ~/ArmnnTests/models on the SP7021.
...
2.1.3 CaffeMnist-Armnn
Use a Linux host with py-caffe installed.
Download the model files:
cd ~/ArmnnTests
curl -L -o lenet.prototxt https://raw.githubusercontent.com/BVLC/caffe/master/examples/mnist/lenet.prototxt
curl -L -o lenet_iter_9000_ori.caffemodel https://github.com/ARM-software/ML-examples/raw/master/armnn-mnist/model/lenet_iter_9000.caffemodel
cp lenet.prototxt lenet_iter_9000.prototxt
nano lenet_iter_9000.prototxt
Change the batch size to 1.
Original content:
name: "LeNet"
layer {
name: "data"
type: "Input"
top: "data"
input_param { shape: { dim: 64 dim: 1 dim: 28 dim: 28 } }
Modified content:
name: "LeNet"
layer {
name: "data"
type: "Input"
top: "data"
input_param { shape: { dim: 1 dim: 1 dim: 28 dim: 28 } }
Run the following Python script to transform the network:
python3
import caffe
net = caffe.Net('lenet.prototxt', 'lenet_iter_9000_ori.caffemodel', caffe.TEST)
new_net = caffe.Net('lenet_iter_9000.prototxt', 'lenet_iter_9000_ori.caffemodel', caffe.TEST)
new_net.save('lenet_iter_9000.caffemodel')
Copy lenet_iter_9000.caffemodel to ~/ArmnnTests/models on the SP7021.
- Find a .jpg file containing a shark (great white shark). Rename it to shark.jpg and copy it to the data folder on the SP7021.
- Download the two archives below and unpack them:
...
2.2.1 TfInceptionV3-Armnn
1. Download the model file, unpack it, and move it to the models folder:
cd ~/ArmnnTests
curl -L -o inception_v3_2016_08_28_frozen.pb.tar.gz https://storage.googleapis.com/download.tensorflow.org/models/inception_v3_2016_08_28_frozen.pb.tar.gz
tar zxvf inception_v3_2016_08_28_frozen.pb.tar.gz
mv inception_v3_2016_08_28_frozen.pb ./models/
2. Find a .jpg file containing a shark (great white shark). Rename it to shark.jpg and copy it to the data folder on the SP7021.
...
TfInceptionV3-Armnn --data-dir=data --model-dir=models
This is not an execution error. It occurs because the TfInceptionV3-Armnn test expects specific types of dog, cat, and shark to be found, so if a different type or breed of these animals is passed to the test, it reports a failed case.
The expected inputs for this test are:
ID | Label | File name
208 | Golden Retriever | Dog.jpg
283 | Tiger Cat | Cat.jpg
3 | White Shark | shark.jpg
The complete list of supported objects can be found at https://github.com/ARM-software/armnn/blob/branches/armnn_18_11/tests/TfLiteMobilenetQuantized-Armnn/labels.txt
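Before running the test, it is worth confirming that the three expected image files from the table above are actually in the data folder. A small sketch (the helper name is ours):

```shell
# Report any of the expected TfInceptionV3-Armnn input images that are
# missing from the data folder (file names per the table above).
check_test_inputs() {
    local dir="${1:-data}" missing=0 f
    for f in Dog.jpg Cat.jpg shark.jpg; do
        if [ ! -f "$dir/$f" ]; then
            echo "missing: $dir/$f"
            missing=1
        fi
    done
    return $missing
}
```

Run `check_test_inputs data` from ~/ArmnnTests; it exits non-zero and names each missing file.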
...
Arm NN SDK provides the following test for TensorFlow Lite models:
pi@raspberrypi:~/ArmnnTests$ ls -l ~/armnn-pi/armnn/build/tests/ | grep "TfL"
-rwxr-xr-x 1 pi pi 919808 Sep 2 05:08 TfLiteInceptionV3Quantized-Armnn
-rwxr-xr-x 1 pi pi 919808 Sep 2 05:28 TfLiteInceptionV4Quantized-Armnn
-rwxr-xr-x 1 pi pi 919656 Sep 2 05:42 TfLiteMnasNet-Armnn
-rwxr-xr-x 1 pi pi 921120 Sep 2 05:29 TfLiteMobilenetQuantized-Armnn
-rwxr-xr-x 1 pi pi 919812 Sep 2 05:14 TfLiteMobileNetQuantizedSoftmax-Armnn
-rwxr-xr-x 1 pi pi 915588 Sep 2 05:20 TfLiteMobileNetSsd-Armnn
-rwxr-xr-x 1 pi pi 919808 Sep 2 05:15 TfLiteMobilenetV2Quantized-Armnn
-rwxr-xr-x 1 pi pi 919808 Sep 2 05:02 TfLiteResNetV2-50-Quantized-Armnn
-rwxr-xr-x 1 pi pi 919656 Sep 2 05:06 TfLiteResNetV2-Armnn
-rwxr-xr-x 1 pi pi 919800 Sep 2 05:42 TfLiteVGG16Quantized-Armnn
-rwxr-xr-x 1 pi pi 666068 Sep 2 05:23 TfLiteYoloV3Big-Armnn
2.3.1 TfLiteInceptionV3Quantized-Armnn
...
Arm NN SDK provides the following set of tests for ONNX models:
pi@raspberrypi:~/ArmnnTests$ ls -l ~/armnn-pi/armnn/build/tests/ | grep "Onn"
-rwxr-xr-x 1 pi pi 729136 Sep 2 05:08 OnnxMnist-Armnn
-rwxr-xr-x 1 pi pi 915132 Sep 2 05:01 OnnxMobileNet-Armnn
2.4.1 OnnxMnist-Armnn
1. Download and unpack the model file:
cd ~/ArmnnTests
curl -L -o mnist.tar.gz https://onnxzoo.blob.core.windows.net/models/opset_8/mnist/mnist.tar.gz
tar zxvf mnist.tar.gz
2. Rename the model.onnx file to mnist_onnx.onnx and copy it to the models folder on the SP7021
mv ./mnist/model.onnx ./mnist/mnist_onnx.onnx
cp ./mnist/mnist_onnx.onnx ./models/
3. Download the two archives below and unpack them:
curl -L -o t10k-images-idx3-ubyte.gz http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz
curl -L -o t10k-labels-idx1-ubyte.gz http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz
gzip -d t10k-images-idx3-ubyte.gz
gzip -d t10k-labels-idx1-ubyte.gz
4. Rename the two files to t10k-images.idx3-ubyte and t10k-labels.idx1-ubyte and copy them to the data folder on the SP7021.
mv t10k-images-idx3-ubyte t10k-images.idx3-ubyte
mv t10k-labels-idx1-ubyte t10k-labels.idx1-ubyte
cp t10k-images.idx3-ubyte ./data/
cp t10k-labels.idx1-ubyte ./data/
5. Run the test:
OnnxMnist-Armnn --data-dir=data --model-dir=models
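A quick way to confirm the renamed files really are MNIST IDX binaries (and not, say, an HTML error page from a failed download) is to check their magic numbers. A sketch, assuming the standard IDX header layout (the helper name is ours):

```shell
# Check the 4-byte big-endian magic number at the start of an IDX file.
# Image files start with 0x00000803, label files with 0x00000801.
check_idx_magic() {
    [ "$(head -c 4 "$1" | od -An -tx1 | tr -d ' \n')" = "$2" ]
}
```

For the files above, `check_idx_magic t10k-images.idx3-ubyte 00000803` and `check_idx_magic t10k-labels.idx1-ubyte 00000801` should both succeed.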
2.4.2 OnnxMobileNet-Armnn
1. Download and unpack the model file:
cd ~/ArmnnTests
curl -L -o mobilenetv2-1.0.tar.gz https://s3.amazonaws.com/onnx-model-zoo/mobilenet/mobilenetv2-1.0/mobilenetv2-1.0.tar.gz
tar zxvf mobilenetv2-1.0.tar.gz
2. Copy the unpacked mobilenetv2-1.0.onnx file to the models folder on the SP7021
cp ./mobilenetv2-1.0/mobilenetv2-1.0.onnx ./models/
3. Find a .jpg file containing a shark (great white shark). Rename it to shark.jpg and copy it to the data folder on the SP7021.
4. Find a .jpg file containing a dog (labrador retriever). Rename it to Dog.jpg and copy it to the data folder on the SP7021.
5. Find a .jpg file containing a cat (tiger cat). Rename it to Cat.jpg and copy it to the data folder on the SP7021.
6. Run the test:
OnnxMobileNet-Armnn --data-dir=data --model-dir=models
3. Python interface to Arm NN (PyArmNN)
For a more complete Arm NN experience, there are examples located in ~/armnn-pi/armnn/python/pyarmnn/examples/, which require the requests and PIL modules (and possibly other Python 3 modules). You can install any missing modules using pip3.
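To see up front which of those modules are missing, a small sketch (the helper name is ours) that tries to import each one:

```shell
# List any of the named Python 3 modules that cannot be imported; missing
# ones can then be installed with pip3 (the PIL module is provided by the
# "pillow" package).
check_python_modules() {
    local missing="" m
    for m in "$@"; do
        python3 -c "import $m" 2>/dev/null || missing="$missing $m"
    done
    if [ -n "$missing" ]; then
        echo "missing:$missing"
        return 1
    fi
    echo "all modules present"
}
```

For the PyArmNN examples: `check_python_modules requests PIL`.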
To run these examples, simply execute them with the Python 3 interpreter; they take no arguments, and the scripts download the resources they need:
cd ~/armnn-pi/armnn/python/pyarmnn/examples/
python3 tflite_mobilenetv1_quantized.py
...
python3 onnx_mobilenetv2.py
...