# Face Recognition on Jetson Nano using DeepStream and Python
`install-deepstream.sh` will install DeepStream and its dependencies.
- Download DeepStream using this link.
- Get the JetPack version:

  ```
  $ dpkg-query --show nvidia-l4t-core
  nvidia-l4t-core 32.3.1-20191209225816
  ```
- Export the needed variables:

  ```sh
  export JETPACK_VERSION=32.3
  export PLATFORM=<platform>
  export DEEPSTREAM_SDK_TAR_PATH=<path>
  ```
  where `<platform>` identifies the platform's processor:

  - `t186` for Jetson TX2 series
  - `t194` for Jetson AGX Xavier series or Jetson Xavier NX
  - `t210` for Jetson Nano or Jetson TX1
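For example, on a Jetson Nano running JetPack 32.3, the exports could look like this (the tarball path below is hypothetical; point it at wherever you saved the download):

```sh
# Jetson Nano => t210; JetPack version taken from the dpkg-query output above.
export JETPACK_VERSION=32.3
export PLATFORM=t210
# Hypothetical download location; use your actual DeepStream SDK tarball path.
export DEEPSTREAM_SDK_TAR_PATH="$HOME/Downloads/deepstream_sdk.tbz2"
```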
- Run the installation script:

  ```sh
  chmod +x install-deepstream.sh
  sudo -E ./install-deepstream.sh
  ```
- Make sure the installation works by running a sample app:

  ```sh
  cd /opt/nvidia/deepstream/deepstream-5.0/sources/deepstream_python_apps/apps/deepstream-test1
  python3 deepstream_test_1.py /opt/nvidia/deepstream/deepstream-5.0/samples/streams/sample_720p.h264
  ```

  It may take some time to build the model engine and run the application the first time.
This demo is built on top of the Python sample app `deepstream-test2`.
- Download the back-to-back-detectors model (it can detect faces). It is used as the primary inference.
- The secondary inference is the facenet engine.
- No changes are needed for the tracker.
- Note: the embedding dataset (`.npz` file) should be generated from your own dataset.
- Note: you should compute the average mean and average std of your dataset, then put the average mean in the `offsets` parameter and `1 / avg std` in the `net-scale-factor` parameter in `classifier_config.txt` to make the facenet model work efficiently.
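As a sketch of the note above: DeepStream's classifier preprocessing computes roughly `y = net-scale-factor * (x - offsets)`, so using the dataset's average mean as `offsets` and `1 / avg std` as `net-scale-factor` standardizes the input. A minimal computation with numpy (the random `face_crops` array is a stand-in for your real aligned 160x160 face crops):

```python
import numpy as np

# Stand-in for your real dataset: N aligned 160x160 RGB face crops (uint8).
face_crops = np.random.randint(0, 256, size=(100, 160, 160, 3), dtype=np.uint8)

pixels = face_crops.astype(np.float64)
avg_mean = pixels.mean()
avg_std = pixels.std()

# Values for classifier_config.txt:
#   offsets          = avg mean (repeated for each of the 3 channels)
#   net-scale-factor = 1 / avg std
net_scale_factor = 1.0 / avg_std
print(f"offsets={avg_mean:.2f};{avg_mean:.2f};{avg_mean:.2f}")
print(f"net-scale-factor={net_scale_factor:.6f}")
```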
- Generate the engine file for the facenet model.
- `facenet_keras.h5` can be found in the `models` folder. The model is taken from nyoki-mtl/keras-facenet.
- Convert the facenet model to a TensorRT engine using this jupyter notebook. The steps in the notebook are taken from the official Nvidia tutorial.
- When converting the `.pb` file to ONNX, use the command below instead:

  ```sh
  python -m tf2onnx.convert --input facenet.pb --inputs input_1:0[1,160,160,3] --inputs-as-nchw input_1:0 --outputs Bottleneck_BatchNorm/batchnorm_1/add_1:0 --output facenet.onnx
  ```

  Note: make sure to pass `--inputs-as-nchw input_1:0` while converting to ONNX to avoid this error:

  ```
  Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:874> [UID = 2]: RGB/BGR input format specified but network input channels is not 3
  ```
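The flag matters because Keras/TensorFlow models use the NHWC layout, while DeepStream feeds the engine NCHW tensors; `--inputs-as-nchw` bakes the equivalent of this transpose into the exported graph:

```python
import numpy as np

# Dummy batch in the Keras/TensorFlow layout: (batch, height, width, channels).
nhwc = np.zeros((1, 160, 160, 3), dtype=np.float32)

# DeepStream's expected layout: (batch, channels, height, width).
nchw = nhwc.transpose(0, 3, 1, 2)

print(nchw.shape)  # (1, 3, 160, 160)
```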
- Change the `model-engine-file` path in `classifier_config.txt` to your facenet engine file.
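A hypothetical excerpt of `classifier_config.txt` after the change (the path and values are placeholders for your own engine file and dataset statistics):

```ini
[property]
# Point this at the engine file generated from facenet.onnx.
model-engine-file=/path/to/facenet.engine
# offsets = avg mean of your dataset; net-scale-factor = 1 / avg std.
offsets=127.5;127.5;127.5
net-scale-factor=0.0078125
```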
- Run:

  ```sh
  python3 deepstream_test_2.py <h264_elementary_stream_contains_faces>
  ```
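Once the facenet engine produces an embedding for a detected face, recognition is typically a nearest-neighbor match against the saved `.npz` embeddings. A minimal sketch with cosine similarity (the names, the `identify` helper, and the 4-D toy vectors are illustrative; real facenet embeddings are higher-dimensional):

```python
import numpy as np

def identify(query, dataset, threshold=0.7):
    """Return the best-matching name by cosine similarity, or None if no
    enrolled embedding scores above the threshold."""
    query = query / np.linalg.norm(query)
    best_name, best_score = None, threshold
    for name, emb in dataset.items():
        emb = emb / np.linalg.norm(emb)
        score = float(np.dot(query, emb))
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy enrolled dataset, as if loaded via np.load("embeddings.npz").
dataset = {"alice": np.array([1.0, 0.0, 0.0, 0.0]),
           "bob":   np.array([0.0, 1.0, 0.0, 0.0])}
print(identify(np.array([0.9, 0.1, 0.0, 0.0]), dataset))  # alice
```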
You can find more resources about our face recognition work and inference results at https://www.riotu-lab.org/face/