Quick Start of Prediction Library

Prerequisites

  • Make sure you have HyperPose installed (if not, refer to the installation guide here).

  • Make sure you have svn (Subversion) and python3-pip installed (they are used by the command-line scripts).

For Linux users, you can install them with:

sudo apt -y install subversion python3 python3-pip
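
To double-check that the prerequisites are in place, you can query the installed versions:

svn --version --quiet     # prints the Subversion version
python3 --version         # prints the Python 3 version
pip3 --version            # prints the pip version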

Install Test Data

# cd to the git repo.

sh scripts/download-test-data.sh

You can download them manually to ${HyperPose}/data/media/ via LINK if the network is not working.
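
For example, a minimal manual-download sketch with wget (here <LINK> is only a placeholder for the URL above; the exact file names depend on that link):

# Replace <LINK> with the actual test-data URL before running.
wget -P ${HyperPose}/data/media/ <LINK>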

Install Test Models

# cd to the git repo, then download the pre-trained models you want.

sh scripts/download-openpose-thin-model.sh      # ~20  MB
sh scripts/download-tinyvgg-model.sh            # ~30  MB
sh scripts/download-openpose-res50-model.sh     # ~45  MB
sh scripts/download-openpose-coco-model.sh      # ~200 MB
sh scripts/download-ppn-res50-model.sh          # ~50  MB (PoseProposal Algorithm)

You can download them manually to ${HyperPose}/data/models/ via LINK if the network is not working.

Predict a sequence of images

Using a fast model

# cd to your build directory.

# Predict all images in `../data/media`
./hyperpose-cli --source ../data/media --model ../data/models/lopps-resnet50-V2-HW=368x432.onnx --w 368 --h 432
# The --source flag can be omitted, as its default value is `../data/media`.

The output images will be in the build folder.
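
With the default saving_prefix (output), you can list the rendered results from the build directory, for example:

ls output_*    # file names follow $(saving_prefix)_$(ID).$(format)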

If you see a logging message like ERROR: Tensor image cannot be both input and output, you can safely ignore it.

Table of flags for hyperpose-cli

Note that the entry point of our official Docker image is also hyperpose-cli, located in the /hyperpose/build folder.

Also see: ./hyperpose-cli --help
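
Because the container's entry point is hyperpose-cli, CLI flags can be passed straight to docker run. A minimal sketch, assuming the image is tagged tensorlayer/hyperpose (substitute your local tag), that the working directory is /hyperpose/build as noted above, and that a GPU runtime is configured; --imshow=false disables the GUI window since the container has no display:

# Image name is an assumption; use your own tag if different.
docker run --rm --gpus all tensorlayer/hyperpose --help
docker run --rm --gpus all tensorlayer/hyperpose --source=../data/media --imshow=false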

| Flag | Meaning | Default |
| --- | --- | --- |
| model | Path to your model. | ../data/models/TinyVGG-V1-HW=256x384.uff |
| source | Path to your source. The source can be a folder path, a video path, an image path, or the keyword camera to open your camera. | ../data/media/video.avi |
| post | Post-processing method. This can be paf or ppn. | paf |
| keep_ratio | The DNN takes a fixed input size, so images must be resized to fit that resolution. To avoid distorting the original human scale, the resize can be done with padding; enabling this flag lets you run inference without breaking the original aspect ratio (good for accuracy). | true |
| w | The input width of your model. The trained models we provide each require a specific input resolution. | 384 (for the tiny-vgg model) |
| h | The input height of your model. | 256 (for the tiny-vgg model) |
| max_batch_size | Maximum batch size for the inference engine to execute. | 8 |
| runtime | Which runtime type to use: operator or stream. If you want to open your camera or produce an imshow window, use operator. For better processing throughput on videos, use stream. | operator |
| imshow | Whether to open an imshow window. | true |
| saving_prefix | The output media resources are named $(saving_prefix)_$(ID).$(format). | "output" |
| alpha | The weight of the keypoint visualization (from 0 to 1). | 0.5 |
| logging | Whether to print internal logging information. | false |
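
The flags can be combined freely. For example, the following sketch runs the default TinyVGG model on the sample media explicitly, raises the maximum batch size, and customizes the output prefix and overlay weight (all paths assume the default directory layout from the steps above):

./hyperpose-cli --source ../data/media \
                --model ../data/models/TinyVGG-V1-HW=256x384.uff \
                --w 384 --h 256 \
                --max_batch_size 16 \
                --saving_prefix my_output \
                --alpha 0.7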

Using a precise model

./hyperpose-cli --model ../data/models/openpose-thin-V2-HW=368x432.onnx --w 432 --h 368 

./hyperpose-cli --model ../data/models/openpose-coco-V2-HW=368x656.onnx --w 656 --h 368 

Using the PoseProposal model

./hyperpose-cli --model ../data/models/ppn-resnet50-V2-HW=384x384.onnx --w 384 --h 384 --post=ppn

Convert models into TensorRT Engine Protobuf format

You may find that it takes one or two minutes before the real prediction starts. This is because TensorRT profiles the model to build an optimized runtime engine.

To save the model conversion time, you can convert it in advance.

./example.gen_serialized_engine --model_file ../data/models/openpose-coco-V2-HW=368x656.onnx --input_width 656 --input_height 368 --max_batch_size 20
# You'll get ../data/models/openpose-coco-V2-HW=368x656.onnx.trt
# If you only want to do inference on single images (batch size = 1), use `--max_batch_size 1`; this improves the engine's performance.

# Use the converted model to do prediction
./hyperpose-cli --model ../data/models/openpose-coco-V2-HW=368x656.onnx.trt --w 656 --h 368

Currently, we run the models in TensorRT float32 mode. Other data types are not supported (contributions are welcome!).

Predict a video using Operator API

./hyperpose-cli --runtime=operator --source=../data/media/video.avi

The output video will be in the build folder.

Predict a video using Stream API (faster)

./hyperpose-cli --runtime=stream --source=../data/media/video.avi
# In the Stream API, the imshow functionality is disabled.
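
To see the throughput difference on your machine, you can time the two runtimes on the same clip (a rough comparison only; results depend on your GPU and the video length):

time ./hyperpose-cli --runtime=operator --source=../data/media/video.avi
time ./hyperpose-cli --runtime=stream   --source=../data/media/video.avi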

Play with camera

./hyperpose-cli --source=camera
# Note that camera mode is not compatible with the Stream API. For real-time inference on your camera, use the Operator API, which is designed for it.