MIVisionX Python Inference Application

MIVisionX Inference Application using pre-trained ONNX/NNEF/Caffe models.

Pre-trained models in ONNX, NNEF, and Caffe formats are supported by MIVisionX. The application first converts a pre-trained model to the AMD Neural Net Intermediate Representation (NNIR), AMD's internal open format. The optimizer then walks the NNIR graph and applies optimizations so the model can be deployed on the target hardware as efficiently as possible. Finally, the NNIR is converted into OpenVX C code, which is compiled and wrapped with a Python API so it can run on any targeted hardware.
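
Under the hood this corresponds to the MIVisionX model compiler flow. A rough manual equivalent for an ONNX model is sketched below; the script names and arguments come from MIVisionX's model compiler and should be treated as assumptions about that installation, not commands shipped with this application:

    # convert the pre-trained ONNX model into an AMD NNIR folder (assumed MIVisionX model compiler script)
    python onnx_to_nnir.py model.onnx nnir_model
    # generate OpenVX C code from the NNIR graph; the result is compiled and wrapped with a Python API
    python nnir_to_openvx.py nnir_model openvx_model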

Prerequisites

MIVisionX installed, including its model compiler and OpenVX runtime, which this application uses for model conversion and inference.

Usage

usage: mivisionx_classifier.py  [-h] 
                                --model_format MODEL_FORMAT 
                                --model_name MODEL_NAME 
                                --model MODEL 
                                --model_input_dims MODEL_INPUT_DIMS 
                                --model_output_dims MODEL_OUTPUT_DIMS 
                                --label LABEL 
                                [--add ADD]
                                [--multiply MULTIPLY] 
                                [--video VIDEO]
                                [--capture CAPTURE] 
                                [--replace REPLACE]
                                [--verbose VERBOSE]

Usage help

  -h, --help            show help message and exit
  --model_format        pre-trained model format, options: caffe/onnx/nnef    [required]
  --model_name          model name                                            [required]
  --model               pre-trained model file                                [required]
  --model_input_dims    c,h,w - channel,height,width                          [required]
  --model_output_dims   c,h,w - channel,height,width                          [required]
  --label               labels text file                                      [required]
  --add                 input preprocessing factor               [optional - default:0 ]
  --multiply            input preprocessing factor               [optional - default:1 ]
  --video               video file for classification            [optional - default:'']
  --capture             capture device id                        [optional - default:0 ]
  --replace             replace/overwrite model                  [optional - default:no]
  --verbose             verbose output                           [optional - default:no]
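
For example, a classifier for an ONNX model could be launched as shown below; the model name, file names, and dimensions are illustrative and not part of this repository:

    python mivisionx_classifier.py --model_format onnx \
                                   --model_name squeezenet \
                                   --model squeezenet.onnx \
                                   --model_input_dims 3,224,224 \
                                   --model_output_dims 1000,1,1 \
                                   --label labels.txt \
                                   --capture 0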
