MMYOLO Model Inference with DeepStream

This project demonstrates how to run inference on MMYOLO models with customized parsers in the DeepStream SDK.

Prerequisites

1. Install NVIDIA Driver and CUDA

First, follow the official documentation and instructions to install the NVIDIA graphics driver and a CUDA version matched to your GPU and target NVIDIA AIoT device.
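
As a quick sanity check (assuming the CUDA toolkit's bin directory is on your PATH), you can confirm that the driver and toolkit are visible:

nvidia-smi
nvcc --version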

2. Install DeepStream SDK

Second, follow the official instructions to download and install the DeepStream SDK. At the time of writing, the current stable version of DeepStream is v6.2.
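
Once installed, you can verify the DeepStream installation and the versions of its dependencies with the bundled reference app:

deepstream-app --version-all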

3. Generate TensorRT Engine

Since DeepStream builds on top of several NVIDIA libraries, you first need to convert your trained MMYOLO models to TensorRT engine files. We strongly recommend trying the supported TensorRT deployment solution in EasyDeploy.
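
For illustration, if you have already exported your model to ONNX (for example via EasyDeploy), a minimal conversion with TensorRT's trtexec could look like the following; end2end.onnx and end2end.engine are placeholder file names:

trtexec --onnx=end2end.onnx --saveEngine=end2end.engine --fp16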

Build and Run

Make sure the converted TensorRT engine is placed in the deepstream folder, as the config expects. Create your own model config file and point the config-file parameter in deepstream_app_config.txt to the model you want to run, as sketched below.
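
As a minimal sketch (all file names, paths, and the parser function name below are assumptions and must match your own engine, label file, and the symbol actually exported by nvdsparsebbox_mmyolo.cpp), the relevant entries could look like this:

# excerpt from deepstream_app_config.txt (hypothetical values)
[primary-gie]
enable=1
config-file=configs/config_infer_rtmdet.txt

# excerpt from configs/config_infer_rtmdet.txt (hypothetical values)
[property]
model-engine-file=../end2end.engine
labelfile-path=../coco_labels.txt
num-detected-classes=80
network-mode=2
parse-bbox-func-name=NvDsInferParseCustomMMYOLO
custom-lib-path=../libnvdsparsebbox_mmyolo.so

Here network-mode=2 selects FP16 inference, and custom-lib-path should point to wherever make install places the parser library.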

mkdir build && cd build
cmake ..
make -j$(nproc) && make install

Then you can run inference with the following command:

deepstream-app -c deepstream_app_config.txt

Code Structure

β”œβ”€β”€ deepstream
β”‚   β”œβ”€β”€ configs                   # config files for MMYOLO models
β”‚   β”‚   └── config_infer_rtmdet.txt
β”‚   β”œβ”€β”€ custom_mmyolo_bbox_parser # custom parser converting MMYOLO outputs to DeepStream formats
β”‚   β”‚   └── nvdsparsebbox_mmyolo.cpp
β”‚   β”œβ”€β”€ CMakeLists.txt
β”‚   β”œβ”€β”€ coco_labels.txt           # labels for COCO detection
β”‚   β”œβ”€β”€ deepstream_app_config.txt # DeepStream reference app config for MMYOLO models
β”‚   β”œβ”€β”€ README_zh-CN.md
β”‚   └── README.md