Showing 3 changed files with 391 additions and 145 deletions.
notebooks/287-yolov9-optimization/287-yolov9-optimization.ipynb: 505 changes (361 additions, 144 deletions). Large diffs are not rendered by default.
@@ -0,0 +1,27 @@
# Convert and Optimize YOLOv9 with OpenVINO™

<p align="center">
<img src="https://github.com/openvinotoolkit/openvino_notebooks/assets/29454499/ae3a7653-eead-4c41-9cad-a7c95d3a4578"/>
</p>

YOLOv9 marks a significant advancement in real-time object detection, introducing groundbreaking techniques such as Programmable Gradient Information (PGI) and the Generalized Efficient Layer Aggregation Network (GELAN). The model demonstrates remarkable improvements in efficiency, accuracy, and adaptability, setting new benchmarks on the MS COCO dataset. More details about the model can be found in the [paper](https://arxiv.org/abs/2402.13616) and the [original repository](https://github.com/WongKinYiu/yolov9).
## Notebook Contents

This tutorial provides step-by-step instructions on how to run and optimize the PyTorch YOLOv9 model with OpenVINO.

The tutorial consists of the following steps (a short, illustrative code sketch covering conversion and quantization follows the list):
- Prepare PyTorch model
- Convert PyTorch model to OpenVINO IR
- Run model inference with OpenVINO
- Prepare and run optimization pipeline
- Compare performance of the FP32 and quantized models
- Run optimized model inference on video
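
The snippet below is a minimal sketch of the conversion, quantization, and inference steps, not the notebook's exact code: the ONNX export path, the input shape, and the random calibration data are placeholder assumptions.

```python
import numpy as np
import openvino as ov
import nncf

core = ov.Core()

# Convert the exported model to OpenVINO IR.
# "yolov9-c.onnx" is a placeholder for a model exported from the YOLOv9 repository.
ov_model = ov.convert_model("yolov9-c.onnx")
ov.save_model(ov_model, "yolov9-c_fp32.xml")

# Quantize to INT8 with NNCF. Random tensors stand in for preprocessed
# calibration images of shape (1, 3, 640, 640), float32, scaled to [0, 1].
calibration_images = [np.random.rand(1, 3, 640, 640).astype(np.float32) for _ in range(10)]
quantized_model = nncf.quantize(ov_model, nncf.Dataset(calibration_images))
ov.save_model(quantized_model, "yolov9-c_int8.xml")

# Compile the quantized model and run inference on one input.
compiled = core.compile_model(quantized_model, "CPU")
result = compiled(calibration_images[0])[compiled.output(0)]
print(result.shape)
```

With real data, the calibration list would be replaced by preprocessed frames from the validation set, and the saved FP32 and INT8 IR files can then be benchmarked against each other, for example with OpenVINO's benchmark_app.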
## Installation Instructions

This is a self-contained example that relies solely on its own code.<br/>
We recommend running the notebook in a virtual environment. You only need a Jupyter server to start.
For details, please refer to the [Installation Guide](../../README.md).