Merge pull request #331 from yatarkan/yt/sd-to-optimum-cli
Move from image_generation python conversion scripts to optimum-cli
ilya-lavrenov authored Apr 7, 2024
2 parents bcfc124 + 2865224 commit 99f9a32
Showing 7 changed files with 270 additions and 159 deletions.
115 changes: 73 additions & 42 deletions .github/workflows/stable_diffusion_1_5_cpp.yml
@@ -1,84 +1,115 @@
 name: stable_diffusion_1_5_cpp
 
 on:
   pull_request:
     paths:
       - image_generation/stable_diffusion_1_5/cpp/**
       - image_generation/common/**
       - .github/workflows/stable_diffusion_1_5_cpp.yml
       - thirdparty/openvino_tokenizers
 
+env:
+  working_directory: "./image_generation/stable_diffusion_1_5/cpp/"
+
 concurrency:
   group: ${{ github.workflow }}-${{ github.ref }}
   cancel-in-progress: true
 
 jobs:
   stable_diffusion_1_5_cpp-linux:
     runs-on: ubuntu-20.04-8-cores
+    defaults:
+      run:
+        # Do not ignore bash profile files. From:
+        # https://github.com/marketplace/actions/setup-miniconda#important
+        shell: bash -l {0}
     steps:
       - uses: actions/checkout@v4
         with:
          submodules: recursive
-      - uses: actions/setup-python@v4
-        with:
-          python-version: 3.8
-      - name: Install OpenVINO
-        run: |
-          set -e
-          mkdir openvino
-          curl https://storage.openvinotoolkit.org/repositories/openvino/packages/nightly/2024.1.0-14645-e6dc0865128/l_openvino_toolkit_ubuntu20_2024.1.0.dev20240304_x86_64.tgz | tar --directory ./openvino/ --strip-components 1 -xz
-          sudo ./openvino/install_dependencies/install_openvino_dependencies.sh
-      - name: Download / convert models
-        run: |
-          set -e
-          source ./openvino/setupvars.sh
-          cd ./image_generation/stable_diffusion_1_5/cpp/scripts/
-          python -m pip install -U pip
-          python -m pip install -r ./requirements.txt
-          python -m pip install ../../../../thirdparty/openvino_tokenizers/
-          python convert_model.py -sd runwayml/stable-diffusion-v1-5 -b 1 -t FP16 -dyn True
+
+      - name: Setup conda
+        uses: conda-incubator/setup-miniconda@v3
+        with:
+          miniconda-version: "latest"
+          activate-environment: openvino_sd_cpp
+          python-version: "3.10"
+
+      - name: Install OpenVINO and other conda dependencies
+        run: |
+          conda activate openvino_sd_cpp
+          conda install -c conda-forge openvino c-compiler cxx-compiler make cmake
+          conda env config vars set LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH
+
+      - name: Install python dependencies
+        working-directory: ${{ env.working_directory }}
+        run: |
+          conda activate openvino_sd_cpp
+          python -m pip install -r requirements.txt
+          python -m pip install ../../../thirdparty/openvino_tokenizers/[transformers]
+
+      - name: Download and convert model and tokenizer
+        working-directory: ${{ env.working_directory }}
+        run: |
+          conda activate openvino_sd_cpp
+          export MODEL_PATH="models/stable_diffusion_v1_5_ov/FP16"
+          optimum-cli export openvino --model runwayml/stable-diffusion-v1-5 --task stable-diffusion --convert-tokenizer --weight-format fp16 $MODEL_PATH
+          convert_tokenizer $MODEL_PATH/tokenizer/ --tokenizer-output-type i32 -o $MODEL_PATH/tokenizer/
+
       - name: Build app
+        working-directory: ${{ env.working_directory }}
         run: |
-          set -e
-          source ./openvino/setupvars.sh
-          cd ./image_generation/stable_diffusion_1_5/cpp/
+          conda activate openvino_sd_cpp
           cmake -DCMAKE_BUILD_TYPE=Release -S ./ -B ./build/
           cmake --build ./build/ --config Release --parallel
+
       - name: Run app
+        working-directory: ${{ env.working_directory }}
         run: |
-          set -e
-          source ./openvino/setupvars.sh
-          cd ./image_generation/stable_diffusion_1_5/cpp/build
-          ./stable_diffusion -m ../scripts/runwayml/stable-diffusion-v1-5 -t FP16_dyn
+          conda activate openvino_sd_cpp
+          ./build/stable_diffusion -m ./models/stable_diffusion_v1_5_ov -t FP16
+
   stable_diffusion_1_5_cpp-windows:
     runs-on: windows-latest
     steps:
       - uses: actions/checkout@v4
         with:
           submodules: recursive
-      - uses: actions/setup-python@v4
-        with:
-          python-version: 3.8
-      - name: Initialize OpenVINO
-        shell: cmd
-        run: |
-          curl --output ov.zip https://storage.openvinotoolkit.org/repositories/openvino/packages/nightly/2024.1.0-14645-e6dc0865128/w_openvino_toolkit_windows_2024.1.0.dev20240304_x86_64.zip
-          unzip ov.zip
-      - name: Download / convert a model / tokenizer
-        shell: cmd
-        run: |
-          call w_openvino_toolkit_windows_2024.1.0.dev20240304_x86_64/setupvars.bat
-          cd ./image_generation/stable_diffusion_1_5/cpp/scripts/
-          python -m pip install -r ./requirements.txt
-          python -m pip install ../../../../thirdparty/openvino_tokenizers/
-          python convert_model.py -sd runwayml/stable-diffusion-v1-5 -b 1 -t FP16 -dyn True
+
+      - name: Setup conda
+        uses: conda-incubator/setup-miniconda@v3
+        with:
+          miniconda-version: "latest"
+          activate-environment: openvino_sd_cpp
+          python-version: "3.10"
+
+      - name: Install OpenVINO and other conda dependencies
+        run: |
+          conda activate openvino_sd_cpp
+          conda install -c conda-forge openvino c-compiler cxx-compiler make cmake
+
+      - name: Install python dependencies
+        working-directory: ${{ env.working_directory }}
+        run: |
+          conda activate openvino_sd_cpp
+          python -m pip install -r requirements.txt
+          python -m pip install ../../../thirdparty/openvino_tokenizers/[transformers]
+
+      - name: Download and convert model and tokenizer
+        working-directory: ${{ env.working_directory }}
+        run: |
+          conda activate openvino_sd_cpp
+          $env:MODEL_PATH='models/stable_diffusion_v1_5_ov/FP16'
+          optimum-cli export openvino --model runwayml/stable-diffusion-v1-5 --task stable-diffusion --convert-tokenizer --weight-format fp16 $env:MODEL_PATH
+          convert_tokenizer $env:MODEL_PATH/tokenizer/ --tokenizer-output-type i32 -o $env:MODEL_PATH/tokenizer/
+
       - name: Build app
-        shell: cmd
+        working-directory: ${{ env.working_directory }}
         run: |
-          call w_openvino_toolkit_windows_2024.1.0.dev20240304_x86_64/setupvars.bat
-          cd ./image_generation/stable_diffusion_1_5/cpp/
+          conda activate openvino_sd_cpp
           cmake -DCMAKE_BUILD_TYPE=Release -S ./ -B ./build/
           cmake --build ./build/ --config Release --parallel
+
       - name: Run app
-        shell: cmd
+        working-directory: ${{ env.working_directory }}
         run: |
-          call w_openvino_toolkit_windows_2024.1.0.dev20240304_x86_64/setupvars.bat
-          cd ./image_generation/stable_diffusion_1_5/cpp/build/
-          call "./Release/stable_diffusion.exe" -m ../scripts/runwayml/stable-diffusion-v1-5 -t FP16_dyn
+          conda activate openvino_sd_cpp
+          & "./build/Release/stable_diffusion.exe" -m ./models/stable_diffusion_v1_5_ov -t FP16 --dynamic
3 changes: 3 additions & 0 deletions image_generation/stable_diffusion_1_5/cpp/.gitignore
@@ -0,0 +1,3 @@
+build
+images
+models
49 changes: 27 additions & 22 deletions image_generation/stable_diffusion_1_5/cpp/README.md
@@ -6,6 +6,10 @@ The pure C++ text-to-image pipeline, driven by the OpenVINO native C++ API for S
 ## Step 1: Prepare build environment
 
 Prerequisites:
+- Conda ([installation guide](https://conda.io/projects/conda/en/latest/user-guide/install/index.html))
+
+
 C++ Packages:
 * [CMake](https://cmake.org/download/): Cross-platform build tool
 * [OpenVINO](https://docs.openvino.ai/install): Model inference
@@ -14,7 +18,9 @@ Prepare a python environment and install dependencies:
 ```shell
 conda create -n openvino_sd_cpp python==3.10
 conda activate openvino_sd_cpp
-conda install openvino c-compiler cxx-compiler make
+conda install -c conda-forge openvino c-compiler cxx-compiler make cmake
+# Ensure that Conda standard libraries are used
+conda env config vars set LD_LIBRARY_PATH=$CONDA_PREFIX/lib:$LD_LIBRARY_PATH
 ```
 
 ## Step 2: Convert Stable Diffusion v1.5 and Tokenizer models
@@ -24,32 +30,30 @@ conda install openvino c-compiler cxx-compiler make
 1. Install dependencies to import models from HuggingFace:
 ```shell
 git submodule update --init
+# Reactivate Conda environment after installing dependencies and setting env vars
+conda activate openvino_sd_cpp
-python -m pip install -r scripts/requirements.txt
+python -m pip install -r requirements.txt
+python -m pip install ../../../thirdparty/openvino_tokenizers/[transformers]
 ```
 2. Download a huggingface SD v1.5 model like:
 - [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
 - [dreamlike-anime-1.0](https://huggingface.co/dreamlike-art/dreamlike-anime-1.0) to run Stable Diffusion with LoRA adapters.
 
-Example command:
-```shell
-huggingface-cli download --resume-download --local-dir-use-symlinks False dreamlike-art/dreamlike-anime-1.0 --local-dir models/dreamlike-anime-1.0
-```
-
-Please, refer to the official website for [model downloading](https://huggingface.co/docs/hub/models-downloading) to read more details.
-
-3. Run model conversion script to convert PyTorch model to OpenVINO IR via [optimum-intel](https://github.com/huggingface/optimum-intel). Please, use the script `scripts/convert_model.py` to convert the model into `FP16_static` or `FP16_dyn`, which will be saved into the `models` folder:
-```shell
-cd scripts
-python convert_model.py -b 1 -t FP16 -sd ../models/dreamlike-anime-1.0 # to convert to models with static shapes
-python convert_model.py -b 1 -t FP16 -sd ../models/dreamlike-anime-1.0 -dyn True # to keep models with dynamic shapes
-python convert_model.py -b 1 -t INT8 -sd ../models/dreamlike-anime-1.0 -dyn True # to compress the models to INT8
-```
+Example command for downloading and exporting FP16 model:
+```shell
+export MODEL_PATH="models/dreamlike_anime_1_0_ov/FP16"
+# Using optimum-cli for exporting model to OpenVINO format
+optimum-cli export openvino --model dreamlike-art/dreamlike-anime-1.0 --task stable-diffusion --convert-tokenizer --weight-format fp16 $MODEL_PATH
+# Converting tokenizer manually (`--convert-tokenizer` flag of `optimum-cli` results in "OpenVINO Tokenizer export for CLIPTokenizer is not supported.")
+convert_tokenizer $MODEL_PATH/tokenizer/ --tokenizer-output-type i32 -o $MODEL_PATH/tokenizer/
+```
+
+You can also choose other precision and export FP32 or INT8 model.
 
 Please, refer to the official website for [🤗 Optimum](https://huggingface.co/docs/optimum/main/en/index) and [optimum-intel](https://github.com/huggingface/optimum-intel) to read more details.
 
 > [!NOTE]
->Now the pipeline support batch size = 1 only, i.e. static model `(1, 3, 512, 512)`
+> Now the pipeline support batch size = 1 only, i.e. static model `(1, 3, 512, 512)`
 
 ### LoRA enabling with safetensors
 
@@ -70,7 +74,7 @@ cmake --build build --parallel
 
 ## Step 4: Run Pipeline
 ```shell
-./stable_diffusion [-p <posPrompt>] [-n <negPrompt>] [-s <seed>] [--height <output image>] [--width <output image>] [-d <device>] [-r <readNPLatent>] [-l <lora.safetensors>] [-a <alpha>] [-h <help>] [-m <modelPath>] [-t <modelType>]
+./build/stable_diffusion [-p <posPrompt>] [-n <negPrompt>] [-s <seed>] [--height <output image>] [--width <output image>] [-d <device>] [-r <readNPLatent>] [-l <lora.safetensors>] [-a <alpha>] [-h <help>] [-m <modelPath>] [-t <modelType>] [--dynamic]
 
 Usage:
   stable_diffusion [OPTION...]
@@ -86,8 +90,9 @@
 * `--width arg` Width of output image (default: 512)
 * `-c, --useCache` Use model caching
 * `-r, --readNPLatent` Read numpy generated latents from file
-* `-m, --modelPath arg` Specify path of SD model IR (default: ../models/dreamlike-anime-1.0)
-* `-t, --type arg` Specify the type of SD model IR (FP16_static or FP16_dyn) (default: FP16_static)
+* `-m, --modelPath arg` Specify path of SD model IR (default: ../models/dreamlike_anime_1_0_ov)
+* `-t, --type arg` Specify the type of SD model IRs (FP32, FP16 or INT8) (default: FP16)
+* `--dynamic` Specify the model input shape to use dynamic shape
 * `-l, --loraPath arg` Specify path of lora file. (*.safetensors). (default: )
 * `-a, --alpha arg` alpha for lora (default: 0.75)
 * `-h, --help` Print usage
@@ -103,15 +108,15 @@ Negative prompt: (empty, here couldn't use OV tokenizer, check the issues for details)
 
 Read the numpy latent instead of C++ std lib for the alignment with Python pipeline
 
-* Generate image without lora `./stable_diffusion -r`
+* Generate image without lora `./build/stable_diffusion -r`
 
 ![](./without_lora.bmp)
 
-* Generate image with soulcard lora `./stable_diffusion -r`
+* Generate image with soulcard lora `./build/stable_diffusion -r`
 
 ![](./soulcard_lora.bmp)
 
-* Generate different size image with dynamic model (C++ lib generated latent): `./stable_diffusion -m ../models/dreamlike-anime-1.0 -t FP16_dyn --height 448 --width 704`
+* Generate different size image with dynamic model (C++ lib generated latent): `./build/stable_diffusion -m ./models/dreamlike_anime_1_0_ov -t FP16 --dynamic --height 448 --width 704`
 
 ![](./704x448.bmp)
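The new `--dynamic` flag exists because the exported IR is otherwise frozen to the batch-1 static shape `(1, 3, 512, 512)` noted in the README. A small sketch of why other sizes need it: SD v1.5 works in a latent space with 4 channels and an 8× VAE downscale (standard for this model family), so a request like `--height 448 --width 704` implies a latent geometry the static model cannot accept.

```python
def latent_shape(batch: int, height: int, width: int) -> tuple:
    """Latent tensor shape for SD v1.5: 4 channels, 8x spatial downscale."""
    # Image sizes must be multiples of 8 so the VAE downscale is exact.
    assert height % 8 == 0 and width % 8 == 0, "height/width must be multiples of 8"
    return (batch, 4, height // 8, width // 8)

print(latent_shape(1, 512, 512))  # (1, 4, 64, 64) -- the default static shape
print(latent_shape(1, 448, 704))  # (1, 4, 56, 88) -- only a dynamic-shape model fits
```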
46 changes: 0 additions & 46 deletions image_generation/stable_diffusion_1_5/cpp/scripts/convert_model.py

This file was deleted.

