
Notebook to run inference for LCM using Optimum Intel with OpenVINO #1696

Merged
merged 28 commits into from
Mar 6, 2024

Conversation

DimaPastushenkov
Contributor

The notebook allows running inference with the standard Diffusers pipeline and with the Optimum Intel pipeline on CPU and GPU.


Check out this pull request on  ReviewNB


@DimaPastushenkov DimaPastushenkov marked this pull request as draft February 12, 2024 07:20
@raymondlo84 raymondlo84 self-assigned this Feb 13, 2024
@raymondlo84 raymondlo84 requested a review from eaidova February 13, 2024 19:21
@raymondlo84
Collaborator

Who else shall we add as reviewer?

@eaidova
Collaborator

eaidova commented Feb 14, 2024

@DimaPastushenkov thanks, a few important general notes:

  1. We do not store images in the repository; if they need to be embedded in markdown, we prefer downloading them or linking to them as external resources.
  2. We avoid naming the Core object `ie` in code, as `ie` is associated with the old API 1.0, and for new users it raises more questions than it answers.
  3. We prefer installing optimum without the openvino extra, as one day our required version and optimum-intel's may diverge.
  4. What is good for a specific training section or blog post is not always good for regular users. The notebook assumes that both a CPU and a GPU exist in the user's setup, but in practice that is not always true: the user may have a GPU from a different manufacturer, or multiple cards, or their platform may not support GPU inference via OpenVINO at all (e.g. ARM). Instead of loading on specific devices, we prefer to let the user choose via a device selection widget. Please rewrite using it, or at least make the GPU part optional (skip execution if there is no GPU among the available devices).
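The device-selection pattern suggested in point 4 could look roughly like this (a minimal sketch; the try/except fallbacks are only so the snippet also runs where OpenVINO or ipywidgets are not installed, and the widget in the final notebook may differ):

```python
# Sketch of a device-selection widget for the notebook; the fallbacks
# are purely illustrative so the snippet runs outside the notebook too.
try:
    import openvino as ov
    available_devices = ov.Core().available_devices
except ImportError:
    available_devices = ["CPU"]  # illustration fallback

# "AUTO" lets OpenVINO pick the most suitable device itself.
options = available_devices + ["AUTO"]

try:
    import ipywidgets as widgets
    device = widgets.Dropdown(options=options, value="AUTO", description="Device:")
    # In a notebook, displaying `device` renders the dropdown, and
    # device.value is later used when compiling the pipeline.
except ImportError:
    device = None  # nothing to display outside a notebook

print(options)
```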

@DimaPastushenkov
Contributor Author

@eaidova, I have made all the requested changes and set the PR to "Ready for review".
One image is still stored in the repo, but I have significantly reduced its size. Please let me know if that is acceptable.

@DimaPastushenkov DimaPastushenkov marked this pull request as ready for review February 20, 2024 11:26
@eaidova
Collaborator

eaidova commented Feb 20, 2024

@DimaPastushenkov my suggestion is to replace it with
https://github.com/openvinotoolkit/openvino_notebooks/assets/29454499/ede014d1-64cd-4692-9f6f-59fd66083807

(It is the same image, just uploaded to GitHub in a different way. To do that, put the image into any comment field on GitHub; it will be uploaded automatically, and in preview mode you can see the link to your image.)

@eaidova eaidova requested a review from a team February 20, 2024 13:40
@eaidova eaidova requested review from apaniukov, itrushkin and aleksandr-mokrov and removed request for a team February 20, 2024 13:40
@DimaPastushenkov
Contributor Author

DimaPastushenkov commented Feb 22, 2024

  1. What is good for a specific training section or blog post is not always good for regular users. The notebook assumes that both a CPU and a GPU exist in the user's setup, but in practice that is not always true: the user may have a GPU from a different manufacturer, or multiple cards, or their platform may not support GPU inference via OpenVINO at all (e.g. ARM). Instead of loading on specific devices, we prefer to let the user choose via a device selection widget. Please rewrite using it, or at least make the GPU part optional (skip execution if there is no GPU among the available devices).

@eaidova , I have made the GPU part optional. Please let me know whether it would be better to make the device selectable via a device selection widget, and I will rework the notebook accordingly.
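Making the GPU part optional can be done with a simple guard on the available devices (a sketch; in the notebook `available_devices` would come from `ov.Core().available_devices`, and the try/except is only so the snippet runs without OpenVINO installed):

```python
# Guard the GPU-specific cells so they are skipped when no GPU is present.
try:
    import openvino as ov
    available_devices = ov.Core().available_devices
except ImportError:
    available_devices = ["CPU"]  # illustration fallback when OpenVINO is absent

run_gpu_section = "GPU" in available_devices
if run_gpu_section:
    print("GPU device found: running the GPU part of the notebook")
else:
    print("No GPU device available: skipping the GPU part")
```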


review-notebook-app bot commented Feb 22, 2024

View / edit / reply to this conversation on ReviewNB

aleksandr-mokrov commented on 2024-02-22T13:10:38Z
----------------------------------------------------------------

Line #1.    %pip install -q "optimum-intel[diffusers]@git+https://github.com/huggingface/optimum-intel.git" "ipywidgets" "transformers>=4.33.0" --extra-index-url https://download.pytorch.org/whl/cpu

Add openvino



review-notebook-app bot commented Feb 22, 2024

View / edit / reply to this conversation on ReviewNB

aleksandr-mokrov commented on 2024-02-22T13:10:39Z
----------------------------------------------------------------

Line #1.    from openvino.runtime import Core

Please use

import openvino as ov

core = ov.Core()




review-notebook-app bot commented Feb 22, 2024

View / edit / reply to this conversation on ReviewNB

aleksandr-mokrov commented on 2024-02-22T13:10:40Z
----------------------------------------------------------------

Line #1.    from optimum.intel import OVLatentConsistencyModelPipeline

It causes error:

No module named 'onnx'

If onnx is installed:

KeyError: 'clip-text-model is not supported yet with the onnx backend. Only [] are supported. If you want to support onnx please propose a PR or open up an issue.'



@DimaPastushenkov
Contributor Author

View / edit / reply to this conversation on ReviewNB

aleksandr-mokrov commented on 2024-02-22T13:10:40Z
----------------------------------------------------------------

Line #1. from optimum.intel import OVLatentConsistencyModelPipeline
It causes error:

No module named 'onnx'
If onnx is installed:

KeyError: 'clip-text-model is not supported yet with the onnx backend. Only [] are supported. If you want to support onnx please propose a PR or open up an issue.'

@aleksandr-mokrov , I cannot reproduce the issue on Linux or on Windows. Could you please let me know which environment you use?
This is the output I get:
Using framework PyTorch: 2.1.0+cpu
Using framework PyTorch: 2.1.0+cpu
Using framework PyTorch: 2.1.0+cpu
Using framework PyTorch: 2.1.0+cpu
Compiling the vae_decoder to CPU ...
Compiling the unet to CPU ...
Compiling the text_encoder to CPU ...
Compiling the vae_encoder to CPU ...


review-notebook-app bot commented Feb 28, 2024

View / edit / reply to this conversation on ReviewNB

aleksandr-mokrov commented on 2024-02-28T11:14:33Z
----------------------------------------------------------------

Line #1.    %pip install -q "openvino>=2023.3.0

Add " to the end


@aleksandr-mokrov
Contributor

aleksandr-mokrov commented Feb 28, 2024

(quoting the ReviewNB `No module named 'onnx'` exchange and @DimaPastushenkov's reply above)

I created a new empty virtual environment; all packages were installed by running this notebook. Ubuntu, Python 3.10, torch==2.2.1+cpu.

After installing onnx and restarting the kernel it works. Please add onnx to the installation.

@DimaPastushenkov
Contributor Author

(quoting the same `No module named 'onnx'` exchange and @aleksandr-mokrov's follow-up above)

I have added onnx to the dependencies
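Putting the reviewers' requests together (add `openvino`, add `onnx`), the notebook's install cell would then look roughly like this (a sketch; the exact package pins in the merged notebook may differ):

```shell
%pip install -q "openvino>=2023.3.0" onnx "ipywidgets" "transformers>=4.33.0" \
    "optimum-intel[diffusers]@git+https://github.com/huggingface/optimum-intel.git" \
    --extra-index-url https://download.pytorch.org/whl/cpu
```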


review-notebook-app bot commented Mar 4, 2024

View / edit / reply to this conversation on ReviewNB

eaidova commented on 2024-03-04T09:44:26Z
----------------------------------------------------------------

Something is wrong with the formatting; the text should be placed on the next line after "back to top".



review-notebook-app bot commented Mar 4, 2024

View / edit / reply to this conversation on ReviewNB

eaidova commented on 2024-03-04T09:44:27Z
----------------------------------------------------------------

Line #5.    pipeline.save_pretrained("./cpu")

Why is the save directory named cpu, and why do you need to save the PyTorch model to disk at all if you always load it from the hub (it takes an extra several GB of disk space)?



review-notebook-app bot commented Mar 4, 2024

View / edit / reply to this conversation on ReviewNB

eaidova commented on 2024-03-04T09:44:28Z
----------------------------------------------------------------

Line #6.    image.save("image_cpu.png")

The naming here may also mislead users, as it is hard to tell whether the image was generated by OpenVINO on CPU or by PyTorch.



review-notebook-app bot commented Mar 4, 2024

View / edit / reply to this conversation on ReviewNB

eaidova commented on 2024-03-04T09:44:29Z
----------------------------------------------------------------

gc.collect();

To suppress printing the returned value (the number of collected objects), it is recommended to add ; at the end.
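For context, a trailing semicolon in a Jupyter cell suppresses the echo of the last expression's value; `gc.collect()` returns an integer that would otherwise be displayed:

```python
import gc

# gc.collect() returns the number of unreachable objects it collected;
# in a Jupyter cell this integer would be echoed as the cell output.
collected = gc.collect()
print(collected >= 0)  # True

# In a notebook cell, a trailing semicolon suppresses the echo:
# gc.collect();
```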



review-notebook-app bot commented Mar 4, 2024

View / edit / reply to this conversation on ReviewNB

eaidova commented on 2024-03-04T09:44:30Z
----------------------------------------------------------------

There is also a formatting issue here.


@eaidova eaidova merged commit 5b8d9cc into openvinotoolkit:main Mar 6, 2024
13 of 15 checks passed