Commit 593a75b

Corrections after code review
DimaPastushenkov committed Mar 5, 2024
1 parent 67bcc25 commit 593a75b
Showing 1 changed file with 6 additions and 6 deletions.
@@ -118,6 +118,7 @@
    "source": [
     "### Using the full-precision model on CPU with `LatentConsistencyModelPipeline`\n",
     "[back to top ⬆️](#Table-of-contents:)\n",
+    "\n",
     "The standard pipeline for the Latent Consistency Model (LCM) from the Diffusers library is used here. For more information, please refer to https://huggingface.co/docs/diffusers/en/api/pipelines/latent_consistency_models\n"
    ]
   },
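
For context, here is a minimal end-to-end sketch of what the affected cells do after this commit, assembled from the hunks in this diff; the prompt string is a hypothetical placeholder (the diff does not show the actual prompt):

```python
import gc

from diffusers import LatentConsistencyModelPipeline

# Load the full-precision (FP32) LCM pipeline; it runs on CPU by default.
pipeline = LatentConsistencyModelPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")

prompt = "a cup of coffee on a wooden table"  # hypothetical; not shown in this diff
image = pipeline(prompt=prompt, num_inference_steps=4, guidance_scale=8.0).images[0]
image.save("image_standard_pipeline.png")

# Release the pipeline before loading the OpenVINO variant; the trailing
# semicolon keeps gc.collect()'s integer return value out of notebook output.
del pipeline
gc.collect();
```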
@@ -146,9 +147,7 @@
     "from diffusers import LatentConsistencyModelPipeline\n",
     "import gc\n",
     "\n",
-    "pipeline = LatentConsistencyModelPipeline.from_pretrained(\"SimianLuo/LCM_Dreamshaper_v7\")\n",
-    "pipeline.save_pretrained(\"./cpu\")\n",
-    "\n"
+    "pipeline = LatentConsistencyModelPipeline.from_pretrained(\"SimianLuo/LCM_Dreamshaper_v7\")\n"
    ]
   },
   {
@@ -190,7 +189,7 @@
     "image = pipeline(\n",
     "    prompt=prompt, num_inference_steps=4, guidance_scale=8.0\n",
     ").images[0]\n",
-    "image.save(\"image_cpu.png\")\n",
+    "image.save(\"image_standard_pipeline.png\")\n",
     "image"
    ]
   },
@@ -213,7 +212,7 @@
    ],
    "source": [
     "del pipeline\n",
-    "gc.collect()"
+    "gc.collect();"
    ]
   },
   {
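
A plausible reading of the two `gc.collect()` → `gc.collect();` changes in this commit, which the diff itself does not explain: `gc.collect()` returns the number of unreachable objects it found, and as the last expression of a notebook cell that integer is echoed as cell output; a trailing semicolon suppresses the echo. A tiny illustration:

```python
import gc

gc.collect()   # as the last line of a notebook cell, echoes e.g. 42 as output
gc.collect();  # the trailing semicolon suppresses the echoed return value
```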
@@ -252,6 +251,7 @@
    "source": [
     "### Running inference using Optimum Intel `OVLatentConsistencyModelPipeline`\n",
     "[back to top ⬆️](#Table-of-contents:)\n",
+    "\n",
     "LCM inference is accelerated here using Optimum Intel with the OpenVINO backend. For more information, please refer to https://huggingface.co/docs/optimum/intel/inference#latent-consistency-models.\n",
     "The pretrained model in this notebook is available on Hugging Face in FP32 precision, so when CPU is selected as the device, inference runs at full precision. On GPU, accelerated AI inference is supported for the FP16 data type, while FP32 on GPU may produce a high memory footprint and latency; therefore, the default precision for GPU in OpenVINO is FP16. The OpenVINO GPU plugin converts FP32 to FP16 on the fly, so there is no need to do it manually.\n"
    ]
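
For reference, loading the same checkpoint through Optimum Intel looks roughly like the sketch below, following the documentation linked above; the `export=True` flag and the commented-out `.to("GPU")` call are standard Optimum Intel usage rather than something shown in this diff:

```python
from optimum.intel import OVLatentConsistencyModelPipeline

# Download the FP32 checkpoint and convert it to OpenVINO IR on the fly.
ov_pipeline = OVLatentConsistencyModelPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7", export=True
)

# Optional: run on GPU instead. The OpenVINO GPU plugin defaults to FP16,
# converting the FP32 weights on the fly as described above.
# ov_pipeline.to("GPU")

prompt = "a cup of coffee on a wooden table"  # hypothetical; not shown in this diff
image = ov_pipeline(prompt=prompt, num_inference_steps=4, guidance_scale=8.0).images[0]
image.save("image_ov_pipeline.png")
```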
@@ -330,7 +330,7 @@
    "outputs": [],
    "source": [
     "del ov_pipeline\n",
-    "gc.collect()"
+    "gc.collect();"
    ]
   }
],