
[LLM Bench] Additional Config for PT Benchmarking in LLM Bench #1352

Open

anzr299 opened this issue Dec 10, 2024 · 0 comments

anzr299 (Contributor) commented Dec 10, 2024

I would like to request the ability to pass additional configuration to the PyTorch framework in LLM bench when loading a pipeline.
When loading the SD3 pipeline without text_encoder_3 and tokenizer_3, these components are expected to be passed as None to from_pretrained.
At the moment, loading such a pipeline requires modifying this line like so:
pipe = model_class.from_pretrained(model_path, text_encoder_3=None, tokenizer_3=None)
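For illustration, here is a minimal sketch of the requested behavior, assuming a hypothetical `extra_from_pretrained_kwargs` option (the name, helper function, and model id below are not part of the current llm_bench code) whose contents would be forwarded verbatim to `from_pretrained`:

```python
from diffusers import StableDiffusion3Pipeline

def load_image_gen_pipeline(model_path, extra_from_pretrained_kwargs=None):
    # Hypothetical hook: extra keyword arguments (e.g. parsed from a JSON or CLI
    # option) are forwarded verbatim to from_pretrained.
    extra_from_pretrained_kwargs = extra_from_pretrained_kwargs or {}
    return StableDiffusion3Pipeline.from_pretrained(model_path, **extra_from_pretrained_kwargs)

# Loading SD3 without the third text encoder and its tokenizer:
pipe = load_image_gen_pipeline(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    extra_from_pretrained_kwargs={"text_encoder_3": None, "tokenizer_3": None},
)
```

This would make it possible to benchmark such reduced pipelines without patching the loading code by hand.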

anzr299 changed the title from "Additional Config for PT Benchmarking in LLM Bench" to "[LLM Bench Additional Config for PT Benchmarking in LLM Bench" on Dec 10, 2024
anzr299 changed the title from "[LLM Bench Additional Config for PT Benchmarking in LLM Bench" to "[LLM Bench] Additional Config for PT Benchmarking in LLM Bench" on Dec 10, 2024