
How to make 4bit pytorch_quantization model export to .engine model? #6547

Triggered via issue comment, December 11, 2024 07:20
Status: Skipped
Total duration: 6s

Workflow: blossom-ci.yml
on: issue_comment
Jobs:
Authorization: 0s
Upload log: 0s
Vulnerability scan: 0s
Start ci job: 0s