Commit

setting use_onnx_custom_quantizer_ops to true in tutorials pytorch ONNX export
Ofir Gordon authored and Ofir Gordon committed Nov 22, 2023
1 parent d15d9fc commit 794538f
Showing 1 changed file with 2 additions and 1 deletion.
3 changes: 2 additions & 1 deletion tutorials/quick_start/pytorch_fw/quant.py
@@ -139,7 +139,8 @@ def quantize(model: nn.Module,
 _, onnx_file_path = tempfile.mkstemp('.onnx')  # Path of exported model
 mct.exporter.pytorch_export_model(model=quantized_model, save_model_path=onnx_file_path,
                                   repr_dataset=representative_data_gen, target_platform_capabilities=tpc,
-                                  serialization_format=mct.exporter.PytorchExportSerializationFormat.ONNX)
+                                  serialization_format=mct.exporter.PytorchExportSerializationFormat.ONNX,
+                                  use_onnx_custom_quantizer_ops=True)


 return quantized_model, QuantInfo(user_info=quantization_info, tpc_info=tpc.get_info(), quantization_workflow=workflow, mp_weights_compression=mp_wcr)
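For context on the unchanged first line of the hunk: `tempfile.mkstemp` returns a pair of an open OS-level file descriptor and an absolute path, and the tutorial discards the descriptor (the leading `_`) to keep only the path that is later handed to the MCT exporter. A minimal standalone sketch of just that stdlib part (the `mct.exporter` call itself needs the model_compression_toolkit package and is not reproduced here):

```python
import os
import tempfile

# mkstemp returns (file_descriptor, absolute_path); the tutorial keeps only the path
fd, onnx_file_path = tempfile.mkstemp('.onnx')
os.close(fd)  # close the descriptor; an exporter would reopen the file by path

print(onnx_file_path.endswith('.onnx'))  # → True

os.remove(onnx_file_path)  # clean up the temporary file
```

Note that discarding the descriptor without closing it, as the tutorial line does, leaks one file handle per call; closing it explicitly (as above) is the tidier pattern.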
