
Releases: intel/neural-compressor

Intel® Low Precision Optimization Tool v1.5 Release

12 Jul 14:23

The Intel® Low Precision Optimization Tool v1.5 release features:

  • Added a pattern-lock sparsity algorithm for NLP fine-tuning tasks
    • Up to 70% unstructured sparsity and 50% structured sparsity with <2% accuracy loss on five BERT fine-tuning tasks
  • Added an NLP head pruning algorithm for HuggingFace models
    • Up to 3.0X performance speedup within 1.5% accuracy loss on HuggingFace BERT SST-2
  • Support for model optimization pipelines
  • Integrated SigOpt with multi-metric optimization
    • Complements the basic strategy to speed up tuning
  • Support for TensorFlow 2.5, PyTorch 1.8, and ONNX Runtime 1.8
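
Pattern-lock pruning fixes the sparsity pattern derived from the pretrained weights and keeps it frozen while fine-tuning updates only the surviving weights. A minimal pure-Python sketch of the idea (the helper names here are illustrative and not part of the LPOT API):

```python
import random

def lock_pattern(weights, sparsity=0.7):
    """Binary mask that zeroes the smallest-magnitude weights."""
    k = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[k]
    return [1.0 if abs(w) >= threshold else 0.0 for w in weights]

def masked_step(weights, grads, mask, lr=0.1):
    """SGD update confined to the locked sparsity pattern."""
    return [(w - lr * g) * m for w, g, m in zip(weights, grads, mask)]

random.seed(0)
w = [random.gauss(0, 1) for _ in range(16)]
mask = lock_pattern(w, sparsity=0.5)
w = [wi * mi for wi, mi in zip(w, mask)]   # apply the pattern once
for _ in range(3):                         # fine-tune: pattern stays locked
    g = [random.gauss(0, 1) for _ in range(16)]
    w = masked_step(w, g, mask)

zeros = sum(1 for wi in w if wi == 0.0)
print(f"sparsity after fine-tuning: {zeros / len(w):.2f}")
```

Because the mask multiplies every update, pruned positions stay exactly zero across all fine-tuning steps, which is what distinguishes pattern-lock from schedules that re-derive the mask each epoch.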

Validated Configurations:

  • Python 3.6 & 3.7 & 3.8 & 3.9
  • CentOS 8.3 & Ubuntu 18.04
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 2.5.0 and 1.15.0 UP1 & UP2 & UP3
  • PyTorch 1.5.0+cpu, 1.6.0+cpu, 1.8.0+cpu, ipex
  • MXNet 1.6.0, 1.7.0
  • ONNX Runtime 1.6.0, 1.7.0, 1.8.0

Distribution:

  Channel   Links                                        Install Command
  Source    GitHub (https://github.com/intel/lpot.git)   $ git clone https://github.com/intel/lpot.git
  Binary    Pip (https://pypi.org/project/lpot)          $ pip install lpot
  Binary    Conda (https://anaconda.org/intel/lpot)      $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact [email protected] if you have any questions.

Intel® Low Precision Optimization Tool v1.4.1 Release

25 Jun 16:20

The Intel® Low Precision Optimization Tool v1.4.1 release features:

  1. Support TensorFlow 2.5.0
  2. Support PyTorch 1.8.0
  3. Support TensorFlow Object Detection YOLO-V3 model

Validated Configurations:

  • Python 3.6 & 3.7 & 3.8
  • CentOS 7 & Ubuntu 18.04
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 2.5.0 and 1.15.0 UP1 & UP2
  • PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
  • MXNet 1.7.0
  • ONNX Runtime 1.6.0, 1.7.0

Distribution:

  Channel   Links                                        Install Command
  Source    GitHub (https://github.com/intel/lpot.git)   $ git clone https://github.com/intel/lpot.git
  Binary    Pip (https://pypi.org/project/lpot)          $ pip install lpot
  Binary    Conda (https://anaconda.org/intel/lpot)      $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact [email protected] if you have any questions.

Intel® Low Precision Optimization Tool v1.4 Release

30 May 18:21

The Intel® Low Precision Optimization Tool v1.4 release features:

Quantization

  1. PyTorch FX-based quantization support
  2. TensorFlow & ONNX RT quantization enhancement

Pruning

  1. Pruning/sparsity API refinement
  2. Magnitude-based pruning on PyTorch
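
Magnitude-based pruning zeroes the smallest-magnitude weights, usually on a schedule that ramps sparsity up as training proceeds. A hypothetical pure-Python sketch of one pruning pass with a cubic schedule (not LPOT's actual interface):

```python
def magnitude_prune(weights, target_sparsity):
    """Zero out the fraction of weights with the smallest magnitudes."""
    flat = sorted(abs(w) for w in weights)
    cut = flat[int(len(flat) * target_sparsity)]
    return [0.0 if abs(w) < cut else w for w in weights]

def ramp(step, total_steps, final_sparsity):
    """Cubic sparsity schedule: prune gently early, aggressively late."""
    frac = min(step / total_steps, 1.0)
    return final_sparsity * (1 - (1 - frac) ** 3)

w = [0.9, -0.1, 0.4, -0.7, 0.05, 0.3, -0.8, 0.2]
for step in (1, 2, 4):
    w = magnitude_prune(w, ramp(step, 4, final_sparsity=0.5))
print(w)  # → [0.9, 0.0, 0.4, -0.7, 0.0, 0.0, -0.8, 0.0]
```

Already-pruned weights have magnitude zero, so each later pass keeps them pruned while removing the next-smallest survivors until the final sparsity target is reached.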

Model Zoo

  1. Updated key INT8 models (BERT on TensorFlow, DLRM on PyTorch, etc.)
  2. Quantization of 20+ HuggingFace models

User Experience

  1. More comprehensive logging messages
  2. UI enhancement with FP32 optimization, auto-mixed precision (BF16/FP32), and graph visualization
  3. Online document: https://intel.github.io/lpot

Extended Capabilities

  1. Model conversion from QAT to Intel Optimized TensorFlow model

Validated Configurations:

  • Python 3.6 & 3.7 & 3.8
  • CentOS 7 & Ubuntu 18.04
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
  • PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
  • MXNet 1.7.0
  • ONNX Runtime 1.6.0, 1.7.0

Distribution:

  Channel   Links                                        Install Command
  Source    GitHub (https://github.com/intel/lpot.git)   $ git clone https://github.com/intel/lpot.git
  Binary    Pip (https://pypi.org/project/lpot)          $ pip install lpot
  Binary    Conda (https://anaconda.org/intel/lpot)      $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact [email protected] if you have any questions.

Intel® Low Precision Optimization Tool v1.3.1 Release

11 May 05:26

The Intel® Low Precision Optimization Tool v1.3.1 release features:

  1. Improved graph optimization so that explicit input/output settings are no longer required

Validated Configurations:

  • Python 3.6 & 3.7 & 3.8
  • CentOS 7 & Ubuntu 18.04
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
  • PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
  • MXNet 1.7.0
  • ONNX Runtime 1.6.0, 1.7.0

Distribution:

  Channel   Links                                        Install Command
  Source    GitHub (https://github.com/intel/lpot.git)   $ git clone https://github.com/intel/lpot.git
  Binary    Pip (https://pypi.org/project/lpot)          $ pip install lpot
  Binary    Conda (https://anaconda.org/intel/lpot)      $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact [email protected] if you have any questions.

Intel® Low Precision Optimization Tool v1.3 Release

16 Apr 14:58

The Intel® Low Precision Optimization Tool v1.3 release features:

  1. FP32 optimization & auto-mixed precision (BF16/FP32) for TensorFlow
  2. Dynamic quantization support for PyTorch
  3. ONNX Runtime v1.7 support
  4. Configurable benchmarking support (multi-instance, warmup, etc.)
  5. Multiple-batch-size calibration & mAP metrics for object detection models
  6. Experimental user-facing APIs for better usability
  7. Support for various HuggingFace models
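
Dynamic quantization defers the choice of activation scale to runtime: each tensor's scale is derived from its own observed range rather than from a calibration dataset. A simplified symmetric per-tensor INT8 round trip (illustrative only; this is not the PyTorch or LPOT implementation):

```python
def quantize(x, num_bits=8):
    """Symmetric per-tensor quantization: floats -> (int values, scale)."""
    qmax = 2 ** (num_bits - 1) - 1               # 127 for int8
    scale = max(abs(v) for v in x) / qmax or 1.0  # fall back for all-zero x
    q = [max(-qmax - 1, min(qmax, round(v / scale))) for v in x]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

x = [0.5, -1.27, 0.1, 0.0]
q, s = quantize(x)            # scale derived dynamically from this tensor
x_hat = dequantize(q, s)
err = max(abs(a - b) for a, b in zip(x, x_hat))
print(q, round(s, 4), round(err, 4))
```

Because the scale tracks each tensor's actual range, dynamic quantization suits workloads like NLP where activation ranges vary widely between inputs.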

Validated Configurations:

  • Python 3.6 & 3.7 & 3.8
  • CentOS 7 & Ubuntu 18.04
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
  • PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
  • MXNet 1.7.0
  • ONNX Runtime 1.6.0, 1.7.0

Distribution:

  Channel   Links                                        Install Command
  Source    GitHub (https://github.com/intel/lpot.git)   $ git clone https://github.com/intel/lpot.git
  Binary    Pip (https://pypi.org/project/lpot)          $ pip install lpot
  Binary    Conda (https://anaconda.org/intel/lpot)      $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact [email protected] if you have any questions.

Intel® Low Precision Optimization Tool v1.2.1 Release

02 Apr 14:53

The Intel® Low Precision Optimization Tool v1.2.1 release features:

  1. User-facing API backward compatibility with v1.1 and v1.0
  2. Refined experimental user-facing APIs for a better out-of-box experience

Validated Configurations:

  • Python 3.6 & 3.7 & 3.8
  • CentOS 7 & Ubuntu 18.04
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
  • PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
  • MXNet 1.7.0
  • ONNX Runtime 1.6.0

Distribution:

  Channel   Links                                        Install Command
  Source    GitHub (https://github.com/intel/lpot.git)   $ git clone https://github.com/intel/lpot.git
  Binary    Pip (https://pypi.org/project/lpot)          $ pip install lpot
  Binary    Conda (https://anaconda.org/intel/lpot)      $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact [email protected] if you have any questions.

Intel® Low Precision Optimization Tool v1.2 Release

12 Mar 15:31

The Intel® Low Precision Optimization Tool v1.2 release features:

  • Broad TensorFlow model type support
  • Operator-wise quantization scheme for ONNX RT
  • MSE-driven tuning for metric-free use cases
  • UX improvements, including UI web server preview support
  • Support for more key models
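
MSE-driven tuning ranks candidate quantization configurations by the mean-squared error between the FP32 tensor and its quantized reconstruction, so no task metric or labeled data is needed. A toy illustration of the selection step (the candidate scales and helper names are made up for this sketch):

```python
def quantize_dequantize(x, scale):
    """Round-trip a tensor through int8 at the given scale."""
    q = [max(-128, min(127, round(v / scale))) for v in x]
    return [v * scale for v in q]

def mse(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)

def best_scale(x, candidates):
    """Metric-free selection: the lowest reconstruction MSE wins."""
    return min(candidates, key=lambda s: mse(x, quantize_dequantize(x, s)))

x = [0.83, -0.21, 1.1, -0.97, 0.44]      # a sampled FP32 tensor
scales = [1.1 / 127, 0.05, 0.2]          # candidate per-tensor scales
print(best_scale(x, scales))             # picks the tightest-fitting scale
```

A real tuner would score whole-model configurations this way, but the principle is the same: reconstruction error stands in for an accuracy metric.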

Validated Configurations:

  • Python 3.6 & 3.7 & 3.8
  • CentOS 7 & Ubuntu 18.04
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
  • PyTorch 1.5.0+cpu, 1.6.0+cpu, ipex
  • MXNet 1.7.0
  • ONNX Runtime 1.6.0

Distribution:

  Channel   Links                                        Install Command
  Source    GitHub (https://github.com/intel/lpot.git)   $ git clone https://github.com/intel/lpot.git
  Binary    Pip (https://pypi.org/project/lpot)          $ pip install lpot
  Binary    Conda (https://anaconda.org/intel/lpot)      $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact [email protected] if you have any questions.

Intel® Low Precision Optimization Tool v1.1 Release

31 Dec 13:41

The Intel® Low Precision Optimization Tool v1.1 release features:

  • Preview support for new backends (PyTorch/IPEX, ONNX Runtime)
  • Built-in industry datasets/metrics and custom registration
  • Preliminary input/output node auto-detection for TensorFlow models
  • New INT8 quantization recipes: bias correction and label balance
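
The bias correction recipe compensates for the systematic output shift introduced by weight quantization: the per-output mean error between the FP32 layer and its quantized counterpart, measured on a small calibration batch, is folded back into the bias. A pure-Python sketch of the idea for one linear layer (names and shapes are illustrative, not LPOT's implementation):

```python
def linear(x_batch, w, b):
    """y = x . w^T + b for a batch of rows (w is out_features x in_features)."""
    return [[sum(xi * wi for xi, wi in zip(x, row)) + bj
             for row, bj in zip(w, b)] for x in x_batch]

def fake_quant(w, scale=0.1):
    """Coarse weight rounding to simulate INT8 quantization error."""
    return [[round(v / scale) * scale for v in row] for row in w]

def bias_correct(x_batch, w, w_q, b):
    """Fold the mean FP32-vs-quantized output error back into the bias."""
    y_fp = linear(x_batch, w, b)
    y_q = linear(x_batch, w_q, b)
    n = len(x_batch)
    return [bj + sum(yf[j] - yq[j] for yf, yq in zip(y_fp, y_q)) / n
            for j, bj in enumerate(b)]

x = [[1.0, 2.0], [0.5, -1.0], [2.0, 0.0]]   # small calibration batch
w = [[0.33, -0.27], [0.18, 0.41]]           # weights: 2 outputs x 2 inputs
b = [0.1, -0.2]
w_q = fake_quant(w)                         # w_q ≈ [[0.3, -0.3], [0.2, 0.4]]
b_c = bias_correct(x, w, w_q, b)            # corrected bias for the quantized layer
```

By construction, the quantized layer with the corrected bias matches the FP32 layer's mean output over the calibration batch, removing the systematic component of the quantization error.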

Validated Configurations:

  • Python 3.6 & 3.7
  • CentOS 7
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0 and 1.15.0 UP1 & UP2
  • PyTorch 1.5.0+cpu
  • MXNet 1.7.0
  • ONNX Runtime 1.6.0

Distribution:

  Channel   Links                                        Install Command
  Source    GitHub (https://github.com/intel/lpot.git)   $ git clone https://github.com/intel/lpot.git
  Binary    Pip (https://pypi.org/project/lpot)          $ pip install lpot
  Binary    Conda (https://anaconda.org/intel/lpot)      $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact [email protected] if you have any questions.

Intel® Low Precision Optimization Tool v1.0 Release

30 Oct 15:24

The Intel® Low Precision Optimization Tool v1.0 release features:

  • Refined user-facing APIs for the best out-of-box experience
  • Added TPE tuning strategies (experimental)
  • Pruning POC support on PyTorch
  • TensorBoard POC support for tuning analysis
  • Built-in INT8/dummy dataloader support
  • Built-in benchmarking support
  • Tuning history for strategy fine-tuning
  • Support for TF Keras and checkpoint model types as input

Validated Configurations:

  • Python 3.6 & 3.7
  • CentOS 7
  • Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0 and 1.15 UP1
  • PyTorch 1.5.0+cpu
  • MXNet 1.7.0

Distribution:

  Channel   Links                                               Install Command
  Source    GitHub (https://github.com/intel/lp-opt-tool.git)   $ git clone https://github.com/intel/lp-opt-tool.git
  Binary    Pip (https://pypi.org/project/ilit)                 $ pip install ilit
  Binary    Conda (https://anaconda.org/intel/ilit)             $ conda install ilit -c intel

Contact:

Please feel free to contact [email protected] if you have any questions.

Intel® Low Precision Optimization Tool v1.0 Beta Release

31 Aug 10:16

The Intel® Low Precision Optimization Tool v1.0 beta release features:

  • Built-in dataloaders and evaluators
  • Added random and exhaustive tuning strategies
  • Mixed-precision tuning support on TensorFlow (INT8/BF16/FP32)
  • Quantization-aware training POC support on PyTorch
  • TensorFlow mainstream version support, including 1.15.2, 1.15 UP1 and 2.1.0
  • 50+ models validated

Supported Models:

  TensorFlow Model      Category
  ResNet50 V1           Image Recognition
  ResNet50 V1.5         Image Recognition
  ResNet101             Image Recognition
  Inception V1          Image Recognition
  Inception V2          Image Recognition
  Inception V3          Image Recognition
  Inception V4          Image Recognition
  ResNetV2_50           Image Recognition
  ResNetV2_101          Image Recognition
  ResNetV2_152          Image Recognition
  Inception ResNet V2   Image Recognition
  SSD ResNet50 V1       Object Detection
  Wide & Deep           Recommendation
  VGG16                 Image Recognition
  VGG19                 Image Recognition
  Style_transfer        Style Transfer

  PyTorch Model         Category
  BERT-Large RTE        Language Translation
  BERT-Large QNLI       Language Translation
  BERT-Large CoLA       Language Translation
  BERT-Base SST-2       Language Translation
  BERT-Base RTE         Language Translation
  BERT-Base STS-B       Language Translation
  BERT-Base CoLA        Language Translation
  BERT-Base MRPC        Language Translation
  DLRM                  Recommendation
  BERT-Large MRPC       Language Translation
  ResNext101_32x8d      Image Recognition
  BERT-Large SQUAD      Language Translation
  ResNet50 V1.5         Image Recognition
  ResNet18              Image Recognition
  Inception V3          Image Recognition
  YOLO V3               Object Detection
  Peleenet              Image Recognition
  ResNest50             Image Recognition
  SE_ResNext50_32x4d    Image Recognition
  ResNet50 V1.5 QAT     Image Recognition
  ResNet18 QAT          Image Recognition

  MXNet Model           Category
  ResNet50 V1           Image Recognition
  MobileNet V1          Image Recognition
  MobileNet V2          Image Recognition
  SSD-ResNet50          Object Detection
  SqueezeNet V1         Image Recognition
  ResNet18              Image Recognition
  Inception V3          Image Recognition

Known Issues:

  • The TensorFlow ResNet50 v1.5 INT8 model crashes on the TensorFlow 1.15 UP1 branch

Validated Configurations:

  • Python 3.6 & 3.7
  • CentOS 7
  • Intel TensorFlow 1.15.2, 2.1.0 and 1.15 UP1
  • PyTorch 1.5
  • MXNet 1.6

Distribution:

  Channel   Links                                               Install Command
  Source    GitHub (https://github.com/intel/lp-opt-tool.git)   $ git clone https://github.com/intel/lp-opt-tool.git
  Binary    Pip (https://pypi.org/project/ilit)                 $ pip install ilit
  Binary    Conda (https://anaconda.org/intel/ilit)             $ conda config --add channels intel && conda install ilit

Contact:

Please feel free to contact [email protected] if you have any questions.