Releases: warner-benjamin/fastxtend

v0.1.7

18 Dec 15:29

What's Changed

  • Easily Access Original Transformers Model #22 (see the sketch below)
  • Bug Fix: Remove delegates for log_wandb_table #20
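
Pull request #22 makes the original Transformers model easy to reach from the fastai Learner. A minimal sketch, assuming the Learner built in the v0.1.5 example further down this page and a hypothetical `hf_model` accessor; the attribute name is a guess at what #22 exposes, not a confirmed API:

```python
# learn: a Learner built with HuggingFaceCallback & HuggingFaceLoader
# (see the v0.1.5 sketch further down this page).
learn.fit_one_cycle(3, 2e-5)

# Hypothetical accessor for the unwrapped Transformers model (#22).
original_model = learn.hf_model
original_model.save_pretrained('finetuned-model')  # standard Transformers API
```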

Full Changelog: v0.1.6...v0.1.7

v0.1.6

19 Nov 03:53

What's Changed

  • Audio Fixes for Current Versions of fastcore & torchaudio
  • Foreach EMA fixed to work with multiple types (see the sketch below)
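
A minimal sketch of the fixed foreach EMA, assuming EMACallback is re-exported from fastxtend.vision.all and needs no required arguments; later sketches on this page reuse the dls built here:

```python
from fastai.vision.all import *
from fastxtend.vision.all import *  # assumed to re-export EMACallback

path = untar_data(URLs.PETS)/'images'
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2,
    label_func=lambda f: f[0].isupper(), item_tfms=Resize(224))

# EMACallback keeps an exponential moving average of the model weights; this
# release fixes its foreach implementation for parameters of multiple dtypes.
learn = vision_learner(dls, resnet18, metrics=accuracy, cbs=EMACallback())
learn.fit_one_cycle(2, 3e-3)
```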

Full Changelog: v0.1.5...v0.1.6

v0.1.5.post1

17 Oct 06:11

Fixes a few docstrings.

v0.1.5

16 Oct 04:27
  • Adds support for training Hugging Face Transformers models via HuggingFaceCallback & HuggingFaceLoader (see the sketch after this list)
  • Profiler Callbacks updated to work with HuggingFaceCallback & HuggingFaceLoader
  • Compile Callback improvements
    • Support LR Finder
    • Support PyTorch 2.1 Compile format
    • Improve saved model compatibility
  • GradientAccumulation & GradientAccumulationSchedule improvements
    • Auto-detect micro-batch size if using fastai dataloader
    • Drop last macro-batch to match full size training
  • Add DataLoaderMixin for adding fastai dataloader features to non-fastai dataloaders
    • Switch FFCV Loader to use DataLoaderMixin
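
A minimal sketch of the new Hugging Face integration, assuming HuggingFaceLoader accepts a tokenized datasets.Dataset plus standard DataLoader keyword arguments, and that a HuggingFaceLoss exists which defers to the model's built-in loss; verify both against the fastxtend text docs:

```python
from fastai.text.all import *
from fastxtend.text.all import *  # assumed home of HuggingFaceLoader & HuggingFaceCallback
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased')
dataset = load_dataset('glue', 'cola')
dataset = dataset.map(
    lambda b: tokenizer(b['sentence'], truncation=True,
                        padding='max_length', max_length=128),
    batched=True, remove_columns=['sentence', 'idx'])
dataset = dataset.rename_column('label', 'labels')  # Transformers models expect 'labels'
dataset = dataset.with_format('torch')

# HuggingFaceLoader wraps a Hugging Face Dataset so fastai can iterate it;
# batch_size/shuffle are assumed to mirror torch.utils.data.DataLoader.
dls = DataLoaders(
    HuggingFaceLoader(dataset['train'], batch_size=32, shuffle=True),
    HuggingFaceLoader(dataset['validation'], batch_size=64, shuffle=False))

model = AutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased')
# HuggingFaceCallback feeds the raw batch dict to the model; HuggingFaceLoss
# (assumed) tells fastai to use the loss the Transformers model computes itself.
learn = Learner(dls, model, loss_func=HuggingFaceLoss(), cbs=HuggingFaceCallback())
learn.fit_one_cycle(3, 2e-5)
```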

Full Changelog: v0.1.4...v0.1.5

v0.1.4

21 Jun 06:18
  • Adds support for bfloat16 mixed precision training via fastxtend's MixedPrecision callback (see the sketch after this list)
  • Adds two callback utilities for callback developers:
    • CallbackScheduler: a mixin for scheduling callback values during training
    • LogDispatch: a new default callback for logging values from callbacks to WandBCallback & TensorBoardCallback
  • Adds GradientAccumulation callback which logs full batches instead of micro-batches
  • Adds GradientAccumulationSchedule callback which supports batch size warmup via a schedulable accumulation batch size
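
A minimal sketch combining the new bfloat16 mixed precision with full-batch-logging gradient accumulation, reusing the dls from the v0.1.6 example above; to_bf16() and the n_acc argument name are assumptions modeled on fastai's to_fp16() and GradientAccumulation conventions:

```python
from fastai.vision.all import *
from fastxtend.vision.all import *

# to_bf16() is assumed to attach fastxtend's MixedPrecision callback in
# bfloat16 mode, mirroring fastai's to_fp16() convention.
learn = vision_learner(dls, resnet50, metrics=accuracy).to_bf16()

# fastxtend's GradientAccumulation logs full effective batches instead of
# micro-batches; n_acc (the effective batch size) is an assumed argument
# name borrowed from fastai's own GradientAccumulation callback.
learn.fit_one_cycle(5, 3e-3, cbs=GradientAccumulation(n_acc=64))
```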

Full Changelog: v0.1.3...v0.1.4

v0.1.3

07 Jun 16:44

Optimizers

  • Adds Sophia & StableAdam optimizers (see the sketch after this list)
  • Adds native fastai support for bitsandbytes 8-bit optimizers
  • Reduce the memory usage of ForEach L2 weight decay and of the RAdam, LAMB, & Ranger optimizers
  • Increase LAMB step speed
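
A minimal sketch of selecting the new optimizers, assuming fastxtend's convention of lowercase optimizer factory functions passed to opt_func; the sophia and stableadam names and the eightbit flag are assumptions to verify against the optimizer docs:

```python
from fastai.vision.all import *
from fastxtend.vision.all import *

# Sophia and StableAdam via assumed factory-function names.
learn = vision_learner(dls, resnet18, opt_func=sophia())
learn = vision_learner(dls, resnet18, opt_func=stableadam())

# bitsandbytes 8-bit Adam; the eightbit flag is an assumption.
learn = vision_learner(dls, resnet18, opt_func=adam(eightbit=True))
```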

Other Features

  • Add asynchronous fastai batch transforms to the FFCV Loader
  • Add DynamoExplain callback for diagnosing torch.compile results (see the sketch below)
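
A minimal sketch of DynamoExplain, continuing from the optimizer example above; attaching it like any fastai callback with a zero-argument constructor is an assumption, as is the exact diagnostic output:

```python
# DynamoExplain is assumed to surface torch._dynamo explain-style diagnostics
# (graph breaks, recompiles) for a compiled model during a short run.
learn = vision_learner(dls, resnet18, cbs=DynamoExplain())
learn.fit_one_cycle(1, 3e-3)
```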

Full Changelog: v0.1.2...v0.1.3

v0.1.2

29 Mar 04:20
  • Fix issue with ProgressiveResize and multiple inputs and/or labels.
  • Add experimental CompilerCallback and patches to integrate torch.compile with fastai (see the sketch below)
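
A minimal sketch of the experimental torch.compile integration, reusing the dls from the v0.1.6 example above; constructing CompilerCallback with no arguments is an assumption, and its backend/mode options are not shown because they are unconfirmed:

```python
from fastai.vision.all import *
from fastxtend.vision.all import *

# CompilerCallback is assumed to wrap the model with torch.compile when
# training starts; attach it like any other fastai callback.
learn = vision_learner(dls, resnet18, cbs=CompilerCallback())
learn.fit_one_cycle(1, 3e-3)
```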

Full Changelog: v0.1.1...v0.1.2

v0.1.1

27 Mar 22:58

Bug fixes for profilers and progressive resizing preallocation.

Full Changelog: v0.1...v0.1.1

v0.1

27 Mar 07:24

What's Changed

  • Add the Initial Integration of the FFCV DataLoader
  • fastxtend+FFCV Documentation
  • Bug fixes for Progressive Resize Callback and updated documentation
  • Throughput and Simple Profiler improvements and bug fixes
  • An example imagenette.py script and config file which uses most of fastxtend's features
  • Add a conda install script for prerequisites

Full Changelog: v0.0.19...v0.1

v0.0.19

01 Mar 08:12
Pre-release

What's Changed

  • Progressive Resize Fixes & Improvements
  • Losses Documentation Pass and Bug Fix
  • TensorBase deepcopy support
  • CutMixUpAugment: augment_finetune now sets the number of epochs or percent of training to finetune for

Full Changelog: v0.0.18...v0.0.19