Releases: warner-benjamin/fastxtend
v0.1.7
v0.1.6
What's Changed
- Audio Fixes for Current Versions of fastcore & torchaudio
- Foreach EMA fixed to work with multiple types
Full Changelog: v0.1.5...v0.1.6
v0.1.5.post1
Fixes a few doc strings.
v0.1.5
- Adds support for training Hugging Face Transformers models via `HuggingFaceCallback` & `HuggingFaceLoader`
- Profiler Callbacks updated to work with `HuggingFaceCallback` & `HuggingFaceLoader`
- Compile Callback improvements
  - Support LR Finder
  - Support PyTorch 2.1 compile format
  - Improve saved model compatibility
- `GradientAccumulation` & `GradientAccumulationSchedule` improvements
  - Auto-detect micro-batch size if using a fastai dataloader
  - Drop last macro-batch to match full-size training
- Add `DataLoaderMixin` for adding fastai dataloader features to non-fastai dataloaders
- Switch FFCV `Loader` to use `DataLoaderMixin`
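The macro-batch behavior described above can be sketched in plain Python; `macro_batches` is a hypothetical helper, not fastxtend's implementation. With gradient accumulation, `accum_steps` micro-batches form one macro-batch, and dropping the last incomplete macro-batch keeps every optimizer step at the full effective batch size:

```python
def macro_batches(num_samples, micro_bs, accum_steps):
    """Number of full macro-batches (optimizer steps) per epoch, dropping
    the last incomplete one so every step sees the full effective batch
    size. Illustrative helper, not fastxtend's implementation."""
    effective_bs = micro_bs * accum_steps
    return num_samples // effective_bs

# e.g. 1000 samples, micro-batch 16, accumulating over 4 micro-batches
# gives an effective batch size of 64 and 15 full macro-batches; the
# trailing 40 samples are dropped.
```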
Full Changelog: v0.1.4...v0.1.5
v0.1.4
- Adds support for bfloat16 mixed precision training via fastxtend's `MixedPrecision` callback
- Adds two callback utilities for callback developers:
  - `CallbackScheduler`: a mixin for scheduling callback values during training
  - `LogDispatch`: a new default callback for logging values from callbacks to `WandBCallback` & `TensorBoardCallback`
- Adds `GradientAccumulation` callback which logs full batches instead of micro-batches
- Adds `GradientAccumulationSchedule` callback which supports batch size warmup via a schedulable accumulation batch size
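One way to picture a schedulable accumulation batch size is a warmup that grows the effective batch size over the first part of training and then holds it. The sketch below is a hypothetical linear schedule, not fastxtend's actual schedule:

```python
def accum_bs_schedule(pct, start_bs, final_bs, warmup_pct=0.3):
    """Accumulation batch size as a function of training progress `pct`
    (0.0 to 1.0): linearly warm from start_bs to final_bs over the first
    warmup_pct of training, then hold. Illustrative sketch only."""
    if pct >= warmup_pct:
        return final_bs
    frac = pct / warmup_pct
    return int(start_bs + frac * (final_bs - start_bs))
```

Because the schedule returns a target effective batch size, the callback can derive the number of micro-batches to accumulate at each step from it.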
Full Changelog: v0.1.3...v0.1.4
v0.1.3
Optimizers
- Adds Sophia & StableAdam optimizers
- Adds native fastai support for bitsandbytes 8-bit optimizers
- Reduce memory usage of ForEach L2 weight decay and the RAdam, LAMB, & Ranger optimizers
- Increase LAMB step speed
Other Features
- Add asynchronous fastai batch transforms to the FFCV Loader
- Add DynamoExplain callback for diagnosing `torch.compile` results
Full Changelog: v0.1.2...v0.1.3
v0.1.2
- Fix issue with `ProgressiveResize` and multiple inputs and/or labels
- Add experimental `CompilerCallback` and patches to integrate `torch.compile` with fastai
Full Changelog: v0.1.1...v0.1.2
v0.1.1
Bug fixes for profilers and progressive resizing preallocation.
Full Changelog: v0.1...v0.1.1
v0.1
What's Changed
- Add the Initial Integration of the FFCV DataLoader
- fastxtend+FFCV Documentation
- Bug fixes for Progressive Resize Callback and updated documentation
- Throughput and Simple Profiler improvements and bug fixes
- An example imagenette.py script and config file which can use most of fastxtend's features
- Add a conda install script for prerequisites
Full Changelog: v0.0.19...v0.1
v0.0.19
What's Changed
- Progressive Resize Fixes & Improvements
- Losses Documentation Pass and Bug Fix
- TensorBase deepcopy support
- `CutMixUpAugment`'s `augment_finetune` is the number of epochs or percent of training to finetune for
Full Changelog: v0.0.18...v0.0.19