Sigmoid Attention

This repo contains the code associated with the paper Theory, Analysis, and Best Practices for Sigmoid Self-Attention (arXiv:2409.04431).

Components

The three components of this release are:

  • FlashSigmoid: A hardware-aware implementation of Sigmoid Attention (see the reference sketch after this list).
  • Optorch: A PyTorch-based functional implementation of standard optimizers.
  • Attention Simulator: A research-friendly codebase for diagnosing and debugging attention.
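
As a point of reference for what FlashSigmoid computes, below is a minimal eager-PyTorch sketch of sigmoid attention: the row-wise softmax normalization is replaced by an elementwise sigmoid over the scaled attention logits plus an additive bias (the paper recommends a bias of roughly -log(sequence length)). This is not the FlashSigmoid API; the function and variable names here are illustrative only.

import math
import torch

def sigmoid_attention(q, k, v, bias=None):
    # q, k, v: tensors of shape (batch, heads, seq_len, head_dim).
    seq_len, head_dim = q.shape[-2], q.shape[-1]
    if bias is None:
        # Additive logit bias of roughly -log(seq_len), per the paper's recommendation.
        bias = -math.log(seq_len)
    scores = q @ k.transpose(-2, -1) / math.sqrt(head_dim)
    # Elementwise sigmoid replaces the row-wise softmax normalization.
    return torch.sigmoid(scores + bias) @ v

q, k, v = (torch.randn(2, 4, 128, 64) for _ in range(3))
out = sigmoid_attention(q, k, v)
print(out.shape)  # torch.Size([2, 4, 128, 64])

Unlike softmax, the sigmoid is applied independently to each logit, so no row-wise normalization (and no row-wise reduction) is required; FlashSigmoid exploits this in its fused kernels.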

Installation

See the README.md in each component's directory for installation and usage instructions.

We provide a convenience installation helper for all three packages:

# Create an environment for sigmoid attention, if not done already.
conda create -n sigmoid-attn-py310 python=3.10
conda activate sigmoid-attn-py310

# Set up FlashSigmoid -> Optorch -> Attention Simulator.
bash setup.bash
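
After setup.bash finishes, a quick sanity check is to confirm that PyTorch is importable and sees a GPU. This is a minimal sketch, assuming the setup script installed PyTorch into the active conda environment; it does not exercise the fused kernels.

# Confirm PyTorch is importable and a CUDA device is visible.
# Assumes setup.bash installed PyTorch into the active conda environment.
import torch
print(torch.__version__, torch.cuda.is_available())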

Performance

[Figure] Sigmoid vs. Softmax forward-pass kernels on H100.
[Figure] Sigmoid vs. Softmax backward-pass kernels on H100.
[Figure] Train losses comparing SigmoidAttn with SoftmaxAttn.

Citation

If you find this work useful in your research, please cite:

@misc{ramapuram2024theoryanalysisbestpractices,
      title={Theory, Analysis, and Best Practices for Sigmoid Self-Attention},
      author={Jason Ramapuram and Federico Danieli and Eeshan Dhekane and Floris Weers and Dan Busbridge and Pierre Ablin and Tatiana Likhomanenko and Jagrit Digani and Zijin Gu and Amitis Shidani and Russ Webb},
      year={2024},
      eprint={2409.04431},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2409.04431},
}
