
# Attacking Compressed NLP


This project investigates the transferability of adversarial samples across state-of-the-art NLP models and their compressed versions, and infers the effects that different compression techniques have on adversarial attacks.
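The central quantity here can be sketched in a few lines: given which adversarial samples fool the source model, transferability is the fraction of those that also fool the compressed model. The function name and toy data below are illustrative, not taken from this repository:

```python
# Minimal sketch of measuring adversarial transferability between a source
# model and its compressed version. Names and data are illustrative only,
# not taken from this repository.

def transfer_rate(source_fooled, compressed_fooled):
    """Fraction of samples that fool the source model AND also fool
    the compressed model (i.e. the attack transfers)."""
    transferred = sum(1 for s, c in zip(source_fooled, compressed_fooled) if s and c)
    successful = sum(source_fooled)
    return transferred / successful if successful else 0.0

# Example: 4 adversarial samples; 3 fool the source model,
# and 2 of those 3 also fool the compressed (e.g. quantized) model.
source_fooled     = [True, True, False, True]
compressed_fooled = [True, False, False, True]
print(transfer_rate(source_fooled, compressed_fooled))  # ≈ 0.667
```

A high transfer rate suggests the compression technique preserved the decision boundaries the attack exploits; a low one suggests compression disrupted them.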


## 🔧 Dependencies and Installation

### Dependencies

### Installation

1. Clone the repo

   ```shell
   git clone https://github.com/95anantsingh/NYU-Attacking-Compressed-NLP.git
   cd NYU-Attacking-Compressed-NLP
   ```

2. Create the conda environment

   ```shell
   conda env create -f environment.yml
   ```

3. Download the BERT model weights

   ```shell
   wget -i bert_weight_urls --directory-prefix models/data/weights
   ```

4. Download the LSTM model weights

   ```shell
   wget -i lstm_weight_urls --directory-prefix models/data/weights
   ```

Additionally, the large pretrained models are stored on a drive link; please download them and place them in the corresponding locations. More details are in the individual READMEs.


## 📁 Project Structure

This repo is structured as follows:

- BERT-based SST attacks folder: see documentation here
- LSTM-based SST attacks folder: see documentation here

## 📚 Datasets

Dataset used: https://huggingface.co/datasets/sst. It will be downloaded automatically when the code is run.
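The SST dataset distributes fine-grained sentiment scores as floats in [0, 1]; binary sentiment tasks conventionally threshold these at 0.5. A minimal sketch of that preprocessing step (the helper name is ours, and whether this repo performs exactly this step is an assumption):

```python
# Hypothetical helper: binarize SST's continuous sentiment scores at 0.5,
# the conventional SST-2 split. Illustrative, not taken from this repository.

def binarize_sst_label(score: float) -> int:
    """Map a fine-grained SST sentiment score in [0, 1] to a binary label:
    0 = negative (score < 0.5), 1 = positive (score >= 0.5)."""
    return int(score >= 0.5)

examples = [0.12, 0.49, 0.5, 0.97]
print([binarize_sst_label(s) for s in examples])  # → [0, 0, 1, 1]
```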


## ⚡ Quick Inference

```shell
conda activate NLPattack
cd models/bert/sst
```

Instructions for running the code and descriptions of the files in each folder are provided in a separate README.md inside that folder.


## 📘 Documentation

The project presentation and results can be found at `docs/presentation.pdf`.
The demo video can be downloaded from `docs/attack-demo.webm`.


## 📜 License

This repo is licensed under the GNU General Public License, Version 3.0.


## 📧 Contact

If you have any questions, please email [email protected] or [email protected].

This project was part of the graduate-level High Performance Machine Learning course at New York University.