This repository contains the official implementation of "Towards Automatic Instrumentation by Learning to Separate Parts in Symbolic Multitrack Music" (ISMIR 2021).
Towards Automatic Instrumentation by Learning to Separate Parts in Symbolic Multitrack Music
Hao-Wen Dong, Chris Donahue, Taylor Berg-Kirkpatrick and Julian McAuley
Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), 2021
[homepage]
[paper]
[video]
[slides]
[video (long)]
[slides (long)]
[code]
- Content
- Prerequisites
- Directory structure
- Data Collection
- Data Preprocessing
- Models
- Baseline algorithms
- Configuration
- Citation
You can install the dependencies by running `pipenv install` (recommended) or `pip install -e .`. Python > 3.6 is required.
```
├─ analysis         Notebooks for analysis
├─ scripts          Scripts for running experiments
├─ models           Pretrained models
└─ arranger         Main Python module
   ├─ config.yaml   Configuration file
   ├─ data          Code for collecting and processing data
   ├─ common        Most-common algorithm
   ├─ zone          Zone-based algorithm
   ├─ closest       Closest-pitch algorithm
   ├─ lstm          LSTM model
   └─ transformer   Transformer model
```
```python
# Collect Bach chorales from the music21 corpus
import shutil
import music21.corpus

for path in music21.corpus.getComposer("bach"):
    if path.suffix in (".mxl", ".xml"):
        shutil.copyfile(path, "data/bach/raw/" + path.name)
```
```sh
# Download the metadata
wget -O data/musicnet/musicnet_metadata.csv https://homes.cs.washington.edu/~thickstn/media/musicnet_metadata.csv
```
```sh
# Download the dataset
wget -O data/nes/nesmdb_midi.tar.gz http://deepyeti.ucsd.edu/cdonahue/nesmdb/nesmdb_midi.tar.gz
# Extract the archive
tar zxf data/nes/nesmdb_midi.tar.gz -C data/nes/
# Rename the folder for consistency
mv data/nes/nesmdb_midi/ data/nes/raw/
```
```sh
# Download the dataset
wget -O data/lmd/lmd_matched.tar.gz http://hog.ee.columbia.edu/craffel/lmd/lmd_matched.tar.gz
# Extract the archive
tar zxf data/lmd/lmd_matched.tar.gz -C data/lmd/
# Rename the folder for consistency
mv data/lmd/lmd_matched/ data/lmd/raw/
# Download the filenames
wget -O data/lmd/md5_to_paths.json http://hog.ee.columbia.edu/craffel/lmd/md5_to_paths.json
```
The following commands assume Bach chorales. You might want to replace the dataset identifier `bach` with that of another dataset (`musicnet` for MusicNet, `nes` for the NES Music Database and `lmd` for the Lakh MIDI Dataset).
```sh
# Preprocess the data
python3 arranger/data/collect_bach.py -i data/bach/raw/ -o data/bach/json/ -j 1
# Collect training data
python3 arranger/data/collect.py -i data/bach/json/ -o data/bach/s_500_m_10/ -d bach -s 500 -m 10 -j 1
```
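If you are preparing several datasets, the per-dataset steps can be scripted. Below is a minimal sketch that assumes each dataset has a matching `collect_<dataset>.py` script (an assumption — check `arranger/data/` for the actual script names); `echo` is used to preview the commands instead of running them:

```shell
# Preview the preprocessing command for each dataset identifier.
# The collect_<dataset>.py naming is an assumption; verify against arranger/data/.
for dataset in bach musicnet nes lmd; do
  echo python3 "arranger/data/collect_${dataset}.py" \
    -i "data/${dataset}/raw/" -o "data/${dataset}/json/" -j 1
done
```

Remove the `echo` to actually execute the commands.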
- LSTM model
  - `arranger/lstm/train.py`: Train the LSTM model
  - `arranger/lstm/infer.py`: Infer with the LSTM model
- Transformer model
  - `arranger/transformer/train.py`: Train the Transformer model
  - `arranger/transformer/infer.py`: Infer with the Transformer model
Pretrained models can be found in the `models/` directory. To run a pretrained model, pass the corresponding command line options to the `infer.py` scripts. You may want to follow the commands used in the experiment scripts provided in `scripts/infer_*.sh`.

For example, use the following command to run the pretrained BiLSTM model with embeddings.
```sh
# Assuming we are at the root of the repository
cp models/bach/lstm/bidirectional_embedding/best_models.hdf5 {OUTPUT_DIRECTORY}
python3 arranger/lstm/infer.py \
    -i {INPUT_DIRECTORY} -o {OUTPUT_DIRECTORY} \
    -d bach -g 0 -bi -pe -bp -be -fi
```
The input directory (`{INPUT_DIRECTORY}`) contains the input JSON files, which can be generated by `muspy.save()`. The output directory (`{OUTPUT_DIRECTORY}`) should contain the pretrained model and will contain the output files. The `-d bach` option indicates that we are using the Bach chorale dataset. The `-g 0` option will run the model on the first GPU. The `-bi -pe -bp -be -fi` options specify the model settings (run `python3 arranger/lstm/infer.py -h` for more information).
- Most-common algorithm
  - `arranger/common/learn.py`: Learn the most common label
  - `arranger/common/infer.py`: Infer with the most-common algorithm
- Zone-based algorithm
  - `arranger/zone/learn.py`: Learn the optimal zone setting
  - `arranger/zone/infer.py`: Infer with the zone-based algorithm
- Closest-pitch algorithm
  - `arranger/closest/infer.py`: Infer with the closest-pitch algorithm
- MLP model
  - `arranger/mlp/train.py`: Train the MLP model
  - `arranger/mlp/infer.py`: Infer with the MLP model
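To make the simplest baseline concrete, here is a toy sketch of the idea behind the most-common algorithm: learn the single most frequent part label on training data, then predict that label for every note. The label names below are made up for illustration and do not come from the repository:

```python
from collections import Counter

# Toy training labels (made-up part names)
train_labels = ["melody", "bass", "melody", "accomp", "melody"]

# "Learn": find the most common label in the training data
most_common = Counter(train_labels).most_common(1)[0][0]

# "Infer": predict the same label for every new note
predictions = [most_common] * 4
print(most_common, predictions)  # melody ['melody', 'melody', 'melody', 'melody']
```

Despite its simplicity, this kind of frequency baseline is a useful floor against which the learned models are compared.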
In `arranger/config.yaml`, you can configure the MIDI program numbers used for each track in the generated sample files. You can also configure the colors used in the generated piano-roll visualizations.
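For reference, a hypothetical fragment showing the kind of settings involved — the key names and values here are illustrative only, not copied from the actual file:

```yaml
# Illustrative sketch only; consult arranger/config.yaml for the real schema
bach:
  tracks:
    Soprano:
      program: 52        # MIDI program number used in generated samples
      color: "#1f77b4"   # color used in piano-roll visualizations
```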
Please cite the following paper if you use the code provided in this repository.
Hao-Wen Dong, Chris Donahue, Taylor Berg-Kirkpatrick and Julian McAuley, "Towards Automatic Instrumentation by Learning to Separate Parts in Symbolic Multitrack Music," Proceedings of the International Society for Music Information Retrieval Conference (ISMIR), 2021.
```bibtex
@inproceedings{dong2021arranger,
  author = {Hao-Wen Dong and Chris Donahue and Taylor Berg-Kirkpatrick and Julian McAuley},
  title = {Towards Automatic Instrumentation by Learning to Separate Parts in Symbolic Multitrack Music},
  booktitle = {Proceedings of the International Society for Music Information Retrieval Conference (ISMIR)},
  year = 2021,
}
```