# Vector Quantized Contrastive Predictive Coding for Template-based Music Generation

Gaëtan Hadjeres, Sony CSL, Paris, France ([email protected])
Léopold Crestel, Sony CSL, Paris, France ([email protected])

This is the companion GitHub repository of the paper *Vector Quantized Contrastive Predictive Coding for Template-based Music Generation*. Results are available on our accompanying website.

## Installation

To install:

- Clone the repository.
- Run the following (we recommend using a virtualenv; a typical setup is sketched below):

```
pip install -r requirements.txt
```
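A minimal virtualenv setup, using only standard Python tooling (the environment name `venv` is arbitrary):

```
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```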

## How to use it

All the experiments reported in the paper can be reproduced using the configuration files located in `VQCPCB/configs`.

Encoders are trained independently from the decoder, in a self-supervised manner. To train a particular encoder, run

```
python main_encoder.py -t -c VQCPCB/configs/encoder_*.py
```

where `encoder_*` is the name of the configuration file.
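For intuition about what this self-supervised phase learns, here is a minimal, illustrative sketch of the two ingredients named in the title: nearest-neighbour vector quantization against a learned codebook, and a CPC-style contrastive (InfoNCE) loss that scores the true continuation against negative samples. It assumes PyTorch and is not the repository's code:

```python
# Illustrative sketch only -- not the repository's implementation.
import torch
import torch.nn.functional as F

def vector_quantize(z, codebook):
    """Snap each encoding in z (B, D) to its nearest entry of codebook (K, D)."""
    dists = torch.cdist(z, codebook)   # (B, K) pairwise Euclidean distances
    indices = dists.argmin(dim=1)      # discrete cluster index per encoding
    return codebook[indices], indices  # quantized vectors and their codes

def info_nce_loss(pred, positive, negatives):
    """CPC objective: classify the true continuation among negatives.

    pred, positive: (B, D); negatives: (B, N, D).
    """
    pos_score = (pred * positive).sum(dim=-1, keepdim=True)   # (B, 1)
    neg_scores = torch.einsum('bd,bnd->bn', pred, negatives)  # (B, N)
    logits = torch.cat([pos_score, neg_scores], dim=1)        # (B, 1+N)
    target = torch.zeros(logits.size(0), dtype=torch.long,
                         device=logits.device)                # positive sits at index 0
    return F.cross_entropy(logits, target)
```

The discrete indices returned by the quantizer are the "clusters" mentioned below; roughly speaking, a sequence of such codes is what serves as the template that the decoder later elaborates.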

Trained models are stored in `models/`. To observe the clusters learned by a trained encoder, run

```
python main_encoder.py -l -c models/encoder_*/config.py
```
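Conceptually, observing clusters amounts to mapping encodings to codebook indices. Continuing the illustrative sketch above (again, not what the script actually does internally):

```python
# Toy usage of the vector_quantize sketch above -- illustrative only.
import torch

codebook = torch.randn(16, 8)    # 16 codes of dimension 8
encodings = torch.randn(32, 8)   # 32 frame encodings
_, codes = vector_quantize(encodings, codebook)
print(codes.tolist())            # one discrete cluster index per frame
```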

To train a decoder for a particular encoder, run

```
python main_decoder.py -t -c VQCPCB/configs/decoder_*.py
```

after having specified the path to the encoder in the configuration file `VQCPCB/configs/decoder_*.py`:

```
'config_encoder': 'models/encoder_*/config.py',
```
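The configuration files are Python files, so this entry is one item of a dictionary. A hypothetical sketch of how it might sit in a decoder configuration, where only the `config_encoder` key comes from this README and the dictionary name and all other fields are illustrative assumptions:

```python
# VQCPCB/configs/decoder_example.py -- hypothetical sketch; apart from
# 'config_encoder', the names here may not match the repository's schema.
config = {
    'config_encoder': 'models/encoder_*/config.py',  # trained encoder to build on
    'num_layers': 6,                                 # illustrative hyperparameter
    'lr': 1e-4,                                      # illustrative learning rate
}
```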

Variations of chorale excerpts, as well as complete re-harmonisations of all the chorales in our corpus, can be generated by running

```
python main_decoder.py -l -c models/decoder_*/config.py
```