Seeking Guidance on Training a Model for Generating Gregorian Chant Music

Hi everyone,

I’m interested in training a model to generate music, specifically in the style of Gregorian chant. I’m reaching out to ask where I should start looking for resources, frameworks, or foundational knowledge in this area.

I’m not looking for specific recipes or detailed instructions at this point, just some guidance on the best places to begin my research. Any recommendations would be greatly appreciated.

Thank you in advance for your help!

Best,
Martim


1. Foundational Knowledge

  • Understand Music Representation:
    • Learn how music is represented digitally (e.g., MIDI, MusicXML, or symbolic formats).
    • Explore music theory concepts relevant to Gregorian chant, such as the church modes, monophony, and free (unmetered) rhythm.
  • Study Music Generation Basics:
    • Review research papers on music generation. A good starting point is OpenAI’s MuseNet or Google’s Magenta.
    • Learn about generative models such as Transformers, Variational Autoencoders (VAEs), and Recurrent Neural Networks (RNNs) for sequential data.
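To make "symbolic representation" concrete, here is a minimal sketch in plain Python: a short melody written as note names is converted to MIDI pitch numbers, the kind of integer representation a model would consume. The melody below is an invented D Dorian fragment for illustration, not an actual chant.

```python
# Toy symbolic representation of a monophonic, chant-like melody.
# The melody is illustrative, not taken from a real chant source.

NOTE_TO_SEMITONE = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def to_midi(note: str, octave: int) -> int:
    """Convert a note name plus octave to a MIDI pitch number (C4 = 60)."""
    return 12 * (octave + 1) + NOTE_TO_SEMITONE[note]

# A short melody in D Dorian (white keys starting on D), written symbolically.
melody = [("D", 4), ("F", 4), ("G", 4), ("A", 4), ("G", 4), ("F", 4), ("D", 4)]
midi_pitches = [to_midi(n, o) for n, o in melody]
print(midi_pitches)  # [62, 65, 67, 69, 67, 65, 62]
```

In practice you would get these numbers by parsing MIDI or MusicXML files (e.g., with music21) rather than writing them by hand, but the underlying idea is the same: music becomes a sequence of discrete symbols.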

2. Datasets for Gregorian Chant

  • Available Datasets:
    • Essen Folk Song Collection: Includes European folk songs, with some Gregorian-style melodies.
    • Corpus Monodicum: A collection of medieval chant manuscripts.
    • Music21’s Chant Corpus: Music21 provides a symbolic corpus of chant data.
  • Creating Your Dataset:
    • Use optical music recognition (OMR) tools such as Audiveris to digitize Gregorian chant scores, then clean up the results in a notation editor like MuseScore.
    • Extract chant data from public domain sources like the Choral Public Domain Library (CPDL).
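Whichever dataset you end up with, you will need to turn symbolic melodies into integer token sequences before training. A minimal, hedged sketch (the melodies and the `<bos>`/`<eos>` token names are made up for illustration; a real pipeline would parse MIDI/MusicXML/kern files first):

```python
# Sketch: encode symbolic melodies as integer token sequences for training.
# The melodies are invented examples, not real chant data.

melodies = [
    ["D4", "F4", "G4", "A4", "G4", "F4", "D4"],
    ["D4", "E4", "F4", "E4", "D4"],
]

# Build a vocabulary, reserving ids for beginning/end-of-sequence tokens.
vocab = {"<bos>": 0, "<eos>": 1}
for mel in melodies:
    for note in mel:
        vocab.setdefault(note, len(vocab))

def encode(mel):
    """Wrap a melody in <bos>/<eos> markers and map each note to its token id."""
    return [vocab["<bos>"]] + [vocab[n] for n in mel] + [vocab["<eos>"]]

encoded = [encode(m) for m in melodies]
print(encoded)  # [[0, 2, 3, 4, 5, 4, 3, 2, 1], [0, 2, 6, 3, 6, 2, 1]]
```

Richer tokenizations also encode duration and rests, but for unmetered chant a pitch-only vocabulary is a reasonable first experiment.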

3. Frameworks and Tools

  • Music-Specific Frameworks:
    • Magenta: A research project by Google focusing on music and art generation using TensorFlow.
    • MuseScore: For notation and data preprocessing.
    • Music21: A Python library for analyzing and working with symbolic music data.
  • General Deep Learning Frameworks:
    • PyTorch or TensorFlow: To build and train generative models.
    • Hugging Face Transformers: For adapting text-based generative models (like GPT or T5) to music generation.
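Before committing to a deep learning framework, it can help to see what "modeling the next token" means at all. The sketch below is a toy first-order Markov (bigram) baseline in plain Python, not a Transformer or RNN: it counts which note follows which in the training melodies and samples new sequences from those counts. The training melodies are invented for the example.

```python
import random
from collections import defaultdict

# Toy bigram (first-order Markov) melody generator — a baseline, not a deep
# model. It illustrates next-token modeling on symbolic music data.

training_melodies = [
    ["D4", "F4", "G4", "A4", "G4", "F4", "D4"],
    ["D4", "E4", "F4", "G4", "F4", "E4", "D4"],
]

# Count transitions: each note maps to the list of notes observed after it,
# with "<end>" marking the end of a melody.
transitions = defaultdict(list)
for mel in training_melodies:
    for a, b in zip(mel, mel[1:] + ["<end>"]):
        transitions[a].append(b)

def generate(start="D4", max_len=16, seed=0):
    """Sample a melody by repeatedly drawing a successor of the current note."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < max_len:
        nxt = rng.choice(transitions[out[-1]])
        if nxt == "<end>":
            break
        out.append(nxt)
    return out

print(generate())
```

A Transformer trained in PyTorch or TensorFlow does the same job (predicting the next token) with far more context than one preceding note, which is why it produces much more coherent phrases.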

4. Research and Inspiration

  • Papers and Projects:
    • “Music Transformer” by Google Magenta: Focuses on generating music with attention-based models.
    • “DeepBach” by Gaëtan Hadjeres: A model generating Bach-style chorales, which can be adapted for Gregorian chant.
  • Existing Models:
    • Explore pre-trained models on symbolic music data. For example, MuseNet or models from the Magenta library.

There are so many music generation models to choose from that I don’t know which one is the easiest to use…
No matter which model you use or train, you should be fine as long as you learn how to work with transformers, so you might want to try the HF Audio Course.
