Polymath is a tool that uses machine learning to convert any music library into a music production sample library. It automatically separates songs into stems, quantizes them to a shared tempo and beat-grid, analyzes musical structure, key, and other attributes, and converts audio to MIDI. The result is a searchable sample library that streamlines the workflow for music producers, DJs, and ML audio developers. Polymath is released under the MIT license and can be installed with pip or run via the provided Dockerfile.
⚡Top 5 Polymath Features:
Music Source Separation: Polymath uses the Demucs neural network to separate songs into stems (drums, bass, vocals, and other instruments).
Music Structure Segmentation/Labeling: The sf_segmenter neural network is used for music structure segmentation and labeling.
Music Pitch Tracking and Key Detection: The Crepe neural network is responsible for music pitch tracking and key detection.
Music to MIDI transcription: The Basic Pitch neural network is used for music to MIDI transcription.
Music Quantization and Alignment: Polymath uses pyrubberband for music quantization and alignment.
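To make the pitch-tracking and MIDI-transcription steps concrete, a per-frame fundamental frequency (the kind of output a tracker like Crepe produces) maps to a MIDI note number via the standard equal-temperament formula. This is a minimal sketch of that arithmetic, not Polymath's actual API; the helper name is illustrative.

```python
import math

def freq_to_midi(frequency_hz: float) -> int:
    """Map a fundamental frequency to the nearest MIDI note number,
    using the equal-temperament reference A4 = 440 Hz = MIDI note 69."""
    return round(69 + 12 * math.log2(frequency_hz / 440.0))

# A 440 Hz tone is concert A4 (MIDI 69); 261.63 Hz is middle C (MIDI 60).
print(freq_to_midi(440.0))   # → 69
print(freq_to_midi(261.63))  # → 60
```

Running this conversion over every tracked frame, then merging consecutive frames with the same note number into note events, is the essence of a monophonic pitch-to-MIDI pass.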
⚡Top 5 Polymath Use Cases:
Music Production: Polymath simplifies the process of creating a large music dataset for training generative models.
DJing: It makes it easy to create a polished, hour-long DJ mash-up set, using the search capability to discover related tracks.
ML Audio Development: Polymath produces consistently analyzed, stem-separated, tempo-aligned audio, reducing the preprocessing work needed for ML audio experiments.
Music Analysis: Each song is analyzed automatically just once (a relatively slow step); after that, its results live in the database and can be retrieved rapidly.
Music Quantization: Polymath quantizes songs to the same tempo and beat-grid and saves them to the folder "/processed".
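Quantizing songs to a shared tempo boils down to time-stretching each track by the ratio of the target BPM to its detected BPM, the rate one would pass to a time-stretcher such as pyrubberband's `time_stretch` (where a rate above 1.0 speeds audio up). The sketch below shows only that arithmetic under this assumption; the function names are illustrative, not Polymath's API.

```python
def stretch_rate(source_bpm: float, target_bpm: float) -> float:
    """Rate needed to bring a track at source_bpm onto a target_bpm grid.
    A rate above 1.0 speeds the audio up; below 1.0 slows it down."""
    return target_bpm / source_bpm

def stretched_duration(duration_s: float, rate: float) -> float:
    """New duration after stretching: speeding up shortens the audio."""
    return duration_s / rate

# Bring a 120 BPM track onto a 150 BPM grid.
rate = stretch_rate(120.0, 150.0)       # 1.25 → play 25% faster
print(stretched_duration(200.0, rate))  # a 200 s track becomes 160 s
```

Because every track ends up on the same tempo and beat-grid, stems from different songs can be layered or swapped without drifting out of sync.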