Starter kit and submission template for the Music Demixing Challenge.

Overview


Music Demixing Challenge - Starter Kit

👉 Challenge page

Discord

This repository is the Music Demixing Challenge Submission template and Starter kit!

Clone the repository to compete now!

This repository contains:

  • Documentation on how to submit your models to the leaderboard
  • Best practices and information on how your submission is evaluated
  • Starter code for you to get started!

Table of Contents

  1. Competition Procedure
  2. How to access and use the dataset
  3. How to start participating
  4. How do I specify my software runtime / dependencies?
  5. What should my code structure be like?
  6. How to make a submission
  7. Other concepts
  8. Important links

Competition Procedure

The Music Demixing (MDX) Challenge is an opportunity for researchers and machine learning enthusiasts to test their skills by creating a system able to perform audio source separation.

In this challenge, you will train your models locally and then upload them to AIcrowd (via git) to be evaluated.

The following is a high-level description of how this process works:

  1. Sign up to join the competition on the AIcrowd website.
  2. Clone this repo and start developing your solution.
  3. Train your models for audio separation and write your prediction code in test.py (a rough sketch of such a prediction class follows this list).
  4. Submit your trained models to AIcrowd Gitlab for evaluation (full instructions below). The automated evaluation setup will evaluate the submissions against the test dataset to compute and report the metrics on the leaderboard of the competition.
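
For orientation, a minimal prediction class might look like the sketch below. In the actual starter kit your class must derive from MusicDemixingPredictor (see test.py); the class name, the prediction_setup / prediction method names, and the use of soundfile here are assumptions made for illustration only.

import soundfile as sf


class CopyMixturePredictor:
    """Toy sketch: writes a scaled copy of the mixture as every stem estimate."""

    def prediction_setup(self):
        # Load your trained model / weights here; this toy sketch needs none.
        pass

    def prediction(self, mixture_file_path, bass_file_path, drums_file_path,
                   other_file_path, vocals_file_path):
        mixture, rate = sf.read(mixture_file_path)
        estimate = mixture / 4.0  # a real model would separate the four sources
        for path in (bass_file_path, drums_file_path, other_file_path, vocals_file_path):
            sf.write(path, estimate, rate)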

How to access and use the dataset

You may train your system either exclusively on the training part of the MUSDB18-HQ dataset or on data of your choice. Depending on the data used, you will be eligible for different leaderboards.

👉 Download MUSDB18-HQ dataset

If you are using an external dataset, please declare it in your aicrowd.json:

{
  [...],
  "external_dataset_used": true
}

The MUSDB18 dataset contains 150 songs (100 songs in train and 50 songs in test) together with their separated stems, laid out as follows:

|
├── train
│   ├── A Classic Education - NightOwl
│   │   ├── bass.wav
│   │   ├── drums.wav
│   │   ├── mixture.wav
│   │   ├── other.wav
│   │   └── vocals.wav
│   └── ANiMAL - Clinic A
│       ├── bass.wav
│       ├── drums.wav
│       ├── mixture.wav
│       ├── other.wav
│       └── vocals.wav
[...]

Here mixture.wav is the original song on which you need to perform audio source separation, while bass.wav, drums.wav, other.wav and vocals.wav are the separated stems provided for training purposes.
Please note again: to be eligible for Leaderboard A, you are only allowed to train on the songs in train.
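
As a quick sanity check after downloading, you can load one song's mixture and stems, e.g. with soundfile (a minimal sketch; the data/train location and the soundfile dependency are assumptions based on the layout above):

import os
import soundfile as sf

song_dir = os.path.join("data", "train", "A Classic Education - NightOwl")

# mixture.wav is the input to your system; the four stems are the training targets.
mixture, rate = sf.read(os.path.join(song_dir, "mixture.wav"))
stems = {name: sf.read(os.path.join(song_dir, name + ".wav"))[0]
         for name in ("bass", "drums", "other", "vocals")}

print("mixture:", mixture.shape, "at", rate, "Hz")  # stereo -> (num_samples, 2)
for name, audio in stems.items():
    print(name + ":", audio.shape)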

How to start participating

Setup

  1. Add your SSH key to AIcrowd GitLab

You can add your SSH Keys to your GitLab account by going to your profile settings here. If you do not have SSH Keys, you will first need to generate one.

  2. Clone the repository

    git clone git@gitlab.aicrowd.com:AIcrowd/music-demixing-challenge-starter-kit.git
    
  3. Install competition-specific dependencies!

    cd music-demixing-challenge-starter-kit
    pip3 install -r requirements.txt
    
  4. Try out the random-prediction baseline in test.py (see the commands below).
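
For example, assuming your local dataset copy lives under data/ as described in DATASET.md, a quick local run might look like this (see LOCAL_RUN.md for full details):

    ./utility/verify_or_download_data.sh   # verify / download the local dataset copy
    python3 test.py                        # run the random-prediction baseline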

How do I specify my software runtime / dependencies?

We accept submissions with custom runtimes, so you are free to pick whichever libraries or frameworks you need.

The configuration files typically include requirements.txt (pypi packages), environment.yml (conda environment), apt.txt (apt packages) or even your own Dockerfile.

You can find detailed information about this in the 👉 RUNTIME.md file.
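
For instance, a minimal requirements.txt plus apt.txt pair could look like the sketch below; the exact packages are illustrative only, so pin whatever your model actually needs (libsndfile1 is the system library commonly required by soundfile):

# requirements.txt
numpy
soundfile
torch

# apt.txt
libsndfile1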

What should my code structure be like?

Please follow the example structure in the starter kit for your code. The different files and directories have the following meaning:

.
├── aicrowd.json           # Submission meta information - like your username
├── apt.txt                # Packages to be installed inside the docker image
├── data                   # Your local dataset copy - you don't need to upload it (read DATASET.md)
├── requirements.txt       # Python packages to be installed
├── test.py                # IMPORTANT: Your testing/prediction code, must be derived from MusicDemixingPredictor (example in test.py)
└── utility                # Utility scripts for a smoother experience
    ├── docker_build.sh
    ├── docker_run.sh
    ├── environ.sh
    └── verify_or_download_data.sh

Finally, you must specify an AIcrowd submission JSON in aicrowd.json to be scored!

The aicrowd.json of each submission should contain the following content:

{
  "challenge_id": "evaluations-api-music-demixing",
  "authors": ["your-aicrowd-username"],
  "description": "(optional) description about your awesome agent",
  "external_dataset_used": false
}

This JSON is used to map your submission to the challenge - so please remember to use the correct challenge_id as specified above.
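
A quick way to catch a malformed aicrowd.json before pushing is to parse it locally:

    python3 -c "import json; print(json.load(open('aicrowd.json')))"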

How to make a submission

👉 SUBMISSION.md

Best of Luck 🎉 🎉

Other Concepts

Time constraints

You need to make sure that your model can perform audio separation on each song within 4 minutes, otherwise the submission will be marked as failed.
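
A simple way to check this locally is to time your prediction on a full-length song; in the sketch below, separate_song is a hypothetical stand-in for whatever entry point your model exposes (240 seconds corresponds to the 4-minute limit):

import time

def separate_song(mixture_path):
    """Hypothetical placeholder for your actual separation entry point."""
    pass

start = time.perf_counter()
separate_song("data/train/A Classic Education - NightOwl/mixture.wav")
print(f"Separation took {time.perf_counter() - start:.1f} s (limit: 240 s per song)")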

Local Run

👉 LOCAL_RUN.md

Contributing

๐Ÿ™ You can share your solutions or any other baselines by contributing directly to this repository by opening merge request.

  • Add your implementation as test_<approach-name>.py
  • Test it out using python test_<approach-name>.py
  • Add any documentation for your approach at the top of your file.
  • Import it in predict.py
  • Create a merge request! 🎉 🎉 🎉

Contributors

📎 Important links

💪 Challenge Page: https://www.aicrowd.com/challenges/music-demixing-challenge-ismir-2021

๐Ÿ—ฃ๏ธ  Discussion Forum: https://www.aicrowd.com/challenges/music-demixing-challenge-ismir-2021/discussion

๐Ÿ†  Leaderboard: https://www.aicrowd.com/challenges/music-demixing-challenge-ismir-2021/leaderboards
