---
title: NeuroMusicLab
emoji: 🧠🎵
colorFrom: indigo
colorTo: red
sdk: gradio
pinned: false
license: mit
short_description: A demo for EEG-based music composition and manipulation.
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# EEG Motor Imagery Music Composer

A user-friendly, accessible neuro-music studio for motor rehabilitation and creative exploration. Compose and remix music using EEG motor imagery signals—no musical experience required!

## Features

- **Automatic Composition:** Layer musical stems (bass, drums, instruments, vocals) by imagining left/right hand or leg movements. Each correct, high-confidence prediction adds a new sound.
- **DJ Mode:** After all four layers are added, apply real-time audio effects (Echo, Low Pass, Compressor, Fade In/Out) to remix your composition using new brain commands (see the echo sketch after this list).
- **Seamless Playback:** All completed layers play continuously, with smooth transitions and effect toggling.
- **Manual Classifier:** Test the classifier on individual movements and visualize EEG data, class probabilities, and the confusion matrix.
- **Accessible UI:** Built with Gradio for easy use in a browser or on Hugging Face Spaces.
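
The effect implementations live in `sound_control.py`. As a rough illustration of the kind of processing involved, here is a minimal echo sketch in NumPy; the function name and parameters are illustrative, not the app's actual API, and it assumes float samples normalized to [-1, 1]:

```python
import numpy as np

def apply_echo(samples: np.ndarray, sr: int,
               delay_s: float = 0.25, decay: float = 0.4) -> np.ndarray:
    """Mix a delayed, attenuated copy of the signal back onto itself."""
    delay = int(sr * delay_s)              # delay expressed in samples
    out = samples.astype(np.float32)       # astype copies, so `samples` is untouched
    if 0 < delay < len(out):
        out[delay:] += decay * out[:len(out) - delay]  # add the echoed tail
    return np.clip(out, -1.0, 1.0)         # keep the mix within full scale
```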

## How It Works

1. **Compose:**
   - Click "Start Composing" and follow the on-screen prompts.
   - Imagine the prompted movement (left hand, right hand, left leg, right leg) to add musical layers.
   - Each correct, confident prediction adds a new instrument to the mix.
2. **DJ Mode:**
   - After all four layers are added, enter DJ Mode.
   - Imagine movements in a specific order to toggle effects on each stem.
   - Effects are sticky: a toggle fires only on every 4th repetition of a command, which keeps playback smooth (see the loop sketch after this list).
3. **Manual Classifier:**
   - Switch to the Manual Classifier tab to test the model on random epochs for each movement.
   - Visualize predictions, class probabilities, and the confusion matrix.
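
A minimal sketch of the gating logic described above, assuming a classifier object with a scikit-learn-style `predict_proba` method; the threshold, stem order, and `toggle_effect` helper are hypothetical stand-ins for the real logic in `app.py` and `classifier.py`:

```python
import numpy as np

CONF_THRESHOLD = 0.8                 # assumed confidence cutoff (hypothetical)
STEM_ORDER = ["bass", "drums", "instruments", "vocals"]
TOGGLE_EVERY = 4                     # effects flip only on every 4th repetition

active_stems: list[str] = []
repeat_counts: dict[int, int] = {}

def on_epoch(epoch, target_class, classifier, toggle_effect):
    """Handle one classified EEG epoch in either compose or DJ phase."""
    probs = classifier.predict_proba(epoch.reshape(1, -1))[0]
    pred = int(np.argmax(probs))
    confident = probs[pred] >= CONF_THRESHOLD

    if len(active_stems) < len(STEM_ORDER):
        # Compose phase: a correct, confident prediction adds the next stem.
        if confident and pred == target_class:
            active_stems.append(STEM_ORDER[len(active_stems)])
    elif confident:
        # DJ phase: count repeated confident commands per class and only
        # toggle the mapped effect on every TOGGLE_EVERY-th repetition.
        repeat_counts[pred] = repeat_counts.get(pred, 0) + 1
        if repeat_counts[pred] % TOGGLE_EVERY == 0:
            toggle_effect(STEM_ORDER[pred])   # hypothetical helper
```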

## Project Structure

```
app.py                # Main Gradio app and UI logic
sound_control.py      # Audio processing and effect logic
classifier.py         # EEG classifier
config.py             # Configuration and constants
data_processor.py     # EEG data loading and preprocessing
requirements.txt      # Python dependencies
.gitignore            # Files/folders to ignore in git
SoundHelix-Song-6/    # Demo audio stems (bass, drums, instruments, vocals)
```

## Quick Start

1. **Install dependencies:**
   ```bash
   pip install -r requirements.txt
   ```
2. **Add required data:**
   - Ensure the `SoundHelix-Song-6/` folder with all audio stems (`bass.wav`, `drums.wav`, `instruments.wav` or `other.wav`, `vocals.wav`) is present and tracked in your repository.
   - Include at least one demo EEG `.mat` file (as referenced by `DEMO_DATA_PATHS` in `config.py`) so the app runs out of the box. Place it in the expected location and make sure it is tracked by git (see the config sketch after this list).
3. **Run the app:**
   ```bash
   python app.py
   ```
4. **Open in browser:**
   - Go to http://localhost:7860 (or the port shown in the terminal).
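
For reference, a plausible shape for the demo-data configuration; the exact keys and paths in `config.py` may differ, so treat this purely as a sketch:

```python
# Hypothetical layout; check config.py for the real keys and paths.
DEMO_DATA_PATHS = {
    "left_hand": "data/demo_left_hand.mat",
    "right_hand": "data/demo_right_hand.mat",
}

# Loading one recording with SciPy (a common choice for MATLAB files):
from scipy.io import loadmat

mat = loadmat(DEMO_DATA_PATHS["left_hand"])
print(mat.keys())  # inspect which arrays the .mat file exposes
```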

## Deployment

- Ready for Hugging Face Spaces or any Gradio-compatible cloud platform.
- Minimal `.gitignore` and a clean repo for easy deployment.
- Include all required audio stems and at least two demo `.mat` EEG files in your deployment for full functionality.

## Credits

- Developed by Sofia Fregni. Model training by Katarzyna Kuhlmann. Deployment by Hamed Koochaki Kelardeh.
- Audio stems: SoundHelix

## License

MIT License. See the `LICENSE` file for details.