---
title: NeuroMusicLab
emoji: 🧠🎵
colorFrom: indigo
colorTo: red
sdk: docker
pinned: false
license: mit
short_description: A demo for EEG-based music composition and manipulation.
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# EEG Motor Imagery Music Composer

A user-friendly, accessible neuro-music studio for motor rehabilitation and creative exploration. Compose and remix music using EEG motor imagery signals; no musical experience required!

## Features

- **Automatic Composition:** Layer musical stems (bass, drums, instruments, vocals) by imagining left/right hand or leg movements. Each correct, high-confidence prediction adds a new sound.
- **DJ Mode:** After all four layers are added, apply real-time audio effects (Echo, Low Pass, Compressor, Fade In/Out) to remix your composition using new brain commands (a rough sketch of such effects follows this list).
- **Seamless Playback:** All completed layers play continuously, with smooth transitions and effect toggling.
- **Manual Classifier:** Test the classifier on individual movements and visualize EEG data, class probabilities, and the confusion matrix.
- **Accessible UI:** Built with Gradio for easy use in the browser or on Hugging Face Spaces.
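
For a rough idea of how one of these effects could be implemented, here is a minimal sketch of a low-pass filter and an echo applied to a stem array. The function names, parameters, and the use of NumPy/SciPy are assumptions for illustration; the actual effect logic lives in `sound_control.py` and may differ.

```python
# Illustrative sketch only; not the project's actual effect implementation.
import numpy as np
from scipy.signal import butter, sosfilt

def apply_low_pass(stem: np.ndarray, sample_rate: int, cutoff_hz: float = 800.0) -> np.ndarray:
    """Attenuate frequencies above cutoff_hz; works for mono (N,) or stereo (N, 2) stems."""
    sos = butter(4, cutoff_hz, btype="low", fs=sample_rate, output="sos")
    return sosfilt(sos, stem, axis=0)  # filter along the time axis

def apply_echo(stem: np.ndarray, sample_rate: int, delay_s: float = 0.3, decay: float = 0.5) -> np.ndarray:
    """Mix a delayed, attenuated copy of the stem back onto itself."""
    delay = int(delay_s * sample_rate)
    out = stem.astype(np.float64)
    out[delay:] += decay * stem[:-delay]
    return out
```

Compressor and fade effects would follow the same pattern: a pure function that maps a stem array to a processed stem array, so an effect can be toggled on one stem without touching the other layers.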

## How It Works

1. **Compose:**
   - Click "Start Composing" and follow the on-screen prompts.
   - Imagine the prompted movement (left hand, right hand, left leg, right leg) to add musical layers.
   - Each correct, confident prediction adds a new instrument to the mix.
2. **DJ Mode:**
   - After all four layers are added, enter DJ mode.
   - Imagine movements in a specific order to toggle effects on each stem.
   - Effects are sticky and only toggle every 4th repetition for smoothness (see the sketch after this list).
3. **Manual Classifier:**
   - Switch to the Manual Classifier tab to test the model on random epochs for each movement.
   - Visualize predictions, probabilities, and the confusion matrix.
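
As a rough illustration of this flow, the sketch below shows how a single classifier prediction could either add the next layer (compose phase) or count toward a sticky effect toggle that fires every 4th repetition (DJ phase). The threshold value, layer order, and all names here are assumptions; the real logic lives in `app.py` and `sound_control.py`.

```python
# Hypothetical control-flow sketch; names and values are illustrative, not from app.py.
CONFIDENCE_THRESHOLD = 0.7                     # assumed cutoff for "high confidence"
LAYER_ORDER = ["bass", "drums", "instruments", "vocals"]

active_layers: list[str] = []                  # stems currently in the mix
dj_repetitions: dict[str, int] = {}            # correct DJ predictions seen per command

def on_prediction(target: str, predicted: str, confidence: float) -> None:
    """Handle one motor-imagery prediction during composition or DJ mode."""
    if predicted != target or confidence < CONFIDENCE_THRESHOLD:
        return  # only correct, high-confidence predictions change anything

    if len(active_layers) < len(LAYER_ORDER):
        # Compose phase: each hit adds the next stem to the mix.
        active_layers.append(LAYER_ORDER[len(active_layers)])
    else:
        # DJ phase: effects are sticky and only toggle every 4th correct repetition.
        dj_repetitions[target] = dj_repetitions.get(target, 0) + 1
        if dj_repetitions[target] % 4 == 0:
            toggle_effect(target)

def toggle_effect(command: str) -> None:
    """Placeholder for switching the effect mapped to this command on or off."""
    print(f"Toggling effect for {command}")
```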

## Project Structure

```
app.py               # Main Gradio app and UI logic
sound_control.py     # Audio processing and effect logic
classifier.py        # EEG classifier
config.py            # Configuration and constants
data_processor.py    # EEG data loading and preprocessing
requirements.txt     # Python dependencies
.gitignore           # Files/folders to ignore in git
SoundHelix-Song-6/   # Demo audio stems (bass, drums, instruments, vocals)
```

## Quick Start

1. **Install dependencies:**
   ```bash
   pip install -r requirements.txt
   ```
2. **Add required data:**
   - Ensure the `SoundHelix-Song-1/` folder with all audio stems (`bass.wav`, `drums.wav`, `instruments.wav` or `other.wav`, `vocals.wav`) is present and tracked in your repository.
   - Include at least one demo EEG `.mat` file (as referenced in `DEMO_DATA_PATHS` in `config.py`) so the app runs out of the box. Place it in the expected location and make sure it is tracked by git (see the sketch after these steps).
3. **Run the app:**
   ```bash
   python app.py
   ```
4. **Open in browser:**
   - Go to `http://localhost:7867` (or the port shown in the terminal).
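
The sketch below shows the two pieces that steps 2 and 4 refer to: a `DEMO_DATA_PATHS` entry in `config.py` (the variable name comes from this README, but the path shown is hypothetical) and the usual way a Gradio app is bound to a fixed port such as 7867.

```python
# Sketch of the pieces that steps 2 and 4 refer to; paths and UI contents are hypothetical.
import gradio as gr

# config.py: demo EEG recordings bundled with the repo. The variable name comes from
# this README; the example path is hypothetical.
DEMO_DATA_PATHS = [
    "data/demo_subject_01.mat",
]

# app.py: exposing the Gradio UI on a fixed port, which is where the
# http://localhost:7867 address in step 4 comes from.
with gr.Blocks() as demo:
    gr.Markdown("EEG Motor Imagery Music Composer")

if __name__ == "__main__":
    demo.launch(server_name="0.0.0.0", server_port=7867)
```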

## Deployment

- Ready for Hugging Face Spaces or any Gradio-compatible cloud platform.
- Minimal `.gitignore` and a clean repo for easy deployment.
- Make sure your deployment includes all required audio stems and at least two demo `.mat` EEG files for full functionality.

## Credits

- Developed by Sofia Fregni. Model training by Katarzyna Kuhlmann. Deployment by Hamed Koochaki Kelardeh.
- Audio stems: [SoundHelix](https://www.soundhelix.com/)

## License

MIT License - see LICENSE file for details.