sofieff committed
Commit 84ae57b · verified · Parent: 8ed98b9

Update README.md

Files changed (1): README.md (+81 −81)
README.md CHANGED
---
title: NeuroMusicLab
emoji: 🧠🎵
colorFrom: indigo
colorTo: red
sdk: docker
pinned: false
license: mit
short_description: A demo for EEG-based music composition and manipulation.
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# EEG Motor Imagery Music Composer

A user-friendly, accessible neuro-music studio for motor rehabilitation and creative exploration. Compose and remix music using EEG motor imagery signals—no musical experience required!

## Features

- **Automatic Composition:** Layer musical stems (bass, drums, instruments, vocals) by imagining left/right hand or leg movements. Each correct, high-confidence prediction adds a new sound.
- **DJ Mode:** After all four layers are added, apply real-time audio effects (Echo, Low Pass, Compressor, Fade In/Out) to remix your composition using new brain commands.
- **Seamless Playback:** All completed layers play continuously, with smooth transitions and effect toggling.
- **Manual Classifier:** Test the classifier on individual movements and visualize EEG data, class probabilities, and the confusion matrix.
- **Accessible UI:** Built with Gradio for easy use in the browser or on Hugging Face Spaces.

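The actual effect implementations live in `sound_control.py`. Purely as an illustration of what the Echo effect does to a stem, here is a minimal NumPy sketch (the function name and parameter values are invented, not taken from the repo):

```python
import numpy as np

def apply_echo(stem: np.ndarray, sr: int = 44100,
               delay_s: float = 0.25, decay: float = 0.4) -> np.ndarray:
    """Mix a delayed, attenuated copy of the stem back into itself."""
    delay = int(sr * delay_s)
    out = np.concatenate([stem, np.zeros(delay)])  # room for the echo tail
    out[delay:] += decay * stem                    # delayed, quieter copy
    return out

# A 1-second 440 Hz "stem" gains a 0.25 s echo tail.
sr = 44100
t = np.linspace(0.0, 1.0, sr, endpoint=False)
echoed = apply_echo(np.sin(2 * np.pi * 440.0 * t), sr)
```

The same pattern (copy, transform, mix) generalizes to the other listed effects.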
## How It Works

1. **Compose:**
   - Click "Start Composing" and follow the on-screen prompts.
   - Imagine the prompted movement (left hand, right hand, left leg, right leg) to add musical layers.
   - Each correct, confident prediction adds a new instrument to the mix.
2. **DJ Mode:**
   - After all four layers are added, enter DJ mode.
   - Imagine movements in a specific order to toggle effects on each stem.
   - Effects are sticky and only toggle every 4th repetition for smoothness.
3. **Manual Classifier:**
   - Switch to the Manual Classifier tab to test the model on random epochs for each movement.
   - Visualize predictions, probabilities, and the confusion matrix.

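The two gating rules above (add a stem only on a correct, high-confidence prediction; flip a DJ effect only on every 4th repetition) can be sketched as follows. This is a hypothetical reconstruction, not the repo's code; the real logic lives in `app.py`, and the 0.8 threshold is an assumption:

```python
CONF_THRESHOLD = 0.8  # assumed value, not taken from the repo
STEMS = ["bass", "drums", "instruments", "vocals"]

def update_layers(layers, prompted, predicted, confidence):
    """Add the next stem only on a correct, high-confidence prediction."""
    if (predicted == prompted and confidence >= CONF_THRESHOLD
            and len(layers) < len(STEMS)):
        return layers + [STEMS[len(layers)]]
    return layers

def dj_toggle(repetitions, effect_on):
    """DJ mode: effects are sticky, so state flips only every 4th repetition."""
    repetitions += 1
    if repetitions % 4 == 0:
        effect_on = not effect_on
    return repetitions, effect_on

layers = update_layers([], "left_hand", "left_hand", 0.92)      # adds "bass"
layers = update_layers(layers, "right_hand", "left_leg", 0.95)  # wrong class: no change
```

Gating on both correctness and confidence keeps spurious classifier outputs from adding layers, and the modulo counter is what makes the effects feel "sticky".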
## Project Structure

```
app.py              # Main Gradio app and UI logic
sound_control.py    # Audio processing and effect logic
classifier.py       # EEG classifier
config.py           # Configuration and constants
data_processor.py   # EEG data loading and preprocessing
requirements.txt    # Python dependencies
.gitignore          # Files/folders to ignore in git
SoundHelix-Song-6/  # Demo audio stems (bass, drums, instruments, vocals)
```

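For orientation, `config.py` centralizes constants such as the demo data paths referenced in Quick Start. A hypothetical shape (only `DEMO_DATA_PATHS` is actually named in this README; every other name and value here is invented):

```python
# Hypothetical config.py sketch; only DEMO_DATA_PATHS appears in this README.
DEMO_DATA_PATHS = ["demo_subject.mat"]   # illustrative filename
STEM_FOLDER = "SoundHelix-Song-6"        # demo stems folder from the tree above
CLASSES = ["left_hand", "right_hand", "left_leg", "right_leg"]
SERVER_PORT = 7867                       # port used in Quick Start step 4
```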
## Quick Start

1. **Install dependencies:**
   ```bash
   pip install -r requirements.txt
   ```
2. **Add required data:**
   - Ensure the `SoundHelix-Song-1/` folder with all audio stems (`bass.wav`, `drums.wav`, `instruments.wav` or `other.wav`, `vocals.wav`) is present and tracked in your repository.
   - Include at least one demo EEG `.mat` file (as referenced by `DEMO_DATA_PATHS` in `config.py`) so the app runs out of the box. Place it in the correct location and ensure it is tracked by git.
3. **Run the app:**
   ```bash
   python app.py
   ```
4. **Open in browser:**
   - Go to `http://localhost:7867` (or the port shown in the terminal).

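Step 2's audio assets can be sanity-checked before launching. A small helper you could run from the repo root (this script is not part of the repo; the stem names follow step 2):

```python
from pathlib import Path

def missing_stems(stem_dir: str) -> list[str]:
    """List required stem files that are absent from stem_dir.

    `instruments.wav` may alternatively be named `other.wav`.
    """
    d = Path(stem_dir)
    missing = [s for s in ("bass.wav", "drums.wav", "vocals.wav")
               if not (d / s).exists()]
    if not ((d / "instruments.wav").exists() or (d / "other.wav").exists()):
        missing.append("instruments.wav (or other.wav)")
    return missing

print(missing_stems("SoundHelix-Song-1"))  # [] when all stems are in place
```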
69
+ ## Deployment
70
+
71
+ - Ready for Hugging Face Spaces or any Gradio-compatible cloud platform.
72
+ - Minimal `.gitignore` and clean repo for easy deployment.
73
+ - Make sure to include all required audio stems and at least two demo `.mat` EEG file in your deployment for full functionality.
74
+
## Credits

- Developed by Sofia Fregni. Model training by Katarzyna Kuhlmann. Deployment by Hamed Koochaki Kelardeh.
- Audio stems: [SoundHelix](https://www.soundhelix.com/)

## License

MIT License - see the LICENSE file for details.