sofieff committed
Commit c6beb41 · Parent: f8bae12

cleaned up

.gitignore CHANGED
@@ -17,14 +17,10 @@ __pycache__/
  .DS_Store
 
  # Project specific
- *.wav
- *.mp3
  *.pth
- *.mat
+
  app.log
 
  # Data and generated folders
- data/
- sounds/
  otherfiles/
  source/
 
LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2025 Sofia Fregni
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
README.md CHANGED
@@ -1,78 +1,66 @@
- # 🧠 EEG Motor Imagery Music Composer
-
- A sophisticated machine learning application that transforms brain signals into music compositions using motor imagery classification. This system uses a trained ShallowFBCSPNet model to classify different motor imagery tasks from EEG data and creates layered musical compositions based on the classification results.
-
- ## 🎯 Features
-
- - **Real-time EEG Classification**: Uses ShallowFBCSPNet architecture for motor imagery classification
- - **Music Composition**: Automatically creates layered music compositions from classification results
- - **Interactive Gradio Interface**: User-friendly web interface for real-time interaction
- - **Six Motor Imagery Classes**: Left/right hand, left/right leg, tongue, and neutral states
- - **Sound Mapping**: Each motor imagery class is mapped to different musical instruments
- - **Composition Management**: Save, clear, and manage your musical creations
-
- ## 🏗️ Architecture
-
- ### Project Structure
- ```
- ├── app.py              # Main Gradio application
- ├── classifier.py       # Motor imagery classifier with ShallowFBCSPNet
- ├── data_processor.py   # EEG data loading and preprocessing
- ├── sound_library.py    # Sound management and composition system
- ├── config.py           # Configuration settings
- ├── requirements.txt    # Python dependencies
- ├── SoundHelix-Song-1/  # Audio files for different instruments
- │   ├── bass.wav
- │   ├── drums.wav
- │   ├── other.wav
- │   └── vocals.wav
- └── src/                # Additional source files
-     ├── model.py
-     ├── preprocessing.py
-     ├── train.py
-     └── visualize.py
- ```
-
- ### System Components
-
- 1. **EEGDataProcessor** (`data_processor.py`)
-    - Loads and processes .mat EEG files
-    - Handles epoching and preprocessing
-    - Simulates real-time data for demo purposes
-
- 2. **MotorImageryClassifier** (`classifier.py`)
-    - Implements ShallowFBCSPNet model
-    - Performs real-time classification
-    - Provides confidence scores and probability distributions
-
- 3. **SoundManager** (`sound_library.py`)
-    - Maps classifications to audio files
-    - Manages composition layers
-    - Handles audio file loading and playback
-
- 4. **Gradio Interface** (`app.py`)
-    - Web-based user interface
-    - Real-time visualization
-    - Composition management tools
-
- ## 🚀 Quick Start
-
- ### Requirements
-
- Python 3.9–3.11 recommended. Install dependencies:
-
- ```bash
- python -m pip install -r requirements.txt
- ```
-
- ### How to run (Gradio)
-
- Local launch:
-
- ```bash
- python app.py
- ```
-
- This starts a server on `http://127.0.0.1:7860` by default.
-
- #
+ # EEG Motor Imagery Music Composer
+
+ A user-friendly, accessible neuro-music studio for motor rehabilitation and creative exploration. Compose and remix music using EEG motor imagery signals. No musical experience required!
+
+ ## Features
+
+ - **Automatic Composition:** Layer musical stems (bass, drums, instruments, vocals) by imagining left/right hand or leg movements. Each correct, high-confidence prediction adds a new sound.
+ - **DJ Mode:** After all four layers are added, apply real-time audio effects (Echo, Low Pass, Compressor, Fade In/Out) to remix your composition using new brain commands.
+ - **Seamless Playback:** All completed layers play continuously, with smooth transitions and effect toggling.
+ - **Manual Classifier:** Test the classifier on individual movements and visualize EEG data, class probabilities, and the confusion matrix.
+ - **Accessible UI:** Built with Gradio for easy use in the browser or on Hugging Face Spaces.
+
+ ## How It Works
+
+ 1. **Compose:**
+    - Click "Start Composing" and follow the on-screen prompts.
+    - Imagine the prompted movement (left hand, right hand, left leg, right leg) to add musical layers.
+    - Each correct, confident prediction adds a new instrument to the mix (sketched in code after this file's diff).
+ 2. **DJ Mode:**
+    - After all four layers are added, enter DJ mode.
+    - Imagine movements in a specific order to toggle effects on each stem.
+    - Effects are sticky and only toggle on every 4th repetition for smoothness.
+ 3. **Manual Classifier:**
+    - Switch to the Manual Classifier tab to test the model on random epochs for each movement.
+    - Visualize predictions, probabilities, and the confusion matrix.
+
+ ## Project Structure
+
+ ```
+ app.py             # Main Gradio app and UI logic
+ sound_manager.py   # Audio processing and effect logic
+ classifier.py      # EEG classifier
+ config.py          # Configuration and constants
+ data_processor.py  # EEG data loading and preprocessing
+ requirements.txt   # Python dependencies
+ .gitignore         # Files/folders to ignore in git
+ data/              # Demo EEG recordings (.mat, tracked with Git LFS)
+ sounds/            # Demo audio stems (SoundHelix-Song-6 bass, drums, instruments, vocals)
+ ```
+
+ ## Quick Start
+
+ 1. **Install dependencies:**
+    ```bash
+    pip install -r requirements.txt
+    ```
+ 2. **Add required data:**
+    - Ensure all four audio stems (`SoundHelix-Song-6_bass.wav`, `SoundHelix-Song-6_drums.wav`, `SoundHelix-Song-6_instruments.wav`, `SoundHelix-Song-6_vocals.wav`) are present in `sounds/` and tracked in your repository.
+    - Include at least one demo EEG `.mat` file (as referenced in `DEMO_DATA_PATHS` in `config.py`) for the app to run out of the box. Place it in the correct location and ensure it is tracked by git.
+ 3. **Run the app:**
+    ```bash
+    python app.py
+    ```
+ 4. **Open in browser:**
+    - Go to `http://localhost:7867` (or the port shown in the terminal).
+
+ ## Deployment
+
+ - Ready for Hugging Face Spaces or any Gradio-compatible cloud platform.
+ - Minimal `.gitignore` and clean repo for easy deployment.
+ - Make sure to include all required audio stems and at least two demo `.mat` EEG files in your deployment for full functionality.
+
+ ## Credits
+
+ - Developed by Sofia Fregni. Model training by Kasia. Deployment by Hamed Koochaki Kelardeh.
+ - Audio stems: [SoundHelix](https://www.soundhelix.com/)
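The compose step referenced in "How It Works" maps directly onto the `app.py` and `sound_manager.py` hunks below. A minimal sketch, assuming only the interfaces visible in this commit (`classifier.predict`, `classifier.class_names`, `get_current_target_movement`, `process_classification`, `CONFIDENCE_THRESHOLD`); the surrounding loop scaffolding is illustrative, not the app's literal control flow:

```python
# Illustrative sketch of one compose trial; names mirror the diffed code,
# but this is not the app's exact control flow.
def compose_step(classifier, sound_manager, epoch_data, true_label_name,
                 confidence_threshold):
    """Classify one EEG epoch and add a stem on a confident, correct hit."""
    target = sound_manager.get_current_target_movement()
    if target == "cycle_complete":          # all four stems placed
        return "dj_mode"                    # hand over to DJ mode
    predicted_class, confidence, _probs = classifier.predict(epoch_data)
    predicted_name = classifier.class_names[predicted_class]
    # A stem is added only when the prediction is confident AND correct.
    if confidence > confidence_threshold and predicted_name == true_label_name:
        sound_manager.process_classification(
            predicted_name, confidence, confidence_threshold, force_add=True)
    return target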
app.py CHANGED
@@ -97,21 +97,18 @@ def get_movement_sounds() -> Dict[str, str]:
  # Save to temp file (persistent for this effect state)
  tmp = tempfile.NamedTemporaryFile(delete=False, suffix=f'_{movement}_effect.wav')
  sf.write(tmp.name, processed, sr)
- print(f"DEBUG: Playing PROCESSED audio for {movement}: {tmp.name}")
  get_movement_sounds.audio_cache[movement][True] = tmp.name
  sounds[movement] = tmp.name
  else:
- print(f"DEBUG: Playing ORIGINAL audio for {movement}: {sound_path.resolve()}")
  get_movement_sounds.audio_cache[movement][False] = str(sound_path.resolve())
  sounds[movement] = str(sound_path.resolve())
  get_movement_sounds.last_effect_state[movement] = effect_on
  get_movement_sounds.play_counter[movement] += 1
+
  get_movement_sounds.total_calls += 1
- # Print summary every 20 calls
- if get_movement_sounds.total_calls % 20 == 0:
- print("AUDIO PLAY COUNTS (DJ mode):", dict(get_movement_sounds.play_counter))
  return sounds
 
+
  def create_eeg_plot(eeg_data: np.ndarray, target_movement: str, predicted_name: str, confidence: float, sound_added: bool, ch_names=None) -> plt.Figure:
  '''Create a plot of EEG data with annotations. Plots C3 and C4 channels by name.'''
  if ch_names is None:
@@ -159,7 +156,6 @@ def start_composition():
  if not app_state['composition_active']:
  app_state['composition_active'] = True
  sound_manager.start_new_cycle()
- print(f"DEBUG: [start_composition] current_phase={sound_manager.current_phase}, movements_completed={sound_manager.movements_completed}")
  if app_state['demo_data'] is None:
  return "❌ No data", "❌ No data", "❌ No data", None, None, None, None, None, None, "No EEG data available"
  # Force first trial to always be left_hand/instrumental
@@ -177,11 +173,9 @@ def start_composition():
  true_label_name = classifier.class_names[true_label]
  next_movement = sound_manager.get_current_target_movement()
  if next_movement == "cycle_complete":
- print("DEBUG: [start_composition] Transitioning to DJ mode!")
  return continue_dj_phase()
  predicted_class, confidence, probabilities = classifier.predict(epoch_data)
  predicted_name = classifier.class_names[predicted_class]
- print(f"TRIAL: true_label={true_label_name}, presented_target={next_movement}, predicted={predicted_name}")
  # Only add sound if confidence > threshold, predicted == true label, and true label matches the prompt
  if confidence > CONFIDENCE_THRESHOLD and predicted_name == true_label_name:
  result = sound_manager.process_classification(predicted_name, confidence, CONFIDENCE_THRESHOLD, force_add=True)
@@ -251,7 +245,6 @@ def continue_dj_phase():
  ''' Continue in DJ phase, applying effects and always playing all layered sounds.
  '''
  global app_state
- print(f"DEBUG: [continue_dj_phase] Entered DJ mode. current_phase={sound_manager.current_phase}")
  if not app_state['composition_active']:
  return "❌ Not active", "❌ Not active", "❌ Not active", None, None, None, None, None, None, "Click 'Start Composing' first"
  if app_state['demo_data'] is None:
@@ -441,13 +434,10 @@ def create_interface():
  if len(result) == 8:
  # Pre-DJ mode: add timer and button updates
  if any(isinstance(x, str) and "DJ Mode" in x for x in result):
- print("DEBUG: [timer_tick] DJ mode detected in outputs, stopping timer and showing continue button.")
  return (*result, gr.update(active=False), gr.update(visible=True))
  else:
- print("DEBUG: [timer_tick] Not in DJ mode, continuing trials.")
  return (*result, gr.update(active=True), gr.update(visible=False))
  elif len(result) == 10:
- print("DEBUG: [timer_tick] Already in DJ mode, returning result as is.")
  return tuple(result)
  else:
  raise ValueError(f"Unexpected result length in timer_tick: {len(result)}")
 
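The first `app.py` hunk trims debug prints from the effect-audio cache in `get_movement_sounds`. For readers following along, a minimal sketch of that caching pattern, using `tempfile` and `soundfile` exactly as the diff does; the `apply_effect` callable is a hypothetical stand-in for the effect chain in `sound_manager.py`:

```python
import tempfile
import soundfile as sf

# movement -> {effect_on: playable file path}; mirrors
# get_movement_sounds.audio_cache in the hunk above.
audio_cache: dict = {}

def stem_path(movement: str, sound_path: str, effect_on: bool, apply_effect):
    """Return a path to play, rendering the effected stem once per state."""
    state = audio_cache.setdefault(movement, {})
    if effect_on not in state:
        if effect_on:
            data, sr = sf.read(sound_path)
            processed = apply_effect(data, sr)   # hypothetical effect chain
            # delete=False keeps the temp file alive for the audio player
            tmp = tempfile.NamedTemporaryFile(
                delete=False, suffix=f"_{movement}_effect.wav")
            sf.write(tmp.name, processed, sr)
            state[True] = tmp.name
        else:
            state[False] = sound_path            # original stem, untouched
    return state[effect_on]
```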
classifier.py CHANGED
@@ -71,19 +71,12 @@ class MotorImageryClassifier:
  #self.model.load_state_dict(state_dict)
  self.model.eval()
  self.is_loaded = True
- print(f"✅ Pre-trained model loaded successfully from {self.model_path}")
- except Exception as model_error:
- print(f"⚠️ Pre-trained model found but incompatible: {model_error}")
- print("🔄 Starting LOSO training with available EEG data...")
+ except Exception:
  self.is_loaded = False
  else:
- print(f"❌ Pre-trained model weights not found at {self.model_path}")
- print("🔄 Starting LOSO training with available EEG data...")
  self.is_loaded = False
 
- except Exception as e:
- print(f"❌ Error loading model: {e}")
- print("🔄 Starting LOSO training with available EEG data...")
+ except Exception:
  self.is_loaded = False
 
  def get_model_status(self) -> str:
@@ -135,8 +128,6 @@ class MotorImageryClassifier:
  Trains a model on available data when pre-trained model isn't available.
  """
  try:
- print("🔄 No pre-trained model available. Training new model using LOSO method...")
- print("⏳ This may take a moment - training on real EEG data...")
 
  # Initialize data processor
  processor = EEGDataProcessor()
@@ -181,20 +172,16 @@ class MotorImageryClassifier:
  loss.backward()
  optimizer.step()
 
- if epoch % 5 == 0:
- print(f"LOSO Training - Epoch {epoch}, Loss: {loss.item():.4f}")
 
  # Switch to evaluation mode
  self.model.eval()
  self.is_loaded = True
 
- print("✅ LOSO model trained successfully! Ready for classification.")
 
  # Now make prediction with the trained model
  return self.predict(eeg_data)
 
  except Exception as e:
- print(f"Error in LOSO training: {e}")
  raise RuntimeError(f"Failed to initialize classifier. Neither pre-trained model nor LOSO training succeeded: {e}")
 
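After this commit, `classifier.py` signals load failures only through `is_loaded` instead of console prints, and LOSO training becomes the silent fallback. A minimal sketch of that load-or-fall-back pattern under the same assumptions (path handling and the model object are placeholders; the real class also wires up `predict`):

```python
import os
import torch

def try_load_pretrained(model, model_path: str) -> bool:
    """Load pretrained weights if possible; False means train via LOSO."""
    try:
        if not os.path.exists(model_path):
            return False                     # no weights on disk
        state_dict = torch.load(model_path, map_location="cpu")
        model.load_state_dict(state_dict)
        model.eval()                         # inference mode, as in the diff
        return True
    except Exception:
        # Incompatible or corrupt weights: fail quietly, as the diff now
        # does, and let predict() trigger LOSO training instead.
        return False
```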
data/HaLTSubjectA1602236StLRHandLegTongue.mat ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc8647c55ddcb9854de234a35e4261576ecca957a8381d3ef6554aa209f89231
+ size 39673990
data/HaLTSubjectA1603086StLRHandLegTongue.mat ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:76c9f2eb53f2c1e6192f1bd7082e24c8c5b7c4c362089d3bac6320e340970bb7
+ size 40401694
data/HaLTSubjectA1603106StLRHandLegTongue.mat ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0f2f0d913e29ff150c2929b163e0151e8b2d4f469de2b11985e13114938cb2df
+ size 49162097
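The three `.mat` entries above (and the `.wav` stems later in this commit) are Git LFS pointer files: git tracks only the three-line stub shown, while the multi-megabyte payload is fetched with `git lfs pull`. A minimal sketch of reading such a pointer; the helper name is hypothetical:

```python
def parse_lfs_pointer(path: str) -> dict:
    """Parse a Git LFS pointer stub into its key/value fields."""
    fields = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

# For data/HaLTSubjectA1602236StLRHandLegTongue.mat this yields:
# {'version': 'https://git-lfs.github.com/spec/v1',
#  'oid': 'sha256:fc8647c55ddc...', 'size': '39673990'}
```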
sound_manager.py CHANGED
@@ -126,12 +126,11 @@ class SoundManager:
  # Fixed movement order and mapping
  self.current_movement_sequence = ["left_hand", "right_hand", "left_leg", "right_leg"]
  self.current_sound_mapping = {
- "left_hand": "SoundHelix-Song-4_instruments.wav",
- "right_hand": "SoundHelix-Song-4_bass.wav",
- "left_leg": "SoundHelix-Song-4_drums.wav",
- "right_leg": "SoundHelix-Song-4_vocals.wav"
+ "left_hand": "SoundHelix-Song-6_instruments.wav",
+ "right_hand": "SoundHelix-Song-6_bass.wav",
+ "left_leg": "SoundHelix-Song-6_drums.wav",
+ "right_leg": "SoundHelix-Song-6_vocals.wav"
  }
- print(f"DEBUG: Fixed sound mapping for this cycle: {self.current_sound_mapping}")
  self.movements_completed = set()
  self.current_step = 0
  self._load_sound_files()
@@ -140,7 +139,6 @@ class SoundManager:
  # Always process left_hand last in DJ mode
  incomplete = [m for m in self.active_movements if m not in self.movements_completed]
  if not incomplete:
- print("DEBUG: All movements completed, cycle complete.")
  return "cycle_complete"
  # If in DJ mode, left_hand should be last
  if getattr(self, 'current_phase', None) == 'dj_effects':
@@ -149,7 +147,6 @@ class SoundManager:
  incomplete = [m for m in incomplete if m != 'left_hand']
  import random
  movement = random.choice(incomplete)
- print(f"DEBUG: Next target is {movement}, completed: {self.movements_completed}")
  return movement
 
 
@@ -162,31 +159,28 @@ class SoundManager:
  predicted_class in self.loaded_sounds and
  predicted_class not in self.composition_layers
  ):
- print(f"DEBUG: [FORCE] Adding sound for {predicted_class}")
  sound_info = dict(self.loaded_sounds[predicted_class])
  sound_info['confidence'] = confidence
  self.composition_layers[predicted_class] = sound_info
  self.movements_completed.add(predicted_class)
  result['sound_added'] = True
  else:
- print("DEBUG: [FORCE] Not adding sound. Condition failed.")
+ pass
  else:
  current_target = self.get_current_target_movement()
- print(f"DEBUG: process_classification: predicted={predicted_class}, target={current_target}, confidence={confidence}, completed={self.movements_completed}")
  if (
  predicted_class == current_target and
  confidence >= threshold and
  predicted_class in self.loaded_sounds and
  predicted_class not in self.composition_layers
  ):
- print(f"DEBUG: Adding sound for {predicted_class} (target={current_target})")
  sound_info = dict(self.loaded_sounds[predicted_class])
  sound_info['confidence'] = confidence
  self.composition_layers[predicted_class] = sound_info
  self.movements_completed.add(predicted_class)
  result['sound_added'] = True
  else:
- print("DEBUG: Not adding sound. Condition failed.")
+ pass
  if len(self.movements_completed) >= len(self.active_movements):
  result['cycle_complete'] = True
  self.current_phase = "dj_effects"
@@ -219,16 +213,13 @@ class SoundManager:
  self.dj_effect_counters[movement] += 1
  count = self.dj_effect_counters[movement]
  if count != 1 and (count - 1) % 4 != 0:
- print(f"🎛️ {movement}: Skipped effect toggle (count={count})")
  return {"effect_applied": False, "message": f"Effect for {movement} only toggled at 1, 4, 8, ... (count={count})"}
  # Toggle effect ON
  self.active_effects[movement] = True
  effect_status = "ON"
- print(f"🎛️ {movement}: {effect_status} (brief={brief}) [count={count}]")
  # Schedule effect OFF after duration if brief
  def turn_off_effect():
  self.active_effects[movement] = False
- print(f"🎛️ {movement}: OFF (auto)")
  if brief:
  timer = threading.Timer(duration, turn_off_effect)
  timer.daemon = True
 
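One behavioral detail worth noting in the last hunk: the guard `count != 1 and (count - 1) % 4 != 0` lets an effect toggle at detection counts 1, 5, 9, ... (every 4th after the first), while the user-facing message says "1, 4, 8, ...". A minimal sketch of the cadence gate as actually written:

```python
def effect_toggle_allowed(count: int) -> bool:
    """Inverse of the guard in the hunk above: True when a toggle fires."""
    return not (count != 1 and (count - 1) % 4 != 0)

# Fires at 1, 5, 9, ... rather than the 1, 4, 8, ... named in the message.
assert [c for c in range(1, 12) if effect_toggle_allowed(c)] == [1, 5, 9]
```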
sounds/SoundHelix-Song-6_bass.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6877ac76aacfdb672ac665e96fc1a70bba38ac0625d3259f275a072d53fa0abc
+ size 98657548
sounds/SoundHelix-Song-6_drums.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:37e78d23c940663297b32eec80ec3db0788b8e3d230241d33a244f68c64633fd
+ size 98657548
sounds/SoundHelix-Song-6_instruments.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5c441fb96c015ca02768d76262d4de5379567eba6d79897e8f3b8d1952b206e8
+ size 98657548
sounds/SoundHelix-Song-6_vocals.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fdd13cd7f4e3a9c87aecb03aa5e1278c1b41317f6f99a52549f2781db19fb930
+ size 98657548