README.md
CHANGED

```diff
@@ -1,5 +1,5 @@
 ---
-title: Sise
+title: Sise Challenge Emotional Report
 emoji: 🎤
 colorFrom: yellow
 colorTo: green
@@ -7,56 +7,87 @@ sdk: docker
 pinned: false
 ---
 
-# SISE
-
-This is the Ultimate Challenge for the Master SISE.
-
-## Overview
-
-##
-
-To run this project locally, follow these steps:
-
-```sh
-git clone https://github.com/jdalfons/sise-ultimate-challenge.git
-cd sise-ultimate-challenge
-```
-
-```sh
-pip install -r requirements.txt
-```
-
-```
-
-```sh
-docker run -p 7860:7860 sise-ultimate-challenge
-```
-
-## Usage
-
-To start the Streamlit application, run the following command:
-
-```sh
-streamlit run app.py
-```
+# SISE Ultimate Challenge - Emotional Report
+
+Welcome to **Emotional Report**! This AI-powered application lets users send or record an audio clip 📢, analyzing their emotional state based on vocal tone and speed. The AI predicts whether the emotion falls into one of three categories: **Anger (Colère) 😡, Joy (Joie) 😃, or Neutral (Neutre) 😐**.
+
+Using **Wav2Vec**, a pre-trained AI model, the app not only detects emotions but also attempts to transcribe the speech into text. 🧠🎙️
+
+---
+
+## 🎬 Fun Fact
+
+The name **Emotional Report** is inspired by the movie *Minority Report*, where AI predicts crimes before they happen! 🔮
+This challenge is the **Ultimate Challenge** for Master SISE students. 🏆
+
+---
+
+## 👀 Overview
+
+This project features a **Streamlit-based dashboard** 📊 that helps analyze security logs and data trends, and apply machine learning models.
+
+### ✨ Features
+
+✅ **Home** - Overview of the challenge 🏠
+✅ **Analytics** - Visualize & analyze security logs and data trends 📈
+✅ **Machine Learning** - Train & evaluate ML models 🤖
+
+---
+
+## 🚀 Installation Guide
+
+### 🔧 Local Setup
+
+Follow these steps to run the project locally:
+
+1. **Clone the repository:**
+   ```sh
+   git clone https://github.com/jdalfons/sise-ultimate-challenge.git
+   cd sise-ultimate-challenge
+   ```
+2. **Create and activate a virtual environment:**
+   ```sh
+   python3 -m venv venv
+   source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
+   ```
+3. **Install dependencies:**
+   ```sh
+   pip install -r requirements.txt
+   ```
+4. **Run the Streamlit application:**
+   ```sh
+   streamlit run app.py
+   ```
+
+### 🐳 Docker Setup
+
+1. **Build the Docker image:**
+   ```sh
+   docker build -t sise-ultimate-challenge .
+   ```
+2. **Run the container:**
+   ```sh
+   docker run -p 7860:7860 sise-ultimate-challenge
+   ```
+
+---
+
+## ⚙️ Technical Details
+
+- 🐍 **Python 3.12**
+- 🎨 **Streamlit**
+- 🎙️ **Wav2Vec2**
+
+---
+
+## 🤝 Contributors
+
+- [Cyril KOCAB](https://github.com/Cyr-CK) 👨💻
+- [Falonne KPAMEGAN](https://github.com/marinaKpamegan) 👩💻
+- [Juan ALFONSO](https://github.com/jdalfons) 🎤
+- [Nancy RANDRIAMIARIJAONA](https://github.com/yminanc) 🔍
+
+🔥 *Join us in making AI-powered emotion detection awesome!*
```
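The README describes a classifier that maps each clip to one of three emotions. As an illustration of the final step any such classifier performs, here is a minimal softmax-plus-argmax sketch; the label order, scores, and function names are hypothetical, not taken from the project's actual `predict_emotion` code:

```python
import math

# Hypothetical label order for the three classes named in the README.
LABELS = ["colère", "joie", "neutre"]

def softmax(scores):
    """Convert raw per-class scores into probabilities that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(scores):
    """Return (label, probability) for the highest-scoring class."""
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return LABELS[best], probs[best]

label, prob = predict_label([0.2, 2.5, 0.1])
print(label)  # joie
```

In practice the scores would come from the model's classification head; the subtraction of `max(scores)` is the usual numerical-stability trick for softmax.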
app.py
CHANGED

```diff
@@ -15,9 +15,11 @@ from predict import predict_emotion
 # pip install streamlit-audiorec
 from st_audiorec import st_audiorec
 
+AUDIO_WAV = 'audio/wav'
+MAX_FILE_SIZE_MB = 10
 # Page configuration
 st.set_page_config(
-    page_title="
+    page_title="Emotional Report Analyzer",
     page_icon="🎤",
     layout="wide"
 )
@@ -107,8 +109,9 @@ if st.session_state.needs_rerun:
     st.session_state.needs_rerun = False
     st.rerun()  # Using st.rerun() instead of experimental_rerun
 
+col_logo, col_name = st.columns([3, 1])
+col_logo.image("./img/logo_01.png", width=400)
+col_name.title("Emotional Report")
 
 # Create two columns for the main layout
 col1, col2 = st.columns([1, 1])
@@ -147,12 +150,16 @@ with col1:
     with tab2:
         uploaded_file = st.file_uploader("Upload an audio file (WAV format)", type=['wav'])
 
-        if uploaded_file is not None:
+        if uploaded_file is not None and uploaded_file.type == AUDIO_WAV and uploaded_file.size < MAX_FILE_SIZE_MB * 1_000_000:
+            try:
+                # Save the uploaded file to a temporary location
+                with tempfile.NamedTemporaryFile(delete=False, suffix='.wav') as tmp_file:
+                    tmp_file.write(uploaded_file.getbuffer())
+                    tmp_file_path = tmp_file.name
+            except Exception as e:
+                st.error(f"Error saving uploaded file: {str(e)}")
+                st.error("Try to record your voice directly; maybe your storage is locked.")
+
             st.audio(uploaded_file, format="audio/wav")
 
         # Process button
@@ -169,7 +176,53 @@ with col1:
             # Set flag for rerun instead of calling experimental_rerun
             st.success("Audio processed successfully!")
             st.session_state.needs_rerun = True
+    # Audio History and Analytics Section
+    st.header("Audio History and Analytics")
 
+    if len(st.session_state.audio_history_csv) > 0:
+        # Display a select box to choose from audio history
+        timestamps = st.session_state.audio_history_csv['timestamp'].tolist()
+        selected_timestamp = st.selectbox(
+            "Select audio from history:",
+            options=timestamps,
+            index=len(timestamps) - 1  # Default to most recent
+        )
+
+        # Update current index when selection changes
+        selected_index = st.session_state.audio_history_csv[
+            st.session_state.audio_history_csv['timestamp'] == selected_timestamp
+        ].index[0]
+
+        # Only update if different
+        if st.session_state.current_audio_index != selected_index:
+            st.session_state.current_audio_index = selected_index
+            st.session_state.needs_rerun = True
+
+        # Analytics button
+        if st.button("Run Analytics on Selected Audio"):
+            st.subheader("Analytics Results")
+
+            # Get the selected audio data
+            selected_data = st.session_state.audio_history_csv.iloc[selected_index]
+
+            # Display analytics (this is where you would add more sophisticated analytics)
+            st.write(f"Selected Audio: {selected_data['timestamp']}")
+            st.write(f"Emotion: {selected_data['emotion']}")
+            st.write(f"File Path: {selected_data['file_path']}")
+
+            # Add any additional analytics you want here
+
+            # Try to play the selected audio
+            try:
+                if os.path.exists(selected_data['file_path']):
+                    st.audio(selected_data['file_path'], format="audio/wav")
+                else:
+                    st.warning("Audio file not found - it may have been deleted or moved.")
+            except Exception as e:
+                st.error(f"Error playing audio: {str(e)}")
+    else:
+        st.info("No audio history available. Record or upload audio to create history.")
+
 with col2:
     st.header("Results")
@@ -196,53 +249,8 @@ with col2:
     else:
         st.info("Record or upload audio to see results")
 
-# Audio History and Analytics Section
-st.header("Audio History and Analytics")
-
-if len(st.session_state.audio_history_csv) > 0:
-    # Display a select box to choose from audio history
-    timestamps = st.session_state.audio_history_csv['timestamp'].tolist()
-    selected_timestamp = st.selectbox(
-        "Select audio from history:",
-        options=timestamps,
-        index=len(timestamps) - 1  # Default to most recent
-    )
-
-    # Update current index when selection changes
-    selected_index = st.session_state.audio_history_csv[
-        st.session_state.audio_history_csv['timestamp'] == selected_timestamp
-    ].index[0]
-
-    # Only update if different
-    if st.session_state.current_audio_index != selected_index:
-        st.session_state.current_audio_index = selected_index
-        st.session_state.needs_rerun = True
-
-    # Analytics button
-    if st.button("Run Analytics on Selected Audio"):
-        st.subheader("Analytics Results")
-
-        # Get the selected audio data
-        selected_data = st.session_state.audio_history_csv.iloc[selected_index]
-
-        # Display analytics (this is where you would add more sophisticated analytics)
-        st.write(f"Selected Audio: {selected_data['timestamp']}")
-        st.write(f"Emotion: {selected_data['emotion']}")
-        st.write(f"File Path: {selected_data['file_path']}")
-
-        # Add any additional analytics you want here
-
-        # Try to play the selected audio
-        try:
-            if os.path.exists(selected_data['file_path']):
-                st.audio(selected_data['file_path'], format="audio/wav")
-            else:
-                st.warning("Audio file not found - it may have been deleted or moved.")
-        except Exception as e:
-            st.error(f"Error playing audio: {str(e)}")
-else:
-    st.info("No audio history available. Record or upload audio to create history.")
 
 # Footer
 st.markdown("---")
-st.caption("
+st.caption("Emotional Report Analyzer - Processes audio in 10-second segments and predicts emotions")
```
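The footer caption says the app processes audio in 10-second segments. A minimal sketch of that kind of fixed-length chunking, assuming a flat sequence of PCM samples and a 16 kHz sample rate (function name and rate are illustrative assumptions, not taken from app.py):

```python
def split_into_segments(samples, sample_rate, segment_seconds=10):
    """Split raw audio samples into fixed-length segments; the last one may be shorter."""
    step = sample_rate * segment_seconds
    return [samples[i:i + step] for i in range(0, len(samples), step)]

# 25 seconds of silent fake audio at 16 kHz -> segments of 10 s, 10 s, 5 s
fake = [0.0] * (16_000 * 25)
segments = split_into_segments(fake, 16_000)
print([len(s) // 16_000 for s in segments])  # [10, 10, 5]
```

Each segment could then be fed to the emotion predictor independently; keeping the ragged final segment (rather than dropping it) preserves the end of short recordings.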