A newer version of the Gradio SDK is available:
6.1.0
MODAL INTEGRATION GUIDE
Step 1: Install Modal
pip install modal
Step 2: Set up Modal Account
- Go to https://modal.com
- Sign up (free tier available + your $250 hackathon credit)
- Get your token:
modal token new
Step 3: Deploy Modal Function
modal deploy modal_video_processing.py
This will give you a URL like:
https://your-username--aiquoteclipgenerator-process-video-endpoint.modal.run
Step 4: Add to Your Hugging Face Space
Add this environment variable:
MODAL_ENDPOINT_URL=your_modal_endpoint_url_here
Step 5: Update app.py
Replace the create_quote_video_tool function with this Modal-powered version:
@tool
def create_quote_video_tool(video_url: str, quote_text: str, output_path: str, audio_path: str = None) -> dict:
"""
Create a final quote video using Modal for fast processing.
"""
try:
import requests
import base64
modal_endpoint = os.getenv("MODAL_ENDPOINT_URL")
if not modal_endpoint:
# Fallback to local processing if Modal not configured
return create_quote_video_local(video_url, quote_text, output_path, audio_path)
print("π Processing on Modal (fast!)...")
# Upload audio to temporary storage if provided
audio_url = None
if audio_path and os.path.exists(audio_path):
# For now, we'll skip audio in Modal version
# In production, upload audio to S3/GCS and pass URL
pass
# Call Modal endpoint
response = requests.post(
modal_endpoint,
json={
"video_url": video_url,
"quote_text": quote_text,
"audio_url": audio_url
},
timeout=120
)
if response.status_code != 200:
raise Exception(f"Modal error: {response.text}")
result = response.json()
if not result.get("success"):
raise Exception(result.get("error", "Unknown error"))
# Decode video bytes
video_b64 = result["video"]
video_bytes = base64.b64decode(video_b64)
# Save to output path
with open(output_path, 'wb') as f:
f.write(video_bytes)
print(f"β
Modal processing complete! {result['size_mb']:.2f}MB")
return {
"success": True,
"output_path": output_path,
"message": f"Video created via Modal ({result['size_mb']:.2f}MB)"
}
except Exception as e:
print(f"Modal processing failed: {e}")
# Fallback to local processing
return create_quote_video_local(video_url, quote_text, output_path, audio_path)
def create_quote_video_local(video_url: str, quote_text: str, output_path: str, audio_path: str = None) -> dict:
"""
Fallback local processing (your current implementation)
"""
# Your existing create_quote_video_tool code here
pass
Benefits of Modal:
Speed Comparison:
- Before (HF Spaces): 119 seconds
- After (Modal): ~15-30 seconds (4-8x faster!)
Why Modal is Faster:
- β 4 CPUs instead of shared CPU on HF Spaces
- β 4GB RAM dedicated to your function
- β Optimized infrastructure for video processing
- β Fast I/O for downloading/uploading
Cost:
- Uses your $250 hackathon credit
- After that: ~$0.01-0.02 per video (very cheap!)
Testing Modal Function
# Test locally before deploying
python modal_video_processing.py
Monitoring
View logs and metrics at: https://modal.com/apps
Hackathon Impact:
β Much faster - Better UX β Uses sponsor credit - Shows engagement β Professional infrastructure - Impressive to judges β Scalable - Handles multiple users
This is a HUGE upgrade! π