MODAL INTEGRATION GUIDE

Step 1: Install Modal

pip install modal

Step 2: Set up Modal Account

  1. Go to https://modal.com
  2. Sign up (free tier available + your $250 hackathon credit)
  3. Get your token:
    modal token new
    

Step 3: Deploy Modal Function

modal deploy modal_video_processing.py

This will give you a URL like:

https://your-username--aiquoteclipgenerator-process-video-endpoint.modal.run
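
For reference, here is a minimal sketch of the shape modal_video_processing.py needs so the client code in Step 5 works: a POST endpoint that accepts video_url, quote_text, and audio_url, and returns success, a base64-encoded video, and size_mb. The app name, resource sizes, and the ffmpeg drawtext overlay below are illustrative assumptions; swap in your actual rendering logic.

import modal

app = modal.App("aiquoteclipgenerator")

# ffmpeg for rendering, requests for downloading the source clip,
# fastapi for the web endpoint
image = (
    modal.Image.debian_slim()
    .apt_install("ffmpeg")
    .pip_install("fastapi[standard]", "requests")
)


@app.function(image=image, cpu=4, memory=4096, timeout=300)
@modal.web_endpoint(method="POST")  # on newer Modal releases: @modal.fastapi_endpoint
def process_video_endpoint(payload: dict) -> dict:
    """Download the clip, burn in the quote, return the result as base64."""
    import base64
    import subprocess
    import tempfile

    import requests

    video_url = payload["video_url"]
    quote_text = payload["quote_text"]

    try:
        with tempfile.TemporaryDirectory() as tmp:
            src, out = f"{tmp}/source.mp4", f"{tmp}/output.mp4"

            # Fetch the source video
            with open(src, "wb") as f:
                f.write(requests.get(video_url, timeout=60).content)

            # Overlay the quote with ffmpeg's drawtext filter
            # (real code should escape quotes/colons in quote_text)
            drawtext = (
                f"drawtext=text='{quote_text}':fontcolor=white:fontsize=48:"
                "x=(w-text_w)/2:y=h-th-80:box=1:boxcolor=black@0.5"
            )
            subprocess.run(
                ["ffmpeg", "-y", "-i", src, "-vf", drawtext, "-c:a", "copy", out],
                check=True, capture_output=True,
            )

            with open(out, "rb") as f:
                video_bytes = f.read()

        return {
            "success": True,
            "video": base64.b64encode(video_bytes).decode("utf-8"),
            "size_mb": len(video_bytes) / (1024 * 1024),
        }
    except Exception as e:
        return {"success": False, "error": str(e)}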

Step 4: Add to Your Hugging Face Space

Add this environment variable:

MODAL_ENDPOINT_URL=your_modal_endpoint_url_here
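
On Hugging Face, this goes under your Space's Settings → Variables and secrets. A quick, optional check that the Space can actually see the value (it mirrors the fallback logic used in Step 5):

import os

# If this prints False, the tool below will silently fall back to slow local processing
print("Modal configured:", bool(os.getenv("MODAL_ENDPOINT_URL")))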

Step 5: Update app.py

Replace the create_quote_video_tool function with this Modal-powered version:

@tool
def create_quote_video_tool(video_url: str, quote_text: str, output_path: str, audio_path: str = None) -> dict:
    """
    Create a final quote video using Modal for fast processing.
    """
    
    try:
        import os
        import base64
        import requests
        
        modal_endpoint = os.getenv("MODAL_ENDPOINT_URL")
        
        if not modal_endpoint:
            # Fallback to local processing if Modal not configured
            return create_quote_video_local(video_url, quote_text, output_path, audio_path)
        
        print("🚀 Processing on Modal (fast!)...")
        
        # Upload audio to temporary storage if provided
        audio_url = None
        if audio_path and os.path.exists(audio_path):
            # For now, we skip audio in the Modal version.
            # In production, upload the audio to S3/GCS and pass its URL.
            pass
        
        # Call Modal endpoint
        response = requests.post(
            modal_endpoint,
            json={
                "video_url": video_url,
                "quote_text": quote_text,
                "audio_url": audio_url
            },
            timeout=120
        )
        
        if response.status_code != 200:
            raise Exception(f"Modal error: {response.text}")
        
        result = response.json()
        
        if not result.get("success"):
            raise Exception(result.get("error", "Unknown error"))
        
        # Decode video bytes
        video_b64 = result["video"]
        video_bytes = base64.b64decode(video_b64)
        
        # Save to output path
        with open(output_path, 'wb') as f:
            f.write(video_bytes)
        
        print(f"✅ Modal processing complete! {result['size_mb']:.2f}MB")
        
        return {
            "success": True,
            "output_path": output_path,
            "message": f"Video created via Modal ({result['size_mb']:.2f}MB)"
        }
    
    except Exception as e:
        print(f"Modal processing failed: {e}")
        # Fallback to local processing
        return create_quote_video_local(video_url, quote_text, output_path, audio_path)


def create_quote_video_local(video_url: str, quote_text: str, output_path: str, audio_path: str = None) -> dict:
    """
    Fallback local processing (your current implementation)
    """
    # Your existing create_quote_video_tool code here
    pass

Benefits of Modal:

Speed Comparison:

  • Before (HF Spaces): 119 seconds
  • After (Modal): ~15-30 seconds (4-8x faster!)

Why Modal is Faster:

  1. ✅ 4 CPUs instead of shared CPU on HF Spaces
  2. ✅ 4GB RAM dedicated to your function
  3. ✅ Optimized infrastructure for video processing
  4. ✅ Fast I/O for downloading/uploading

Cost:

  • Uses your $250 hackathon credit
  • After that: ~$0.01-0.02 per video (very cheap!)

Testing Modal Function

# Test locally before deploying
python modal_video_processing.py
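
Once deployed, you can also hit the live endpoint directly from a Python shell to confirm the response shape matches what app.py expects. The endpoint URL and test clip below are placeholders:

import requests

MODAL_ENDPOINT_URL = "https://your-username--aiquoteclipgenerator-process-video-endpoint.modal.run"  # placeholder

# Same payload app.py sends; a short public test clip keeps the round trip fast
response = requests.post(
    MODAL_ENDPOINT_URL,
    json={
        "video_url": "https://example.com/test_clip.mp4",  # hypothetical test clip
        "quote_text": "Stay hungry, stay foolish.",
        "audio_url": None,
    },
    timeout=120,
)
response.raise_for_status()
result = response.json()
print(result["success"], f"{result.get('size_mb', 0):.2f} MB")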

Monitoring

View logs and metrics at: https://modal.com/apps

Hackathon Impact:

  • ✅ Much faster - better UX
  • ✅ Uses sponsor credit - shows engagement
  • ✅ Professional infrastructure - impressive to judges
  • ✅ Scalable - handles multiple users

This is a HUGE upgrade! 🚀