Spaces: Running on Zero
# Flash Attention - CUDA 12, PyTorch 2.6, Python 3.10
flash-attn @ https://github.com/Dao-AILab/flash-attention/releases/download/v2.7.3/flash_attn-2.7.3+cu12torch2.6cxx11abiFALSE-cp310-cp310-linux_x86_64.whl

# Core ML/AI Libraries
torch==2.6.0
torchvision
accelerate>=0.24.0

# Transformers - pinned to a version compatible with both sets of models
transformers==4.57.1
tokenizers>=0.20.3
transformers-stream-generator

# Hugging Face
huggingface_hub
hf_xet
spaces>=0.20.0

# Vision & Image Processing
qwen-vl-utils

# Web Interface
gradio==5.9.1
pydantic==2.10.6
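The flash-attn entry above uses a PEP 508 direct URL reference (`name @ url`), which pins pip to a prebuilt wheel instead of compiling from source. As a minimal sketch of how such a line splits into a package name and a wheel URL (the `parse_direct_ref` helper is hypothetical, not part of pip's API):

```python
def parse_direct_ref(line: str) -> tuple[str, str]:
    """Split a PEP 508 direct reference line ("name @ url") into (name, url)."""
    # partition on the first "@"; neither the name nor a wheel URL contains one
    name, _, url = (part.strip() for part in line.partition("@"))
    return name, url

line = (
    "flash-attn @ https://github.com/Dao-AILab/flash-attention/releases/"
    "download/v2.7.3/flash_attn-2.7.3+cu12torch2.6cxx11abiFALSE-cp310-cp310-linux_x86_64.whl"
)
name, url = parse_direct_ref(line)
# name is "flash-attn"; url is the full wheel URL
```

Note that the wheel filename encodes its compatibility constraints (CUDA 12, torch 2.6, CPython 3.10, linux x86_64), so this pin only installs cleanly on a matching environment such as a ZeroGPU Space.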