---
title: FathomDeepResearch
emoji: 🧮
colorFrom: blue
colorTo: red
sdk: docker
app_port: 7860
pinned: false
license: apache-2.0
short_description: Advanced research AI with web search capabilities
---
# 🔬 FathomDeepResearch
Advanced AI research agent powered by Fathom-Search-4B and Fathom-Synthesizer-4B models. This app provides deep research capabilities with real-time web search and intelligent synthesis.
## 🌟 Features
- **🧠 Advanced Reasoning**: Powered by Fathom-R1-14B for sophisticated thinking
- **🔍 Real-time Web Search**: Integrated search across multiple sources
- **📊 Intelligent Synthesis**: Combines search results into coherent answers
- **🎨 Rich UI Components**: Streamlined chat interface with progress tracking
- **⚡ Fast Performance**: Optimized for Hugging Face Spaces
## 🛠️ How to Use
1. **Enter your research question** in the text box
2. **Click "Research"** to start the deep research process
3. **Watch progress** as the AI searches and synthesizes information
4. **Get comprehensive answers** with source citations
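The search-then-synthesize loop behind these steps can be sketched as follows. Note that `web_search` and `synthesize` are hypothetical stand-ins for the app's calls to the Fathom-Search-4B and Fathom-Synthesizer-4B models, not its actual API:

```python
# Minimal sketch of the research pipeline: search, then synthesize with citations.
# The two helper functions below are illustrative stubs, NOT the app's real API.

def web_search(query: str) -> list[dict]:
    """Stand-in for the search model: returns {url, snippet} results."""
    return [{"url": "https://example.com", "snippet": f"Result for: {query}"}]

def synthesize(question: str, results: list[dict]) -> str:
    """Stand-in for the synthesizer: combines snippets into a cited answer."""
    snippets = " ".join(r["snippet"] for r in results)
    citations = "\n".join(f"- {r['url']}" for r in results)
    return f"{snippets}\n\nSources:\n{citations}"

def research(question: str) -> str:
    results = web_search(question)        # step 1: search the web
    return synthesize(question, results)  # step 2: synthesize an answer

answer = research("What are the latest AI developments in 2024?")
print(answer)
```

In the real app, each stub would stream model output so the UI can show progress while the search and synthesis stages run.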
## 💡 Example Questions
- "What are the latest AI developments in 2024?"
- "DeepResearch on climate change solutions"
- "UPSC 2025 preparation strategy"
- "Comparative analysis of electric vehicle adoption"
## 🔧 Technical Details
### Models Used
- **Fathom-Search-4B**: For web search and retrieval
- **Fathom-Synthesizer-4B**: For answer synthesis
- **Fathom-R1-14B**: For reasoning and planning
### Architecture
- **Backend**: FastAPI with Gradio integration
- **Frontend**: React-based chat interface
- **Search**: Multi-source web search with Serper API
- **Deployment**: Docker containers optimized for HF Spaces
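As an illustration of the search layer, here is a standard-library sketch of a Serper API request; the exact request shape this app sends is an assumption based on Serper's public `google.serper.dev/search` endpoint, and `YOUR_API_KEY` is a placeholder:

```python
import json
import urllib.request

def build_serper_request(query: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a Serper web-search request."""
    return urllib.request.Request(
        "https://google.serper.dev/search",
        data=json.dumps({"q": query}).encode("utf-8"),
        headers={"X-API-KEY": api_key, "Content-Type": "application/json"},
        method="POST",
    )

req = build_serper_request("electric vehicle adoption", "YOUR_API_KEY")
# urllib.request.urlopen(req) would execute the search (requires a valid key)
```

Multi-source search then amounts to issuing several such requests and merging the ranked results before synthesis.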
## 📋 Requirements
- Python 3.10+
- Gradio 4.0+
- FastAPI
- Hugging Face Transformers 4.35+
## 🚀 Deployment
This app is deployed on Hugging Face Spaces using Docker. The setup includes:
- Automatic model downloading
- Environment configuration
- Error handling and fallbacks
- Multi-modal capabilities
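A Dockerfile for this kind of Space might look as follows; this is a sketch assuming a Python entry point named `app.py`, not the Space's actual Dockerfile:

```dockerfile
# Hypothetical minimal Dockerfile for a Hugging Face Space (sketch only)
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# HF Spaces routes traffic to the port declared as app_port (7860 here)
EXPOSE 7860
CMD ["python", "app.py"]
```

The `EXPOSE`d port must match the `app_port: 7860` declared in the README frontmatter above.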
## 📄 License
Apache 2.0 License - See LICENSE file for details
## 🤝 Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## 📞 Support
For issues or questions:
- Check the docs folder for detailed documentation
- Open an issue on the repository
- Contact the development team
## 🧩 Building the Docker image locally (private Hugging Face repo)
If the source repository is a private Hugging Face Space, provide an access token when building the image. The Dockerfile clones the repository during the build using the `HF_API_TOKEN` build-arg.
Examples (PowerShell):
Provide the token as a build-arg (less secure; it remains visible in the image history):
```powershell
docker build -t fathom-deploy --build-arg HF_API_TOKEN=hf_xxx .
```
Using BuildKit and a secret (recommended):
```powershell
$env:DOCKER_BUILDKIT=1; docker build --secret id=hf_token,src=$env:USERPROFILE\.hf_token -t fathom-deploy .
```
Place your token in a file (e.g. `%USERPROFILE%\.hf_token`) containing only the token string, then reference it with `--secret`. If you choose this approach, you will also need to adapt the Dockerfile to read the token from `/run/secrets/hf_token`.
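An adapted clone step could look like the following sketch; `<user>/<space>` is a placeholder for the actual repository path, and the token-in-URL authentication form is an assumption, not the project's verified setup:

```dockerfile
# syntax=docker/dockerfile:1
# Hypothetical adaptation (sketch): mount the BuildKit secret only for the
# clone step, so the token never lands in a layer or in the image history.
RUN --mount=type=secret,id=hf_token \
    git clone "https://user:$(cat /run/secrets/hf_token)@huggingface.co/spaces/<user>/<space>" /app
```

Because the secret is mounted rather than passed as a build-arg, it is not recorded in `docker history` output.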
Note: If the repository is public, you can omit the build-arg and the Dockerfile will clone anonymously.