thisisam committed on
Commit
4fe284b
·
0 Parent(s):

Initial commit: Fara-7B chat interface

Browse files
Files changed (7)
  1. .gitignore +15 -0
  2. DEPLOYMENT_GUIDE.md +146 -0
  3. LOCAL_TESTING.md +110 -0
  4. QUICKSTART.md +106 -0
  5. README.md +81 -0
  6. app.py +185 -0
  7. requirements.txt +2 -0
.gitignore ADDED
@@ -0,0 +1,15 @@
+ __pycache__/
+ *.pyc
+ *.pyo
+ *.pyd
+ .Python
+ *.so
+ *.dylib
+ *.egg
+ *.egg-info/
+ dist/
+ build/
+ .env
+ .venv
+ venv/
+ flagged/
DEPLOYMENT_GUIDE.md ADDED
@@ -0,0 +1,146 @@
+ # 🚀 Deployment Guide for Fara-7B Space
+
+ ## Quick Start
+
+ Your Hugging Face Space is ready to deploy! Follow these steps:
+
+ ### Step 1: Create a New Space on Hugging Face
+
+ 1. Go to [huggingface.co/new-space](https://huggingface.co/new-space)
+ 2. Fill in the details:
+    - **Space name**: `fara-7b-chat` (or any name you prefer)
+    - **License**: MIT
+    - **SDK**: Gradio
+    - **Space hardware**: CPU Basic (free) - this is fine since inference runs on the Inference API, not the Space itself
+    - **Visibility**: Public or Private (your choice)
+ 3. Click **Create Space**
+
+ ### Step 2: Upload Your Files
+
+ You have two options:
+
+ #### Option A: Git Upload (Recommended)
+
+ ```bash
+ # Navigate to your space folder
+ cd "c:/Users/Amir/OneDrive - Digital Health CRC Limited/Projects/url2md/fara-7b-space"
+
+ # Initialize git repository
+ git init
+ git add .
+ git commit -m "Initial commit: Fara-7B chat interface"
+
+ # Add your HuggingFace Space as remote
+ # Replace YOUR_USERNAME and YOUR_SPACE_NAME
+ git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
+
+ # If prompted for credentials, use your HF username and a write-access token as the password
+ git push -u origin main
+ ```
+
+ #### Option B: Web Upload
+
+ 1. In your newly created Space, click **Files** → **Add file** → **Upload files**
+ 2. Drag and drop these files:
+    - `app.py`
+    - `requirements.txt`
+    - `README.md`
+    - `.gitignore`
+ 3. Click **Commit changes to main**
+
+ ### Step 3: Add Your HuggingFace Token as a Secret
+
+ This is **CRITICAL** - the app won't work without this:
+
+ 1. In your Space, go to **Settings** (gear icon)
+ 2. Scroll to **Variables and secrets**
+ 3. Click **New secret**
+ 4. Enter:
+    - **Name**: `HF_TOKEN`
+    - **Value**: Your Hugging Face token
+    - ⚠️ Make sure this is marked as **Secret** (not a variable)
+ 5. Click **Save**
+ 6. The Space will automatically rebuild
+
+ ### Step 4: Wait for Build
+
+ - The Space will install dependencies and start
+ - This usually takes 1-3 minutes
+ - Watch the **Logs** tab to see progress
+ - Once you see "Running on local URL", it's ready!
+
+ ### Step 5: Test Your Space
+
+ 1. Go to the **App** tab
+ 2. Try a test message: "Help me find a good coffee shop"
+ 3. You should see Fara-7B respond!
+
+ ## Troubleshooting
+
+ ### Error: "HF_TOKEN not found"
+ - Make sure you added the token as a **Secret**, not a Variable
+ - Restart the Space after adding the secret
+
+ ### Error: "Model not found"
+ - Check whether `microsoft/Fara-7B` is publicly available
+ - Ensure your token has inference permissions
+
+ ### Error: "Rate limit exceeded"
+ - You're using the free inference tier
+ - Wait a few minutes and try again
+ - Consider upgrading to Inference Endpoints for production use
+
+ ### Space is slow
+ - The free CPU tier is sufficient since inference happens on HF servers
+ - Response time depends on model inference, not your Space hardware
+
+ ## Optional Enhancements
+
+ ### 1. Request GPU Hardware
+ If you want faster Space loading (not needed for inference):
+ - Settings → Hardware → Select a GPU tier
+ - Note: this costs money, but the Inference API is billed separately
+
+ ### 2. Add Custom Examples
+ Edit `app.py` and add example buttons:
+ ```python
+ gr.Examples(
+     examples=[
+         "Find Italian restaurants in Seattle",
+         "Help me search for running shoes",
+         "What's the process to book a hotel?"
+     ],
+     inputs=msg
+ )
+ ```
+
+ ### 3. Enable Analytics
+ - Settings → Enable visitor analytics
+ - Track usage of your Space
+
+ ## Cost Breakdown
+
+ - **Space hosting**: FREE (CPU Basic tier)
+ - **Inference API**:
+   - Free tier: limited requests per day
+   - Pro account: more requests
+   - Inference Endpoints: ~$0.60-1.00/hour for dedicated hardware
+
+ ## Next Steps
+
+ Once deployed, you can:
+ 1. Share your Space URL with others
+ 2. Embed it in websites
+ 3. Use the API endpoint programmatically
+ 4. Duplicate and customize for other models
+
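For item 3 above, programmatic access can be sketched as follows. This is a sketch under stated assumptions: the `space_direct_url` helper and the space id are illustrative (not part of this repo), the `*.hf.space` slug rule is the commonly observed convention rather than a guarantee, and the commented `gradio_client` call shape (including `api_name`) depends on how the app exposes its endpoints:

```python
# Sketch: reaching a deployed Space programmatically.
# The space id is a placeholder - substitute your own "username/space-name".
def space_direct_url(space_id: str) -> str:
    """Build the direct *.hf.space URL a Space is typically served at.

    Spaces are generally reachable at https://<owner>-<name>.hf.space,
    with both parts lowercased and underscores replaced by dashes.
    """
    owner, name = space_id.split("/")
    slug = name.lower().replace("_", "-")
    return f"https://{owner.lower()}-{slug}.hf.space"

# With gradio_client (pip install gradio_client), a request would look roughly like:
#   from gradio_client import Client
#   client = Client("YOUR_USERNAME/YOUR_SPACE_NAME")
#   client.predict("Find Italian restaurants in Seattle", api_name="...")
# (api_name depends on the app's registered endpoints; see the Space's "Use via API" page.)

print(space_direct_url("YOUR_USERNAME/fara-7b-chat"))
```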
+ ## Need Help?
+
+ Check the logs in your Space for detailed error messages, or refer to:
+ - [Gradio Documentation](https://gradio.app/docs)
+ - [HuggingFace Spaces](https://huggingface.co/docs/hub/spaces)
+ - [Inference API Docs](https://huggingface.co/docs/api-inference/index)
+
+ ---
+
+ **Your Space folder**: `c:/Users/Amir/OneDrive - Digital Health CRC Limited/Projects/url2md/fara-7b-space`
+
+ Happy deploying! 🚀
LOCAL_TESTING.md ADDED
@@ -0,0 +1,110 @@
+ # Local Testing Instructions
+
+ ## Test Your Space Locally Before Deploying
+
+ Before deploying to Hugging Face, you can test the app on your local machine.
+
+ ### Prerequisites
+
+ 1. Python 3.8 or higher installed
+ 2. Your Hugging Face token ready
+
+ ### Steps
+
+ #### 1. Install Dependencies
+
+ Open PowerShell/Terminal and navigate to this folder:
+
+ ```bash
+ cd "c:/Users/Amir/OneDrive - Digital Health CRC Limited/Projects/url2md/fara-7b-space"
+ ```
+
+ Install required packages:
+
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ #### 2. Set Your HuggingFace Token
+
+ Create a `.env` file in this folder (it's already in .gitignore, so it won't be committed):
+
+ ```powershell
+ # PowerShell: write .env without a BOM (a plain `echo ... > .env` emits UTF-16 in Windows PowerShell)
+ "HF_TOKEN=your_token_here" | Out-File -Encoding ascii .env
+ ```
+
+ Replace `your_token_here` with your actual Hugging Face token.
+
+ #### 3. Update app.py to Load .env (Temporary)
+
+ For local testing only, add these lines at the top of `app.py`:
+
+ ```python
+ from dotenv import load_dotenv
+ load_dotenv()  # Load .env file
+ ```
+
+ And install python-dotenv:
+ ```bash
+ pip install python-dotenv
+ ```
+
+ #### 4. Run the App Locally
+
+ ```bash
+ python app.py
+ ```
+
+ You should see output like:
+ ```
+ Running on local URL: http://127.0.0.1:7860
+ ```
+
+ Open that URL in your browser to test!
+
+ #### 5. Test the Chat
+
+ - Type a message
+ - Verify you get responses from Fara-7B
+ - Test different temperature and max_tokens settings
+ - Check that streaming works properly
+
+ ### Important Notes
+
+ ⚠️ **Before Deploying:**
+ - Remove the `load_dotenv()` code from `app.py` (Spaces use secrets, not .env)
+ - Don't commit your `.env` file (already in .gitignore)
+ - The Space will use the `HF_TOKEN` secret instead
+
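An alternative to adding and later removing `load_dotenv()` is a loader that behaves the same locally and on Spaces. A minimal sketch (the `get_hf_token` helper is ours, not part of `app.py`), treating `python-dotenv` as an optional, local-only dependency:

```python
import os
from typing import Optional

def get_hf_token() -> Optional[str]:
    """Return HF_TOKEN from the environment.

    Locally, a .env file is loaded when python-dotenv is installed;
    on Spaces the secret is already in the environment, so a missing
    dotenv package is simply ignored.
    """
    try:
        from dotenv import load_dotenv  # optional, for local runs only
        load_dotenv()
    except ImportError:
        pass  # no python-dotenv (e.g. on Spaces): rely on real env vars
    return os.getenv("HF_TOKEN")

print("token configured:", get_hf_token() is not None)
```

With this pattern nothing needs to be stripped out before deploying; the same file works in both environments.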
+ ### Troubleshooting Local Testing
+
+ **Import Error for dotenv:**
+ ```bash
+ pip install python-dotenv
+ ```
+
+ **Token Error:**
+ - Check your token is correct in `.env`
+ - Ensure no extra spaces or quotes
+ - Verify the token has inference permissions
+
+ **Port Already in Use:**
+ ```bash
+ # Kill the other process, or run on a different port via Gradio's env var
+ # (app.py doesn't parse command-line flags)
+ GRADIO_SERVER_PORT=7861 python app.py
+ ```
+
+ ### Alternative: Quick Test Without .env
+
+ You can also temporarily hardcode your token (FOR TESTING ONLY):
+
+ ```python
+ client = InferenceClient(token="your_token_here")  # TEMPORARY - REMOVE BEFORE DEPLOYING
+ ```
+
+ ⚠️ **NEVER commit hardcoded tokens to git!**
+
+ ---
+
+ Once local testing works, you're ready to deploy to Hugging Face Spaces! See `DEPLOYMENT_GUIDE.md` for deployment instructions.
QUICKSTART.md ADDED
@@ -0,0 +1,106 @@
+ # 🎉 Your Fara-7B Hugging Face Space is Ready!
+
+ ## 📁 What You Have
+
+ Your Space is in: `c:/Users/Amir/OneDrive - Digital Health CRC Limited/Projects/url2md/fara-7b-space`
+
+ ### Files Created:
+
+ 1. **app.py** - Main Gradio application with chat interface
+ 2. **requirements.txt** - Python dependencies
+ 3. **README.md** - Space documentation (shows on HuggingFace)
+ 4. **.gitignore** - Git ignore rules
+ 5. **DEPLOYMENT_GUIDE.md** - Step-by-step deployment instructions
+ 6. **LOCAL_TESTING.md** - How to test locally before deploying
+
+ ## 🚀 Quick Start - Deploy to HuggingFace
+
+ ### Option 1: Web Upload (Easiest)
+
+ 1. Go to https://huggingface.co/new-space
+ 2. Create a new Space (SDK: Gradio)
+ 3. Upload all files from the folder
+ 4. Add `HF_TOKEN` secret in Settings
+ 5. Done! 🎉
+
+ ### Option 2: Git Push
+
+ ```bash
+ cd "c:/Users/Amir/OneDrive - Digital Health CRC Limited/Projects/url2md/fara-7b-space"
+ git init
+ git add .
+ git commit -m "Initial commit"
+ git remote add origin https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
+ git push -u origin main
+ ```
+
+ Then add `HF_TOKEN` secret in Space Settings.
+
+ ## 🧪 Test Locally First (Optional)
+
+ ```bash
+ cd "c:/Users/Amir/OneDrive - Digital Health CRC Limited/Projects/url2md/fara-7b-space"
+ pip install -r requirements.txt
+ pip install python-dotenv
+ echo "HF_TOKEN=your_token_here" > .env
+ python app.py
+ ```
+
+ See `LOCAL_TESTING.md` for details.
+
+ ## ✨ Features
+
+ Your Space includes:
+ - 💬 **Chat Interface** - Clean, modern Gradio UI
+ - 🌊 **Streaming Responses** - Real-time text generation
+ - ⚙️ **Customization** - Temperature and max tokens controls
+ - 📱 **Responsive** - Works on mobile and desktop
+ - 🎨 **Themed** - Soft, professional color scheme
+ - 📚 **Documentation** - Built-in help and examples
+
+ ## 🔑 Critical: HuggingFace Token Setup
+
+ Your app REQUIRES a HuggingFace token with inference permissions:
+
+ 1. **Get/Check Token**: https://huggingface.co/settings/tokens
+ 2. **In Space Settings**: Add as a secret named `HF_TOKEN`
+ 3. **Restart Space** after adding the secret
+
+ ## 💰 Costs
+
+ - **Space Hosting**: FREE (CPU Basic)
+ - **Inference API**: Uses your HF free tier
+   - Free tier: ~100 requests/day
+   - For more: consider HF Pro or dedicated endpoints
+
+ ## 📖 Next Steps
+
+ 1. **Read DEPLOYMENT_GUIDE.md** for detailed deployment steps
+ 2. **Optionally test locally** using LOCAL_TESTING.md
+ 3. **Create your Space** on HuggingFace
+ 4. **Upload files** and add your token
+ 5. **Share your Space** with the world!
+
+ ## 🛠️ Customization Ideas
+
+ - Add example prompts with `gr.Examples()`
+ - Change theme colors
+ - Add file upload for images (when the model supports it)
+ - Integrate with Magentic-UI for full computer use
+ - Add conversation export/save functionality
+
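The export/save idea in the list above can be prototyped as a small pure function over the chatbot's `[user, assistant]` history pairs. The `history_to_markdown` name and the transcript format are our own choices, not something `app.py` already defines:

```python
def history_to_markdown(history):
    """Render Gradio tuple-style chat history as a Markdown transcript."""
    lines = []
    for user_msg, assistant_msg in history:
        lines.append(f"**User:** {user_msg}")
        if assistant_msg:  # the assistant turn may still be pending
            lines.append(f"**Fara-7B:** {assistant_msg}")
        lines.append("")  # blank line between turns
    return "\n".join(lines).rstrip() + "\n"

print(history_to_markdown([["Hi", "Hello! How can I help?"]]))
```

In recent Gradio releases the result could be offered to the user via a download component, though the exact wiring is left open here.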
+ ## 📞 Support Resources
+
+ - **This Space**: See DEPLOYMENT_GUIDE.md and LOCAL_TESTING.md
+ - **Gradio Docs**: https://gradio.app/docs
+ - **HuggingFace Spaces**: https://huggingface.co/docs/hub/spaces
+ - **Fara-7B Model**: https://huggingface.co/microsoft/Fara-7B
+
+ ---
+
+ **Created**: 2025-12-01
+ **Model**: Microsoft Fara-7B (7B parameters)
+ **Framework**: Gradio 4.x
+ **Inference**: HuggingFace Inference API
+
+ Enjoy your Space! 🚀🤖
README.md ADDED
@@ -0,0 +1,81 @@
+ ---
+ title: Fara-7B Computer Use Agent
+ emoji: 🤖
+ colorFrom: purple
+ colorTo: blue
+ sdk: gradio
+ sdk_version: 4.44.0
+ app_file: app.py
+ pinned: false
+ license: mit
+ ---
+
+ # Fara-7B: Computer Use Agent Chat Interface
+
+ This Space provides a chat interface to interact with **Microsoft Fara-7B**, an efficient agentic model designed for computer use and web automation.
+
+ ## 🌟 Features
+
+ - **Interactive Chat**: Converse with Fara-7B about web automation tasks
+ - **Streaming Responses**: Real-time response generation
+ - **Customizable Parameters**: Adjust temperature and max tokens
+ - **Clean UI**: Modern, user-friendly interface built with Gradio
+
+ ## 🚀 About Fara-7B
+
+ Fara-7B is Microsoft's first agentic small language model (SLM) designed specifically for computer use. With only 7 billion parameters, it achieves state-of-the-art performance for:
+
+ - 🛒 **Shopping automation**
+ - ✈️ **Travel booking research**
+ - 🍽️ **Restaurant reservations**
+ - 📧 **Account workflows**
+ - 🔍 **Information seeking**
+
+ ## ⚙️ Setup Instructions
+
+ ### For Space Owners:
+
+ 1. **Fork/Duplicate this Space** to your account
+ 2. Go to **Settings** → **Variables and secrets**
+ 3. Add a new secret:
+    - **Name**: `HF_TOKEN`
+    - **Value**: Your Hugging Face token (get it from [huggingface.co/settings/tokens](https://huggingface.co/settings/tokens))
+ 4. Ensure your token has **inference** permissions
+ 5. Restart the Space
+
+ ### Getting a Hugging Face Token:
+
+ 1. Go to [huggingface.co/settings/tokens](https://huggingface.co/settings/tokens)
+ 2. Click **New token**
+ 3. Select **Read** access (sufficient for inference)
+ 4. Copy the token and add it to Space secrets
+
+ ## 🎯 Usage
+
+ Simply type your request in the chat box! Examples:
+
+ - "Help me find Italian restaurants in Seattle"
+ - "What steps would I take to book a flight to London?"
+ - "How can I search for running shoes on an e-commerce site?"
+
+ ⚠️ **Note**: This is a text-only chat interface. For full computer use capabilities with screenshots and browser automation, check out the [Magentic-UI framework](https://github.com/microsoft/magentic-ui).
+
+ ## 📚 Resources
+
+ - [Model Card](https://huggingface.co/microsoft/Fara-7B)
+ - [Microsoft Research](https://www.microsoft.com/en-us/research/)
+ - [Magentic-UI](https://github.com/microsoft/magentic-ui) - Full computer use framework
+
+ ## 📝 License
+
+ MIT License - See [microsoft/Fara-7B](https://huggingface.co/microsoft/Fara-7B) for model license details.
+
+ ## 🤝 Credits
+
+ - **Model**: Microsoft Research
+ - **Interface**: Built with Gradio
+ - **Inference**: Hugging Face Inference API
+
+ ---
+
+ *This Space demonstrates the capabilities of Fara-7B through a simple chat interface. For production use cases requiring actual computer control, integrate with the full Magentic-UI framework.*
app.py ADDED
@@ -0,0 +1,185 @@
+ import gradio as gr
+ from huggingface_hub import InferenceClient
+ import os
+
+ # Initialize the Inference Client
+ # The HF_TOKEN will be set as a Space secret
+ client = InferenceClient(token=os.getenv("HF_TOKEN"))
+
+ def chat_with_fara(message, history, temperature, max_tokens):
+     """
+     Interact with the Fara-7B model via the Hugging Face Inference API.
+
+     Args:
+         message: User's input message
+         history: Chat history as a list of [user, assistant] pairs
+         temperature: Sampling temperature
+         max_tokens: Maximum tokens to generate
+     """
+     try:
+         # Convert Gradio history format to messages format
+         messages = []
+
+         # Add system prompt as per Fara-7B documentation
+         system_prompt = """You are a web automation agent that performs actions on websites to fulfill user requests by calling various tools.
+
+ You should stop execution at Critical Points. A Critical Point occurs in tasks like:
+ - Checkout
+ - Book
+ - Purchase
+ - Call
+ - Email
+ - Order
+
+ A Critical Point requires the user's permission or personal/sensitive information (name, email, credit card, address, payment information, resume, etc.) to complete a transaction (purchase, reservation, sign-up, etc.), or to communicate as a human would (call, email, apply to a job, etc.).
+
+ Guideline: Solve the task as far as possible up until a Critical Point.
+
+ Examples:
+ - If the task is to "call a restaurant to make a reservation," do not actually make the call. Instead, navigate to the restaurant's page and find the phone number.
+ - If the task is to "order new size 12 running shoes," do not place the order. Instead, search for the right shoes that meet the criteria and add them to the cart.
+
+ Some tasks, like answering questions, may not encounter a Critical Point at all."""
+
+         messages.append({"role": "system", "content": system_prompt})
+
+         # Add conversation history
+         for user_msg, assistant_msg in history:
+             messages.append({"role": "user", "content": user_msg})
+             if assistant_msg:
+                 messages.append({"role": "assistant", "content": assistant_msg})
+
+         # Add current message
+         messages.append({"role": "user", "content": message})
+
+         # Call the inference API with streaming; yield the full updated
+         # history each time so the Chatbot component renders correctly
+         response_text = ""
+         for chunk in client.chat_completion(
+             model="microsoft/Fara-7B",
+             messages=messages,
+             temperature=temperature,
+             max_tokens=max_tokens,
+             stream=True
+         ):
+             if chunk.choices[0].delta.content:
+                 response_text += chunk.choices[0].delta.content
+                 yield history + [[message, response_text]]
+
+     except Exception as e:
+         error_msg = f"❌ Error: {str(e)}\n\nPlease ensure:\n1. Your HF_TOKEN is set in Space secrets\n2. The token has inference permissions\n3. You have access to microsoft/Fara-7B"
+         yield history + [[message, error_msg]]
+
+ # Custom CSS for better UI
+ custom_css = """
+ .gradio-container {
+     font-family: 'Inter', sans-serif;
+ }
+ .chat-message {
+     padding: 10px;
+     border-radius: 8px;
+ }
+ footer {
+     visibility: hidden;
+ }
+ """
+
+ # Create the Gradio interface
+ with gr.Blocks(css=custom_css, theme=gr.themes.Soft()) as demo:
+     gr.Markdown(
+         """
+         # 🤖 Fara-7B: Computer Use Agent
+
+         **Microsoft's first agentic small language model**, designed for web automation and computer use tasks.
+
+         This 7B-parameter multimodal model can help with:
+         - 🛒 Shopping automation
+         - ✈️ Travel booking research
+         - 🍽️ Restaurant reservations
+         - 📧 Account workflows
+         - 🔍 Information seeking
+
+         ⚠️ **Note**: This chat interface provides text-only interaction. For full computer use capabilities
+         (screenshots, browser automation), integrate with the Magentic-UI framework.
+         """
+     )
+
+     chatbot = gr.Chatbot(
+         height=500,
+         label="Chat with Fara-7B",
+         show_label=True,
+         bubble_full_width=False
+     )
+
+     with gr.Row():
+         msg = gr.Textbox(
+             label="Your Message",
+             placeholder="Example: Help me find a good Italian restaurant in Seattle...",
+             lines=2,
+             scale=4
+         )
+         send_btn = gr.Button("Send 📤", scale=1, variant="primary")
+
+     with gr.Accordion("⚙️ Advanced Settings", open=False):
+         temperature = gr.Slider(
+             minimum=0.1,
+             maximum=1.0,
+             value=0.7,
+             step=0.1,
+             label="Temperature",
+             info="Higher values make output more creative, lower values more focused"
+         )
+         max_tokens = gr.Slider(
+             minimum=64,
+             maximum=2048,
+             value=512,
+             step=64,
+             label="Max Tokens",
+             info="Maximum length of the response"
+         )
+
+     with gr.Row():
+         clear_btn = gr.Button("🗑️ Clear Chat", scale=1)
+
+     gr.Markdown(
+         """
+         ---
+         ### 📚 Learn More
+         - [Model Card](https://huggingface.co/microsoft/Fara-7B)
+         - [Research Paper](https://huggingface.co/microsoft/Fara-7B)
+         - [Magentic-UI Framework](https://github.com/microsoft/magentic-ui) for full computer use capabilities
+
+         ### ⚙️ Setup Instructions
+         1. Go to Space Settings → Secrets
+         2. Add `HF_TOKEN` with your Hugging Face token
+         3. Ensure your token has inference permissions
+         """
+     )
+
+     # Event handlers
+     msg.submit(
+         chat_with_fara,
+         inputs=[msg, chatbot, temperature, max_tokens],
+         outputs=[chatbot]
+     ).then(
+         lambda: gr.update(value=""),
+         outputs=[msg]
+     )
+
+     send_btn.click(
+         chat_with_fara,
+         inputs=[msg, chatbot, temperature, max_tokens],
+         outputs=[chatbot]
+     ).then(
+         lambda: gr.update(value=""),
+         outputs=[msg]
+     )
+
+     clear_btn.click(
+         lambda: ([], ""),
+         outputs=[chatbot, msg]
+     )
+
+ # Launch the app
+ if __name__ == "__main__":
+     demo.launch()
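The history-to-messages conversion inside `chat_with_fara` can be checked in isolation. This standalone mirror of that logic (the `build_messages` name is ours, not defined in `app.py`) shows the payload shape handed to `chat_completion`:

```python
def build_messages(system_prompt, history, message):
    """Mirror of app.py's conversion from Gradio [user, assistant]
    history pairs into OpenAI-style chat messages."""
    messages = [{"role": "system", "content": system_prompt}]
    for user_msg, assistant_msg in history:
        messages.append({"role": "user", "content": user_msg})
        if assistant_msg:  # skip assistant turns that never completed
            messages.append({"role": "assistant", "content": assistant_msg})
    messages.append({"role": "user", "content": message})
    return messages

msgs = build_messages("You are a web automation agent.",
                      [["hi", "hello!"]], "find running shoes")
print([m["role"] for m in msgs])  # ['system', 'user', 'assistant', 'user']
```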
requirements.txt ADDED
@@ -0,0 +1,2 @@
+ gradio>=4.0.0
+ huggingface-hub>=0.20.0