---
title: Embedding Inference API
emoji: 🤖
colorFrom: blue
colorTo: purple
sdk: docker
app_port: 7860
pinned: false
---

# Embedding Inference API

A FastAPI-based inference service for generating embeddings using JobBERT v2/v3, Jina AI, and Voyage AI.

## Features

- **Multiple Models**: JobBERT v2/v3 (job-specific), Jina AI v3 (general-purpose), Voyage AI (state-of-the-art)
- **RESTful API**: Easy-to-use HTTP endpoints
- **Batch Processing**: Process multiple texts in a single request
- **Task-Specific Embeddings**: Support for different embedding tasks (retrieval, classification, etc.)
- **Docker Ready**: Easy deployment to Hugging Face Spaces or any Docker environment

## Supported Models

| Model | Dimension | Max Tokens | Best For |
|-------|-----------|------------|----------|
| JobBERT v2 | 768 | 512 | Job titles and descriptions |
| JobBERT v3 | 768 | 512 | Job titles (improved performance) |
| Jina AI v3 | 1024 | 8,192 | General text, long documents |
| Voyage AI | 1024 | 32,000 | High-quality embeddings (requires API key) |

## Quick Start

### Local Development

1. **Install dependencies:**
   ```bash
   cd embedding
   pip install -r requirements.txt
   ```

2. **Run the API:**
   ```bash
   python api.py
   ```

3. **Access the API:**
   - API: http://localhost:7860
   - Docs: http://localhost:7860/docs

### Docker Deployment

1. **Build the image:**
   ```bash
   docker build -t embedding-api .
   ```

2. **Run the container:**
   ```bash
   docker run -p 7860:7860 embedding-api
   ```

3. **With Voyage AI (optional):**
   ```bash
   docker run -p 7860:7860 -e VOYAGE_API_KEY=your_key_here embedding-api
   ```

## Hugging Face Spaces Deployment

### Option 1: Using Hugging Face CLI

1. **Install Hugging Face CLI:**
   ```bash
   pip install huggingface_hub
   huggingface-cli login
   ```

2. **Create a new Space:**
   - Go to https://huggingface.co/spaces
   - Click "Create new Space"
   - Choose "Docker" as the Space SDK
   - Name your space (e.g., `your-username/embedding-api`)

3. **Clone and push:**
   ```bash
   git clone https://huggingface.co/spaces/your-username/embedding-api
   cd embedding-api
   
   # Copy files from embedding folder
   cp /path/to/embedding/Dockerfile .
   cp /path/to/embedding/api.py .
   cp /path/to/embedding/requirements.txt .
   cp /path/to/embedding/README.md .
   
   git add .
   git commit -m "Initial commit"
   git push
   ```

4. **Configure environment (optional):**
   - Go to your Space settings
   - Add `VOYAGE_API_KEY` secret if using Voyage AI

### Option 2: Manual Upload

1. Create a new Docker Space on Hugging Face
2. Upload these files:
   - `Dockerfile`
   - `api.py`
   - `requirements.txt`
   - `README.md`
3. Add environment variables in Settings if needed

## API Usage

### Health Check

```bash
curl http://localhost:7860/health
```

Response:
```json
{
  "status": "healthy",
  "models_loaded": ["jobbertv2", "jobbertv3", "jina"],
  "voyage_available": false,
  "api_key_required": false
}
```

### Generate Embeddings (Elasticsearch Compatible)

The main `/embed` endpoint follows the Elasticsearch inference API format, with model selection via a query parameter.

#### Single Text (JobBERT v3 - default)

Without API key:
```bash
curl -X POST "http://localhost:7860/embed" \
  -H "Content-Type: application/json" \
  -d '{
    "input": "Software Engineer"
  }'
```

With API key:
```bash
curl -X POST "http://localhost:7860/embed" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "input": "Software Engineer"
  }'
```

Response:
```json
{
  "embedding": [0.123, -0.456, 0.789, ...]
}
```

#### Single Text with Model Selection

```bash
# JobBERT v2
curl -X POST "http://localhost:7860/embed?model=jobbertv2" \
  -H "Content-Type: application/json" \
  -d '{"input": "Data Scientist"}'

# JobBERT v3 (recommended)
curl -X POST "http://localhost:7860/embed?model=jobbertv3" \
  -H "Content-Type: application/json" \
  -d '{"input": "Product Manager"}'

# Jina AI
curl -X POST "http://localhost:7860/embed?model=jina" \
  -H "Content-Type: application/json" \
  -d '{"input": "Machine Learning Engineer"}'
```

#### Multiple Texts (Batch)

```bash
curl -X POST "http://localhost:7860/embed?model=jobbertv3" \
  -H "Content-Type: application/json" \
  -d '{
    "input": ["Software Engineer", "Data Scientist", "Product Manager"]
  }'
```

Response:
```json
{
  "embeddings": [
    [0.123, -0.456, ...],
    [0.234, -0.567, ...],
    [0.345, -0.678, ...]
  ]
}
```

#### Jina AI with Task Type

```bash
curl -X POST "http://localhost:7860/embed?model=jina&task=retrieval.query" \
  -H "Content-Type: application/json" \
  -d '{"input": "What is machine learning?"}'
```

**Jina AI Tasks (query parameter):**
- `retrieval.query`: For search queries
- `retrieval.passage`: For documents
- `text-matching`: For similarity (default)

#### Voyage AI (requires API key)

```bash
curl -X POST "http://localhost:7860/embed?model=voyage&input_type=document" \
  -H "Content-Type: application/json" \
  -d '{"input": "This is a document to embed"}'
```

**Voyage AI Input Types (query parameter):**
- `document`: For documents/passages
- `query`: For search queries

### Batch Endpoint (Original Format)

For backward compatibility, the original batch endpoint is still available at `/embed/batch`:

```bash
curl -X POST http://localhost:7860/embed/batch \
  -H "Content-Type: application/json" \
  -d '{
    "texts": ["Software Engineer", "Data Scientist"],
    "model": "jobbertv3"
  }'
```

Response includes metadata:
```json
{
  "embeddings": [[0.123, ...], [0.234, ...]],
  "model": "jobbertv3",
  "dimension": 768,
  "num_texts": 2
}
```

### List Available Models

```bash
curl http://localhost:7860/models
```
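
The same check from Python, for example before deciding which model to request (a minimal sketch; assumes the service is reachable at `http://localhost:7860`, and the exact response fields depend on the running version of the API):

```python
import requests

# Ask the service which models it has loaded and print the raw response.
response = requests.get("http://localhost:7860/models")
response.raise_for_status()
print(response.json())
```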

## Python Client Examples

### Elasticsearch-Compatible Format (Recommended)

```python
import requests

BASE_URL = "http://localhost:7860"
API_KEY = "your-api-key-here"  # Optional, only if API key is required

# Headers (include API key if required)
headers = {}
if API_KEY:
    headers["Authorization"] = f"Bearer {API_KEY}"

# Single embedding (JobBERT v3 - default)
response = requests.post(
    f"{BASE_URL}/embed",
    headers=headers,
    json={"input": "Software Engineer"}
)
result = response.json()
embedding = result["embedding"]  # Single vector
print(f"Embedding dimension: {len(embedding)}")

# Single embedding with model selection
response = requests.post(
    f"{BASE_URL}/embed?model=jina",
    headers=headers,
    json={"input": "Data Scientist"}
)
embedding = response.json()["embedding"]

# Batch embeddings
response = requests.post(
    f"{BASE_URL}/embed?model=jobbertv3",
    headers=headers,
    json={"input": ["Software Engineer", "Data Scientist", "Product Manager"]}
)
result = response.json()
embeddings = result["embeddings"]  # List of vectors
print(f"Generated {len(embeddings)} embeddings")

# Jina AI with task
response = requests.post(
    f"{BASE_URL}/embed?model=jina&task=retrieval.query",
    headers=headers,
    json={"input": "What is Python?"}
)

# Voyage AI with input type
response = requests.post(
    f"{BASE_URL}/embed?model=voyage&input_type=document",
    headers=headers,
    json={"input": "Document text here"}
)
```

### Python Client Class with API Key Support

```python
import requests
from typing import List, Union, Optional

class EmbeddingClient:
    def __init__(self, base_url: str, api_key: Optional[str] = None, model: str = "jobbertv3"):
        self.base_url = base_url
        self.api_key = api_key
        self.model = model
        self.headers = {}
        if api_key:
            self.headers["Authorization"] = f"Bearer {api_key}"
    
    def embed(self, text: Union[str, List[str]]) -> Union[List[float], List[List[float]]]:
        """Get embeddings for single text or batch"""
        response = requests.post(
            f"{self.base_url}/embed?model={self.model}",
            headers=self.headers,
            json={"input": text}
        )
        response.raise_for_status()
        result = response.json()
        
        if isinstance(text, str):
            return result["embedding"]
        else:
            return result["embeddings"]

# Usage
client = EmbeddingClient(
    base_url="https://YOUR-SPACE.hf.space",
    api_key="your-api-key-here",  # Optional
    model="jobbertv3"
)

# Single embedding
embedding = client.embed("Software Engineer")
print(f"Dimension: {len(embedding)}")

# Batch embeddings
embeddings = client.embed(["Software Engineer", "Data Scientist"])
print(f"Generated {len(embeddings)} embeddings")
```
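
A common next step is comparing embeddings, for example ranking job titles against a query by cosine similarity. A short sketch reusing the `client` from the example above (the similarity helper is illustrative and not part of the API; assumes `numpy` is installed):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query_vec = client.embed("Machine Learning Engineer")
titles = ["Data Scientist", "Truck Driver", "AI Researcher"]
title_vecs = client.embed(titles)

# Rank candidate titles by similarity to the query embedding
ranked = sorted(
    zip(titles, title_vecs),
    key=lambda pair: cosine_similarity(query_vec, pair[1]),
    reverse=True,
)
for title, vec in ranked:
    print(f"{title}: {cosine_similarity(query_vec, vec):.3f}")
```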

### Batch Format (Original)

```python
import requests

url = "http://localhost:7860/embed/batch"

response = requests.post(url, json={
    "texts": ["Software Engineer", "Data Scientist"],
    "model": "jobbertv3"
})
result = response.json()
embeddings = result["embeddings"]
print(f"Model: {result['model']}, Dimension: {result['dimension']}")
```

## Environment Variables

- `PORT`: Server port (default: 7860)
- `API_KEY`: Your API key for authentication (optional, but recommended for production)
- `REQUIRE_API_KEY`: Set to `true` to enable API key authentication (default: `false`)
- `VOYAGE_API_KEY`: Voyage AI API key (optional, required for Voyage embeddings)

### Setting Up API Key Authentication

#### Local Development

```bash
# Set environment variables
export API_KEY="your-secret-key-here"
export REQUIRE_API_KEY="true"

# Run the API
python api.py
```
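
With authentication enabled, a quick way to verify it is enforced is to call `/embed` with and without the key (a minimal sketch; the exact rejection status code depends on the implementation, commonly 401 or 403):

```python
import requests

BASE_URL = "http://localhost:7860"
API_KEY = "your-secret-key-here"

# Without the key: expected to be rejected when REQUIRE_API_KEY=true
r = requests.post(f"{BASE_URL}/embed", json={"input": "Software Engineer"})
print("Without key:", r.status_code)

# With the key: expected to return an embedding
r = requests.post(
    f"{BASE_URL}/embed",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"input": "Software Engineer"},
)
print("With key:", r.status_code)
```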

#### Hugging Face Spaces

1. Go to your Space settings
2. Click on "Variables and secrets"
3. Add secrets:
   - Name: `API_KEY`, Value: `your-secret-key-here`
   - Name: `REQUIRE_API_KEY`, Value: `true`
4. Restart your Space

#### Docker

```bash
docker run -p 7860:7860 \
  -e API_KEY="your-secret-key-here" \
  -e REQUIRE_API_KEY="true" \
  embedding-api
```

## Interactive Documentation

Once the API is running, visit:
- **Swagger UI**: http://localhost:7860/docs
- **ReDoc**: http://localhost:7860/redoc

## Notes

- Models are downloaded automatically on first startup (~2-3GB total)
- Voyage AI requires an API key from https://www.voyageai.com/
- First request to each model may be slower due to model loading
- Use batch processing for better performance (send multiple texts at once)

## Troubleshooting

### Models not loading
- Check available disk space (need ~3GB)
- Ensure internet connection for model download
- Check logs for specific error messages

### Voyage AI not working
- Verify `VOYAGE_API_KEY` is set correctly
- Check API key has sufficient credits
- Ensure the `voyageai` package is installed (`pip install voyageai`)

### Out of memory
- Reduce batch size (process fewer texts per request; see the sketch after this list)
- Use smaller models (JobBERT v2 instead of Jina)
- Increase container memory limits
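
A simple client-side way to cap batch size is to split a large list of texts into smaller requests; a minimal sketch against the `/embed` endpoint (the chunk size of 32 is arbitrary):

```python
import requests

BASE_URL = "http://localhost:7860"

def embed_in_chunks(texts, chunk_size=32, model="jobbertv3"):
    """Embed a large list of texts in smaller batches to limit memory use."""
    embeddings = []
    for start in range(0, len(texts), chunk_size):
        chunk = texts[start:start + chunk_size]
        response = requests.post(
            f"{BASE_URL}/embed?model={model}",
            json={"input": chunk},
        )
        response.raise_for_status()
        embeddings.extend(response.json()["embeddings"])
    return embeddings
```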

## License

This API uses models with different licenses:
- JobBERT v2/v3: Apache 2.0
- Jina AI: Apache 2.0
- Voyage AI: Subject to Voyage AI terms of service