# AQEA: aqea-text-embedding-3-small-29x

OpenAI `text-embedding-3-small` compressed 29x while preserving 90.6% of pairwise similarity rankings (Spearman ρ).
## Performance
| Metric | Value |
|---|---|
| Compression Ratio | 29.5x |
| Spearman ρ | 90.6% |
| Source Dimension | 1536D |
| Compressed Dimension | 52D |
| Storage Savings | 96.6% |
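The ratio and savings rows follow directly from the dimensions, assuming the same bytes per dimension before and after compression:

```python
# Derive the table's compression figures from the dimensions alone,
# assuming identical per-dimension storage before and after.
source_dim = 1536
compressed_dim = 52

ratio = source_dim / compressed_dim        # 1536 / 52
savings = 1 - compressed_dim / source_dim

print(f"Compression ratio: {ratio:.1f}x")  # 29.5x
print(f"Storage savings: {savings:.1%}")   # 96.6%
```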
## Usage
```python
from aqea import AQEACompressor

# Load the pre-trained compressor
compressor = AQEACompressor.from_pretrained("nextxag/aqea-text-embedding-3-small-29x")

# Compress embeddings (`model` is any encoder producing
# 1536-D text-embedding-3-small vectors)
embeddings = model.encode(texts)                    # 1536D
compressed = compressor.compress(embeddings)        # 52D

# Decompress for retrieval
reconstructed = compressor.decompress(compressed)   # 1536D
```
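The Spearman figure in the table can be reproduced in principle by comparing pairwise similarity rankings before and after compression. A minimal, self-contained sketch follows; a fixed random projection stands in for the actual AQEA compressor, so the resulting ρ is illustrative only, not the 90.6% reported above:

```python
# Hedged sketch of the evaluation: rank-correlate pairwise cosine
# similarities computed in 1536D against those computed in 52D.
# The random projection below is a stand-in, NOT the AQEA weights.
import numpy as np

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(64, 1536))

# Stand-in "compressor": random linear projection 1536D -> 52D.
projection = rng.normal(size=(1536, 52)) / np.sqrt(1536)
compressed = embeddings @ projection

def pairwise_cosine(x):
    """Upper-triangle pairwise cosine similarities of row vectors."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    return (x @ x.T)[np.triu_indices(len(x), k=1)]

def spearman(a, b):
    """Spearman rank correlation (no tie handling; fine for floats)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

rho = spearman(pairwise_cosine(embeddings), pairwise_cosine(compressed))
print(f"Spearman rho: {rho:.3f}")
```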
## Files

- `weights.aqwt` - Binary weights (AQEA native format)
- `config.json` - Model configuration
## How It Works

AQEA (Adaptive Quantized Embedding Architecture) uses learned linear projections with a pre-quantization rotation to compress embeddings while maximally preserving pairwise similarity rankings, as measured by Spearman correlation.
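The core compress/decompress idea can be sketched as a linear down-projection plus pseudo-inverse reconstruction. Note that the rotation and quantization stages are omitted here, and `W` below is a random stand-in, not the trained AQEA weights:

```python
# Illustrative sketch of linear-projection compression. The real AQEA
# projection is learned and preceded by a rotation; W here is random.
import numpy as np

rng = np.random.default_rng(42)
W = rng.normal(size=(1536, 52)) / np.sqrt(1536)  # stand-in projection
W_pinv = np.linalg.pinv(W)                       # reconstruction map

x = rng.normal(size=(4, 1536))           # a batch of source embeddings
compressed = x @ W                       # shape (4, 52)
reconstructed = compressed @ W_pinv      # shape (4, 1536)

print(compressed.shape, reconstructed.shape)  # (4, 52) (4, 1536)
```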
## Citation

```bibtex
@software{aqea2024,
  title  = {AQEA: Adaptive Quantized Embedding Architecture},
  author = {AQEA Team},
  year   = {2024},
  url    = {https://huggingface.co/nextxag}
}
```
## License
Apache 2.0