# BERT fine-tuned on SST-2 (Sentiment Analysis)
This model is a fine-tuned version of bert-base-uncased on the SST-2 dataset (GLUE benchmark).
## Task

Binary sentiment classification: predict whether a sentence expresses positive or negative sentiment.
## Training Details

- Base model: bert-base-uncased
- Dataset: glue/sst2
- Epochs: 3
- Learning rate: 2e-5
- Batch size: 16
- Evaluation metrics: accuracy, F1
## Metrics
| Metric | Value |
|---|---|
| Accuracy | 0.9289 |
| F1 | 0.9289 |
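For reference, accuracy and binary F1 can be computed directly from gold labels and predictions. A minimal sketch in plain Python (the toy `gold`/`pred` lists are illustrative, not SST-2 data):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the gold labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy example: 5 sentences, labels 1 = positive, 0 = negative
gold = [1, 0, 1, 1, 0]
pred = [1, 0, 0, 1, 0]
print(accuracy(gold, pred))  # 0.8
print(f1(gold, pred))        # 0.8
```

Accuracy and F1 coinciding in the table above is plausible for SST-2, whose classes are close to balanced.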
## Example Inference

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline
clf = pipeline("text-classification", model="eternalGenius/SS2-Trained_Bert-Base-Uncased")

print(clf("The movie was absolutely fantastic!"))
# [{'label': 'positive', 'score': 0.98}]
```