BERT fine-tuned on SST-2 (Sentiment Analysis)

This model is bert-base-uncased fine-tuned on SST-2 (Stanford Sentiment Treebank v2) from the GLUE benchmark.

📚 Task

Binary sentiment classification: predict whether a sentence expresses positive or negative sentiment.

🧠 Training Details

  • Base model: bert-base-uncased
  • Dataset: glue/sst2
  • Epochs: 3
  • Learning rate: 2e-5
  • Batch size: 16
  • Evaluation metrics: accuracy, F1
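
A quick sanity check on what these hyperparameters imply for the training schedule. This is a hedged sketch: the train-split size of 67,349 sentences is the standard glue/sst2 figure, not something stated in this card.

```python
import math

# Assumption: the standard glue/sst2 train split (67,349 sentences) was used.
train_examples = 67_349
batch_size = 16   # from the card
epochs = 3        # from the card

# Optimizer steps per epoch (last batch may be partial, hence ceil)
steps_per_epoch = math.ceil(train_examples / batch_size)
total_steps = steps_per_epoch * epochs

print(steps_per_epoch, total_steps)  # 4210 12630
```

So the learning rate of 2e-5 is applied over roughly 12.6k optimizer steps in total.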

📊 Metrics

Metric    Value
Accuracy  0.9289
F1        0.9289
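
For reference, this is how the two reported metrics are defined for binary classification. A minimal sketch with made-up predictions (1 = positive, 0 = negative), not the actual evaluation run:

```python
# Accuracy: fraction of predictions that match the labels.
def accuracy(labels, preds):
    return sum(l == p for l, p in zip(labels, preds)) / len(labels)

# F1: harmonic mean of precision and recall for the positive class.
def f1(labels, preds, positive=1):
    tp = sum(l == positive and p == positive for l, p in zip(labels, preds))
    fp = sum(l != positive and p == positive for l, p in zip(labels, preds))
    fn = sum(l == positive and p != positive for l, p in zip(labels, preds))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

labels = [1, 1, 0, 0, 1]  # illustrative data only
preds  = [1, 0, 0, 0, 1]
print(accuracy(labels, preds), f1(labels, preds))  # both ≈ 0.8
```

On a near-balanced dataset like SST-2, accuracy and F1 tend to be close, which is consistent with the two scores above being identical to four decimal places.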

🔎 Example Inference

from transformers import pipeline

# Load the fine-tuned checkpoint from the Hugging Face Hub
clf = pipeline("text-classification", model="eternalGenius/SS2-Trained_Bert-Base-Uncased")

print(clf("The movie was absolutely fantastic!"))
# [{'label': 'positive', 'score': 0.98}]
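
Under the hood, the pipeline's score is a softmax over the model's two output logits. A self-contained sketch of that step, with hypothetical logit values (not taken from this model):

```python
import math

def softmax(logits):
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [-1.5, 2.0]                      # hypothetical [negative, positive] logits
probs = softmax(logits)
label = "positive" if probs[1] > probs[0] else "negative"
print(label, round(probs[1], 4))  # positive 0.9707
```

The probabilities always sum to 1, and the pipeline reports the label with the larger probability along with its score.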