---
language: en
datasets:
- cnn_dailymail
tags:
- summarization
- t5
- flan-t5
- transformers
- huggingface
- fine-tuned
license: apache-2.0
model-index:
- name: FLAN-T5 Base Fine-Tuned on CNN/DailyMail
  results:
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: CNN/DailyMail
      type: cnn_dailymail
    metrics:
    - type: rouge
      value: 25.33
      name: Rouge-1
    - type: rouge
      value: 11.96
      name: Rouge-2
    - type: rouge
      value: 20.68
      name: Rouge-L
metrics:
- rouge
base_model:
- google/flan-t5-base
pipeline_tag: summarization
---
# FLAN-T5 Base Fine-Tuned on CNN/DailyMail
This model is a fine-tuned version of [`google/flan-t5-base`](https://huggingface.co/google/flan-t5-base) on the [CNN/DailyMail](https://huggingface.co/datasets/cnn_dailymail) dataset using the Hugging Face Transformers library.
## Task
**Abstractive Summarization**: Given a news article, generate a concise summary.
---
## Evaluation Results
The model was fine-tuned on 20,000 training samples and validated/tested on 2,000 samples. Evaluation was performed using ROUGE metrics:
| Metric | Score |
|-------------|--------|
| ROUGE-1 | 25.33 |
| ROUGE-2 | 11.96 |
| ROUGE-L | 20.68 |
| ROUGE-Lsum | 23.81 |
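For reference, ROUGE-1 is an F1 score over unigram overlap between the generated and reference summaries. The sketch below illustrates the core of that computation; it is a simplified illustration, not the exact implementation used by the `rouge_score` package (which additionally applies stemming and tokenization rules):

```python
# Minimal sketch of ROUGE-1 (unigram-overlap F1). Illustrative only;
# the reported scores above were computed with the standard ROUGE tooling.
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Clipped unigram overlap: each token counts at most as often as it
    # appears in the reference.
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the cat sat on the mat", "the cat lay on the mat"), 4))
```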
---
## Usage
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("AbdullahAlnemr1/flan-t5-summarizer")
tokenizer = T5Tokenizer.from_pretrained("AbdullahAlnemr1/flan-t5-summarizer")

# T5-style models expect a task prefix; use "summarize: " for summarization.
input_text = "summarize: The US president met with the Senate to discuss..."
inputs = tokenizer(input_text, return_tensors="pt", max_length=512, truncation=True)

# Beam search with early stopping tends to yield more fluent summaries.
summary_ids = model.generate(
    inputs["input_ids"],
    max_length=128,
    num_beams=4,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```