internlm-chatbode-7b
InternLM-ChatBode is a language model fine-tuned for Portuguese, built on top of the InternLM2 model. It was refined through fine-tuning on the UltraAlpaca dataset.
Main Features
- Base model: internlm/internlm2-chat-7b
- Fine-tuning dataset: UltraAlpaca
- Training: the model was produced by QLoRA fine-tuning of internlm2-chat-7b (a sketch follows this list).
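The authors' training script is not part of this card, so the block below is only a minimal sketch, assuming a peft + bitsandbytes setup, of how a QLoRA fine-tune of internlm2-chat-7b could be configured. The target module names and all hyperparameters here are assumptions, not the actual training values.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit NF4 quantization (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "internlm/internlm2-chat-7b",
    quantization_config=bnb_config,
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters. The module names follow InternLM2's attention
# and MLP layers -- an assumption, as are the rank and alpha values.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["wqkv", "wo", "w1", "w2", "w3"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()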
Usage example
The following code shows how to load and use the model:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# The repository ships custom modeling code, hence trust_remote_code=True
tokenizer = AutoTokenizer.from_pretrained("recogna-nlp/internlm-chatbode-7b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("recogna-nlp/internlm-chatbode-7b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()

# chat() returns the generated answer plus the updated conversation history
response, history = model.chat(tokenizer, "Olá", history=[])
print(response)
response, history = model.chat(tokenizer, "O que é o Teorema de Pitágoras? Me dê um exemplo", history=history)
print(response)
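Note that chat returns both the answer and the updated history; passing history back into the next call is what preserves multi-turn context across the conversation.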
Responses can also be generated as a stream using the stream_chat method:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "recogna-nlp/internlm-chatbode-7b"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = model.eval()

# stream_chat yields the partial response as it grows; print only the new part
length = 0
for response, history in model.stream_chat(tokenizer, "Olá", history=[]):
    print(response[length:], flush=True, end="")
    length = len(response)
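stream_chat is part of the repository's custom remote code. As an alternative, here is a rough sketch using the stock transformers streaming API; it assumes the tokenizer exposes a chat template via apply_chat_template, which may not hold for this checkpoint.

from threading import Thread
from transformers import TextIteratorStreamer

# Build the prompt from the chat template (assumed to be present)
messages = [{"role": "user", "content": "Olá"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# generate() runs in a background thread while we consume decoded text chunks
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
thread = Thread(target=model.generate, kwargs={"input_ids": inputs, "streamer": streamer, "max_new_tokens": 512})
thread.start()
for chunk in streamer:
    print(chunk, end="", flush=True)
thread.join()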
Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found here and on the Open Portuguese LLM Leaderboard
| Benchmark | Metric | Value |
|---|---|---|
| Average | - | 69.54 |
| ENEM Challenge (No Images) | accuracy | 63.05 |
| BLUEX (No Images) | accuracy | 51.46 |
| OAB Exams | accuracy | 42.32 |
| Assin2 RTE | f1-macro | 91.33 |
| Assin2 STS | pearson | 80.69 |
| FaQuAD NLI | f1-macro | 79.80 |
| HateBR Binary | f1-macro | 87.99 |
| PT Hate Speech Binary | f1-macro | 68.09 |
| tweetSentBR | f1-macro | 61.11 |
Citation
If you want to use Chatbode in your research, please cite it as follows:
@misc{chatbode_2024,
    author = {Gabriel Lino Garcia and Pedro Henrique Paiola and João Paulo Papa},
    title = {Chatbode},
    year = {2024},
    url = {https://huggingface.co/recogna-nlp/internlm-chatbode-7b/},
    doi = {10.57967/hf/3317},
    publisher = {Hugging Face}
}