internlm-chatbode-20b
InternLM-ChatBode is a language model fine-tuned for the Portuguese language, built on top of the InternLM2 model. It was refined through a fine-tuning process using the UltraAlpaca dataset.
Main Features
- Base model: internlm/internlm2-chat-20b
- Fine-tuning dataset: UltraAlpaca
- Training: fine-tuning of internlm2-chat-20b using QLoRA (a minimal setup sketch is shown below).
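The exact training recipe is not detailed here; the snippet below is only a minimal sketch of what a QLoRA setup for internlm2-chat-20b could look like, assuming the `peft` and `bitsandbytes` libraries. The LoRA rank, dropout, and `target_modules` names are illustrative assumptions, not the values used to train this model.

```python
# Minimal QLoRA sketch (illustrative only; hyperparameters and target_modules are assumptions,
# not the actual recipe used to train internlm-chatbode-20b).
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # QLoRA: base weights quantized to 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "internlm/internlm2-chat-20b",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

lora_config = LoraConfig(
    r=16,                                    # assumed rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["wqkv", "wo", "w1", "w2", "w3"],  # InternLM2 projection names (assumed)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()
# The training loop itself (e.g. an SFT run over the UltraAlpaca dataset) is omitted.
```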
Usage example
The following code shows how to load and use the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model (trust_remote_code is required for the InternLM2 chat code)
tokenizer = AutoTokenizer.from_pretrained("recogna-nlp/internlm-chatbode-20b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("recogna-nlp/internlm-chatbode-20b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()

# Multi-turn chat: pass the running history back into each call
response, history = model.chat(tokenizer, "Olá", history=[])
print(response)
response, history = model.chat(tokenizer, "O que é o Teorema de Pitágoras? Me dê um exemplo", history=history)
print(response)
```
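The `chat` helper comes from the model's remote code. If you prefer the standard `generate` API, something along the following lines should also work, assuming the tokenizer ships the InternLM2 chat template; this is a sketch, not an officially documented path:

```python
# Sketch: plain generate() via the chat template (assumes the tokenizer defines one).
messages = [{"role": "user", "content": "O que é o Teorema de Pitágoras? Me dê um exemplo"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=512, do_sample=True, temperature=0.8, top_p=0.9)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```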
Responses can also be generated as a stream using the `stream_chat` method:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "recogna-nlp/internlm-chatbode-20b"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = model.eval()

# Print only the newly generated text on each streamed update
length = 0
for response, history in model.stream_chat(tokenizer, "Olá", history=[]):
    print(response[length:], flush=True, end="")
    length = len(response)
```
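`stream_chat` is also part of the remote code. A roughly equivalent pattern with the standard transformers API is `TextIteratorStreamer` plus `generate` running in a background thread; the sketch below assumes the chat template is available, as in the previous example:

```python
# Sketch: token streaming with the standard transformers API instead of stream_chat.
from threading import Thread
from transformers import TextIteratorStreamer

messages = [{"role": "user", "content": "Olá"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

thread = Thread(target=model.generate, kwargs={"input_ids": inputs, "streamer": streamer, "max_new_tokens": 512})
thread.start()
for new_text in streamer:
    print(new_text, flush=True, end="")
thread.join()
```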
Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found here and on the 🚀 Open Portuguese LLM Leaderboard
| Metric | Type | Value |
|---|---|---|
| Average | - | 71.68 |
| ENEM Challenge (No Images) | accuracy | 65.78 |
| BLUEX (No Images) | accuracy | 58.69 |
| OAB Exams | accuracy | 43.33 |
| Assin2 RTE | f1-macro | 91.53 |
| Assin2 STS | pearson | 78.95 |
| FaQuAD NLI | f1-macro | 81.36 |
| HateBR Binary | f1-macro | 81.72 |
| PT Hate Speech Binary | f1-macro | 73.66 |
| tweetSentBR | f1-macro | 70.11 |