# Model Card for Llama-3-8B-Dolphin-Portuguese

Model trained on a Portuguese-translated version of the Dolphin dataset.
## Usage
```python
import transformers
import torch

model_id = "adalbertojunior/Llama-3-8B-Dolphin-Portuguese"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Portuguese prompts: "You are a pirate robot that always answers like a pirate should!" / "Who are you?"
messages = [
    {"role": "system", "content": "Você é um robô pirata que sempre responde como um pirata deveria!"},
    {"role": "user", "content": "Quem é você?"},
]

# Build the prompt string from the chat template, appending the generation prompt.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Stop on either the regular EOS token or the Llama-3 end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Print only the newly generated text, stripping the echoed prompt.
print(outputs[0]["generated_text"][len(prompt):])
```
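If you prefer to call `generate` directly rather than going through the pipeline, the same chat template can be applied with `AutoTokenizer`/`AutoModelForCausalLM`. A minimal sketch, following the standard Llama-3 usage pattern (the example prompt here is illustrative, not from the original card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "adalbertojunior/Llama-3-8B-Dolphin-Portuguese"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Illustrative prompt: "Explain what machine learning is in one sentence."
messages = [
    {"role": "user", "content": "Explique o que é aprendizado de máquina em uma frase."},
]

# Tokenize the chat template directly and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=[
        tokenizer.eos_token_id,
        tokenizer.convert_tokens_to_ids("<|eot_id|>"),
    ],
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

# Decode only the tokens generated after the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```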
## Open Portuguese LLM Leaderboard Evaluation Results

Detailed results can be found on the Open Portuguese LLM Leaderboard.
| Task | Metric | Value |
|---|---|---|
| Average | | 70.0 |
| ENEM Challenge (No Images) | accuracy | 66.83 |
| BLUEX (No Images) | accuracy | 53.69 |
| OAB Exams | accuracy | 45.24 |
| Assin2 RTE | f1-macro | 92.84 |
| Assin2 STS | pearson | 75.92 |
| FaQuAD NLI | f1-macro | 79.67 |
| HateBR Binary | f1-macro | 88.04 |
| PT Hate Speech Binary | f1-macro | 58.34 |
| tweetSentBR | f1-macro | 69.40 |
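The Average row appears to be the simple arithmetic mean of the nine task scores; a quick sanity check in Python (values copied from the table above):

```python
# Per-task scores copied from the leaderboard table above.
scores = {
    "ENEM Challenge (No Images)": 66.83,
    "BLUEX (No Images)": 53.69,
    "OAB Exams": 45.24,
    "Assin2 RTE": 92.84,
    "Assin2 STS": 75.92,
    "FaQuAD NLI": 79.67,
    "HateBR Binary": 88.04,
    "PT Hate Speech Binary": 58.34,
    "tweetSentBR": 69.40,
}

average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # 70.00, matching the reported Average of 70.0
```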