
Model Card for Llama-3-8B-Dolphin-Portuguese-v0.3

Model trained on a Portuguese translation of the Dolphin dataset.

Usage

import transformers
import torch

model_id = "adalbertojunior/Llama-3-8B-Dolphin-Portuguese-v0.3"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "Você é um robô pirata que sempre responde como um pirata deveria!"},
    {"role": "user", "content": "Quem é você?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
        messages, 
        tokenize=False, 
        add_generation_prompt=True
)

# Stop generation on either the default EOS token or Llama 3's end-of-turn token
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
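To see why `<|eot_id|>` is listed among the terminators, it helps to look at the prompt string that `apply_chat_template` renders. The sketch below hand-builds a prompt in the standard Llama 3 chat format (header tokens plus `<|eot_id|>` after each turn); the real template ships with the tokenizer, so this is only an illustration of its shape:

```python
# Illustrative sketch of the Llama 3 chat format; the tokenizer's own
# chat template is authoritative, this only shows the token layout.
def render_llama3_prompt(messages):
    parts = ["<|begin_of_text|>"]
    for m in messages:
        # every turn is wrapped in role headers and closed with <|eot_id|>
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # add_generation_prompt=True appends an open assistant header,
    # so the model's reply naturally ends with its own <|eot_id|>
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "Você é um robô pirata!"},
    {"role": "user", "content": "Quem é você?"},
]
print(render_llama3_prompt(messages))
```

Because the model closes each assistant turn with `<|eot_id|>` rather than the plain EOS token, passing both IDs as `eos_token_id` makes generation stop at the end of the turn instead of running on.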

Open Portuguese LLM Leaderboard Evaluation Results

Detailed results can be found here and on the 🚀 Open Portuguese LLM Leaderboard.

| Metric                     | Value |
|----------------------------|------:|
| Average                    | 73.15 |
| ENEM Challenge (No Images) | 68.86 |
| BLUEX (No Images)          | 57.86 |
| OAB Exams                  | 61.91 |
| Assin2 RTE                 | 93.05 |
| Assin2 STS                 | 76.48 |
| FaQuAD NLI                 | 76.78 |
| HateBR Binary              | 83.25 |
| PT Hate Speech Binary      | 68.85 |
| tweetSentBR                | 71.30 |
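The reported Average is consistent with a plain arithmetic mean of the nine task scores, which is easy to verify:

```python
# Sanity check: the leaderboard Average equals the mean of the nine task scores.
scores = {
    "ENEM Challenge (No Images)": 68.86,
    "BLUEX (No Images)": 57.86,
    "OAB Exams": 61.91,
    "Assin2 RTE": 93.05,
    "Assin2 STS": 76.48,
    "FaQuAD NLI": 76.78,
    "HateBR Binary": 83.25,
    "PT Hate Speech Binary": 68.85,
    "tweetSentBR": 71.30,
}
average = round(sum(scores.values()) / len(scores), 2)
print(average)  # 73.15
```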
Model size: 8.03B params (Safetensors, BF16)
