Model Description
Consultation (Q&A) model for stunting in children.
- Developed by: Tanwir
- Language: Indonesian
Training
Training results:
- epoch: 2.9987
- num_input_tokens_seen: 1,900,976
- total_flos: 79,944,066 GF
- train_loss: 0.872
- train_runtime: 1:06:36.18
- train_samples_per_second: 5.737
- train_steps_per_second: 0.358
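As an illustrative consistency check (not part of the original training log), the runtime and per-second rates imply roughly 1,400 optimizer steps and about 23,000 training samples over the ~3 epochs:

    # Illustrative arithmetic derived from the logged throughput; not from the original log
    runtime_s = 1 * 3600 + 6 * 60 + 36.18      # train_runtime 1:06:36.18 in seconds
    steps = runtime_s * 0.358                  # train_steps_per_second
    samples = runtime_s * 5.737                # train_samples_per_second
    print(round(steps), round(samples), round(samples / 2.9987))  # ~1431 steps, ~22926 samples, ~7645 samples/epoch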
Evaluation
{
"predict_bleu-4": 46.238530502486256,
"predict_model_preparation_time": 0.0054,
"predict_rouge-1": 50.236485540434444,
"predict_rouge-2": 33.20428471604292,
"predict_rouge-l": 46.93391739073541,
"predict_runtime": 10532.8745,
"predict_samples_per_second": 0.726,
"predict_steps_per_second": 0.363
}
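The card does not state which toolkit produced these scores. As a hedged illustration, BLEU-4 and ROUGE figures on the 0-100 scale shown above can be computed with the Hugging Face `evaluate` library; the `preds`/`refs` lists below are placeholders, not the actual test set:

    # Hedged sketch: computing BLEU-4 / ROUGE-style scores with the `evaluate` library.
    # The model card does not specify its exact evaluation pipeline or test data.
    import evaluate

    preds = ["Stunting adalah gagal tumbuh pada anak akibat kekurangan gizi kronis."]  # placeholder predictions
    refs  = ["Stunting adalah kondisi gagal tumbuh akibat kekurangan gizi kronis."]    # placeholder references

    bleu = evaluate.load("sacrebleu")   # corpus BLEU, already reported on a 0-100 scale
    rouge = evaluate.load("rouge")      # ROUGE-1/2/L, reported on a 0-1 scale

    bleu_score = bleu.compute(predictions=preds, references=[[r] for r in refs])["score"]
    rouge_scores = rouge.compute(predictions=preds, references=refs)

    print(f"BLEU-4 : {bleu_score:.2f}")
    print(f"ROUGE-1: {rouge_scores['rouge1'] * 100:.2f}")  # x100 to match the card's scale
    print(f"ROUGE-L: {rouge_scores['rougeL'] * 100:.2f}")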
Model configuration
LlamaConfig {
"architectures": [
"LlamaForCausalLM"
],
"attention_bias": false,
"attention_dropout": 0.0,
"bos_token_id": 128000,
"eos_token_id": 128009,
"head_dim": 128,
"hidden_act": "silu",
"hidden_size": 4096,
"initializer_range": 0.02,
"intermediate_size": 14336,
"max_position_embeddings": 8192,
"mlp_bias": false,
"model_type": "llama",
"num_attention_heads": 32,
"num_hidden_layers": 32,
"num_key_value_heads": 8,
"pretraining_tp": 1,
"rms_norm_eps": 1e-05,
"rope_scaling": null,
"rope_theta": 500000.0,
"tie_word_embeddings": false,
"torch_dtype": "bfloat16",
"transformers_version": "4.51.3",
"use_cache": true,
"vocab_size": 128256
}
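This configuration can be read back from the Hub without downloading the weights; a minimal sketch using `AutoConfig` from transformers:

    # Load only the configuration (no model weights) and inspect a few of the fields above
    from transformers import AutoConfig

    config = AutoConfig.from_pretrained("kodetr/stunting-qa-v5")
    print(config.model_type)         # "llama"
    print(config.hidden_size)        # 4096
    print(config.num_hidden_layers)  # 32
    print(config.vocab_size)         # 128256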
Use with transformers
Make sure to update your transformers installation via `pip install --upgrade transformers`.
import torch
from transformers import pipeline

model_id = "kodetr/stunting-qa-v5"

# Load the fine-tuned model as a text-generation pipeline in bfloat16
pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Chat-style prompt (Indonesian): ask about the first 1000 days of life
messages = [
    {"role": "system", "content": "Jelaskan definisi 1000 hari pertama kehidupan."},
    {"role": "user", "content": "Apa itu 1000 hari pertama kehidupan?"},
]

outputs = pipe(
    messages,
    max_new_tokens=256,
)

# Print only the assistant's reply (the last message in the generated conversation)
print(outputs[0]["generated_text"][-1])
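For more control over prompting and decoding, the same checkpoint can also be loaded directly with `AutoTokenizer`/`AutoModelForCausalLM`; the sketch below uses the tokenizer's chat template, with illustrative generation settings not prescribed by the card:

    # Alternative loading path; generation parameters here are illustrative
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "kodetr/stunting-qa-v5"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,
        device_map="auto",
    )

    messages = [
        {"role": "user", "content": "Apa itu 1000 hari pertama kehidupan?"},
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    outputs = model.generate(input_ids, max_new_tokens=256)
    print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))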
Model tree
- Base model: meta-llama/Llama-3.1-8B
- Fine-tuned from: meta-llama/Llama-3.1-8B-Instruct