
Llama-3-8b-tagalog-v1:

USAGE

This is intended primarily as a chat model.

Use "Human" and "Assistant" and prompt with Tagalog:

"\nHuman: INPUT\nAssistant:"

HYPERPARAMS

  • Trained for 1 epoch
  • rank: 32
  • lora alpha: 32
  • lr: 2e-4
  • batch size: 2
  • grad steps: 4

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library. A hedged sketch of a comparable LoRA setup with peft and TRL follows; the base checkpoint, dataset file, and other unlisted settings are assumptions, and the exact SFTTrainer arguments vary across TRL versions.
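from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

base_id = "meta-llama/Meta-Llama-3-8B"   # assumed base checkpoint
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Hypothetical dataset: one "\nHuman: ...\nAssistant: ..." string per row, in a "text" column.
dataset = load_dataset("json", data_files="tagalog_chat.jsonl", split="train")

peft_config = LoraConfig(
    r=32,               # rank: 32
    lora_alpha=32,      # lora alpha: 32
    lora_dropout=0.0,   # assumption; not listed on this card
    bias="none",
    task_type="CAUSAL_LM",
)

args = TrainingArguments(
    output_dir="llama3-tagalog-lora",
    num_train_epochs=1,                # 1 epoch
    learning_rate=2e-4,                # lr: 2e-4
    per_device_train_batch_size=2,     # batch size: 2
    gradient_accumulation_steps=4,     # grad steps: 4
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=args,
    train_dataset=dataset,
    dataset_text_field="text",         # argument name depends on the TRL version
    peft_config=peft_config,
)
trainer.train()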

WARNINGS AND DISCLAIMERS

Note that the model may switch back to English (while still understanding Tagalog inputs) or produce clunky results.

Finally, this model is not guaranteed to produce aligned or safe outputs, nor is it intended for production use. Use at your own risk!
