

# LLaMA 3 Fine-Tuned Model

This is a fine-tuned version of the LLaMA 3 model. Below is an example of how to use it with the `transformers` library:

## Example Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Pection/llama3-finetune")
model = AutoModelForCausalLM.from_pretrained("Pection/llama3-finetune")

# Generate response
prompt = "Where is Bangkok?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)

print(response)
```
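Note that `model.generate` returns the prompt token ids followed by the newly generated ids, so decoding `outputs[0]` directly repeats the prompt in the response. A small helper (the name `generate_reply` is illustrative, not part of this repository) can slice off the prompt before decoding:

```python
def generate_reply(model, tokenizer, prompt, max_new_tokens=50):
    """Generate a reply and return only the newly generated text.

    Works with any causal LM / tokenizer pair following the
    Hugging Face `generate` / `decode` interface; the helper name
    and signature are illustrative, not part of this repository.
    """
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # outputs[0] holds the prompt ids followed by the new ids;
    # slice off the prompt so only the continuation is decoded.
    prompt_len = inputs["input_ids"].shape[-1]
    new_tokens = outputs[0][prompt_len:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```

For example, `generate_reply(model, tokenizer, "Where is Bangkok?")` would return just the model's answer without echoing the question.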
Model size: 1.24B params · Tensor type: BF16 · Format: Safetensors