# LLaMA 3 Fine-Tuned Model
This is a fine-tuned version of the LLaMA 3 model. Below is an example of how to use it with the `transformers` library:
## Example Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Pection/llama3-finetune")
model = AutoModelForCausalLM.from_pretrained("Pection/llama3-finetune")

# Tokenize the prompt, generate a response, and decode it back to text
prompt = "Where is Bangkok?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
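
Alternatively, the same checkpoint can typically be loaded through the `transformers` `pipeline` API, which handles tokenization and decoding in one step. This is a minimal sketch, not part of the original card; the sampling settings (`do_sample`, `temperature`) are illustrative defaults, not the author's recommended values:

```python
from transformers import pipeline

# Build a text-generation pipeline around the same checkpoint
generator = pipeline("text-generation", model="Pection/llama3-finetune")

# Sampling settings here are illustrative, not tuned for this model
result = generator(
    "Where is Bangkok?",
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
)
print(result[0]["generated_text"])
```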