πŸš€ RAG-Instruct Llama-3.2-3B (Fine-tuned)


πŸ“Œ Model Overview

This model is Llama-3.2-3B (3.21B parameters, BF16) fine-tuned on the RAG-INSTRUCT-1.1 dataset using Unsloth.
It is optimized for instruction-following with reduced hallucination: given retrieved context it keeps answers factual and concise, and it declines to answer when the context does not cover the question.

  • Instruction-Tuned: Follows structured queries effectively.
  • Hallucination Reduction: Avoids fabricating information when context is missing.
  • Optimized with Unsloth: Fast inference with GGUF quantization.

πŸ“Œ Example Usage (Transformers)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "skshmjn/Llama-3.2-3B-RAG-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Load in the checkpoint's native BF16 precision rather than upcasting to FP32.
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

prompt = """You are an assistant for question-answering tasks. 
Use the following pieces of retrieved context to answer the question. 
If you don't know the answer, just say that you don't know. 
Use three sentences maximum and keep the answer concise.

Question: Who discovered the first exoplanet?  
Context: [No relevant context available]  
Answer:"""

inputs = tokenizer(prompt, return_tensors="pt")
# Cap only the newly generated tokens; max_length would also count the prompt,
# which is already close to 100 tokens, and truncate the answer.
output = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(output[0], skip_special_tokens=True)

print(response)
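
The prompt above deliberately supplies no context to show the refusal behavior. With a retrieved passage filled in, the same template grounds the answer; the context string below is an illustrative stand-in for whatever your retriever returns:

# Illustrative retrieved passage; in a real RAG pipeline this comes from
# your retriever (e.g. a vector-store query).
context = (
    "In 1995, Michel Mayor and Didier Queloz announced the discovery of "
    "51 Pegasi b, the first exoplanet found orbiting a Sun-like star."
)

prompt = f"""You are an assistant for question-answering tasks.
Use the following pieces of retrieved context to answer the question.
If you don't know the answer, just say that you don't know.
Use three sentences maximum and keep the answer concise.

Question: Who discovered the first exoplanet?
Context: {context}
Answer:"""

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))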