# Llama-3.2-3B-Instruct-function-calling-gorilla-style-5epochs

## Model Description
This is a fine-tuned version of unsloth/Llama-3.2-3B-Instruct for function calling, trained on a custom dataset in a Gorilla-style format. Given a user query and a set of available function definitions, the model generates the corresponding function calls.
## Training Parameters
- Base Model: unsloth/Llama-3.2-3B-Instruct
- Dataset: Custom Function Calling Dataset (Gorilla-style format)
- Training Type: Supervised fine-tuning with LoRA (see the configuration sketch after this list)
- Epochs: 5
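
The exact LoRA hyperparameters are not published in this card, so the snippet below is only an illustrative sketch of how such an adapter setup is typically defined with PEFT; the rank, alpha, dropout, and target modules shown are assumptions, not the values used for this checkpoint.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Hypothetical LoRA hyperparameters; the actual values used for this
# checkpoint are not documented here.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Wrap the base model with the LoRA adapters before supervised fine-tuning.
base_model = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.2-3B-Instruct")
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```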
## Dataset Format
The model was trained on a custom dataset with the following structure:
```json
{
  "Instruction": "User instruction/query",
  "Functions": ["Available function definitions"],
  "Output": ["Model's response with function calls"]
}
```
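
For illustration, a single training example might look like the following; the function name, schema, and call shown here are hypothetical and are not taken from the actual dataset.

```json
{
  "Instruction": "What is the weather in Paris right now?",
  "Functions": [
    "get_weather(city: str) -> dict: Returns the current weather for the given city."
  ],
  "Output": ["get_weather(city=\"Paris\")"]
}
```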
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("BluebrainAI/Llama-3.2-3B-Instruct-function-calling-gorilla-style-5epochs")
tokenizer = AutoTokenizer.from_pretrained("BluebrainAI/Llama-3.2-3B-Instruct-function-calling-gorilla-style-5epochs")

# Example usage
instruction = "Your instruction here"
chat = [
    {"role": "user", "content": instruction}
]

# Apply the chat template and append the assistant header so generation
# starts with the model's response.
input_text = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(input_text, return_tensors="pt", truncation=True).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
```
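
Because each training example pairs the instruction with the available function definitions, you will likely get better results by including those definitions in the prompt. The exact prompt layout used during training is not documented here, so treat the following as a sketch; the function schema is hypothetical.

```python
import json

# Hypothetical function definition; replace with your own schema.
functions = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {"city": {"type": "string"}}
}]

# Combine the available functions with the user query, mirroring the
# Instruction/Functions structure of the training data (assumed layout).
instruction = (
    "Functions: " + json.dumps(functions) + "\n"
    "Query: What is the weather in Paris?"
)
chat = [{"role": "user", "content": instruction}]
```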
## License
This model inherits the license of the base model unsloth/Llama-3.2-3B-Instruct.