# Function Calling Fine-tuned Gemma Model

This is a fine-tuned version of google/gemma-2-2b-it, optimized for function calling with an explicit "thinking" step before each call.

## Model Details
- Base model: google/gemma-2-2b-it
- Fine-tuned with LoRA for function calling capability
- Includes "thinking" step before function calls

## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel, PeftConfig

model_name = "sethderrick/gemma-2-2B-it-thinking-function_calling-V0"

# Load the model
config = PeftConfig.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, model_name)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Use for function calling: format a conversation with the chat template
# and generate (the exact prompt/output format depends on the fine-tuning data)
messages = [{"role": "user", "content": "What's the weather in Paris?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
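The exact output format depends on the dataset used for fine-tuning and is not documented here. As a hypothetical sketch, assuming the model emits a `<think>...</think>` block followed by a `<tool_call>...</tool_call>` JSON payload (a common convention for thinking-style function-calling models, not confirmed for this checkpoint), the response could be parsed like this:

```python
import json
import re


def parse_response(text):
    """Split a model response into its thinking trace and function call.

    Assumes <think>...</think> and <tool_call>...</tool_call> tags, which
    are a common convention but an assumption for this particular model.
    """
    think_match = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    call_match = re.search(r"<tool_call>(.*?)</tool_call>", text, re.DOTALL)
    thinking = think_match.group(1).strip() if think_match else None
    call = json.loads(call_match.group(1)) if call_match else None
    return thinking, call


# Example with a made-up response string (not actual model output)
response = (
    "<think>The user wants the weather, so I should call get_weather.</think>"
    '<tool_call>{"name": "get_weather", "arguments": {"city": "Paris"}}</tool_call>'
)
thinking, call = parse_response(response)
```

If the model's actual tags differ, only the two regular expressions need to change.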