---
language: en
license: apache-2.0
library_name: peft
tags:
  - gemma
  - peft
  - function-calling
  - lora
  - thinking
pipeline_tag: text-generation
model-index:
  - name: gemma-2-2B-it-thinking-function_calling-V0
    results: []
---

# Function Calling Fine-tuned Gemma Model

This is a LoRA fine-tune of `google/gemma-2-2b-it`, optimized to produce an explicit "thinking" step before emitting function calls.

## Model Details
- Base model: `google/gemma-2-2b-it`
- Fine-tuned with LoRA adapters (PEFT) for function calling
- Trained to emit a "thinking" step before each function call

## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel, PeftConfig

model_name = "sethderrick/gemma-2-2B-it-thinking-function_calling-V0"

# Load the model
config = PeftConfig.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(model, model_name)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)
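# Note: if the adapter repo does not include tokenizer files, load the
# tokenizer from the base model instead:
# tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)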

# Use for function calling
# ...
```
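
A minimal generation sketch, assuming the tokenizer ships the Gemma chat template; the exact prompt layout and tool schema are assumptions and should be adjusted to match the format used during fine-tuning.

```python
# Hypothetical prompt: the real tool/function schema depends on the training data.
messages = [
    {"role": "user", "content": "What's the weather like in Paris today?"},
]

# Build the prompt with the chat template and tokenize it
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
)

# Generate; the output should contain the "thinking" step followed by a function call
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```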