
Fine-Tuned GEMMA Model for Chatbot

This repository hosts a fine-tuned version of the GEMMA 1.1-2B model, adapted for a customer support chatbot use case.

Model Description

The GEMMA 1.1-2B model has been fine-tuned on the Bitext Customer Support Dataset to answer customer support queries. Fine-tuning adjusted the model's weights on question-answer pairs, which should enable it to generate more accurate and contextually relevant responses in a conversational setting.
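
As a point of reference, question-answer pairs like these are typically rendered through Gemma's chat template before training. The sketch below is illustrative only and is not the original training script: the dataset id and the column names ("instruction", "response") are assumptions about the Bitext dataset's schema.

from datasets import load_dataset
from transformers import AutoTokenizer

# Assumed dataset id and column names; adjust if the actual schema differs.
dataset = load_dataset(
    "bitext/Bitext-customer-support-llm-chatbot-training-dataset", split="train"
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-1.1-2b-it")

def to_chat_text(example):
    # Render one question-answer pair with Gemma's <start_of_turn> markers.
    messages = [
        {"role": "user", "content": example["instruction"]},
        {"role": "assistant", "content": example["response"]},
    ]
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

formatted = dataset.map(to_chat_text)
print(formatted[0]["text"])

Each formatted "text" string can then be fed to a standard supervised fine-tuning loop.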

How to Use

You can use this model directly with a pipeline for text generation:

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("rootsec1/gemma-2B-it-customer-support")
model = AutoModelForCausalLM.from_pretrained("rootsec1/gemma-2B-it-customer-support")

# Build a text-generation pipeline around the fine-tuned model
chatbot = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Ask a customer support question and print the generated reply
response = chatbot("How can I cancel my order?", max_new_tokens=128)
print(response[0]["generated_text"])
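
Because this is an instruction-tuned Gemma variant, you may get better results by wrapping the query in the model's chat template rather than passing a raw string. A minimal sketch, assuming the repository id shown on this card and illustrative generation settings:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rootsec1/gemma-2B-it-customer-support"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Wrap the question in Gemma's chat format and append the generation prompt.
messages = [{"role": "user", "content": "How can I cancel my order?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the tokens generated after the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))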
Model size: 2.51B parameters
Tensor type: F32 (Safetensors)

Dataset used to train rootsec1/gemma-2B-it-customer-support: the Bitext Customer Support Dataset.