---
library_name: transformers
datasets:
  - bitext/Bitext-customer-support-llm-chatbot-training-dataset
---

# Fine-Tuned GEMMA Model for Chatbot

This repository hosts a fine-tuned version of the GEMMA 1.1-2B model, adapted for a customer support chatbot use case.

## Model Description

The GEMMA 1.1-2B model was fine-tuned on the Bitext Customer Support Dataset to answer customer support queries. Fine-tuning adjusted the model's weights on question-and-answer pairs, which should enable it to generate more accurate and contextually relevant responses in a conversational setting.
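Gemma's instruction-tuned checkpoints expect user turns wrapped in special turn markers. As a minimal sketch (assuming this fine-tune inherits the standard Gemma chat format; if it was trained on raw Q&A text instead, plain prompts may work better), a query could be formatted like this before generation:

```python
def build_gemma_prompt(user_message: str) -> str:
    """Wrap a user query in Gemma's chat turn markers.

    Assumption: the fine-tuned model follows the instruction-tuned
    Gemma format (<start_of_turn>/<end_of_turn> tokens).
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("How can I cancel my order?")
print(prompt)
```

In practice, `tokenizer.apply_chat_template` achieves the same thing when the tokenizer ships with a chat template; the helper above just makes the expected layout explicit.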

## How to Use

You can use this model directly with a pipeline for text generation:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Load the fine-tuned model and its tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained("your-username/your-model-name")
model = AutoModelForCausalLM.from_pretrained("your-username/your-model-name")

# Build a text-generation pipeline around the model
chatbot = pipeline("text-generation", model=model, tokenizer=tokenizer)

# max_new_tokens bounds the length of the generated reply
response = chatbot("How can I cancel my order?", max_new_tokens=128)
print(response[0]["generated_text"])
```