---
language:
- en
- fr
max_length: 512
tags:
- text-generation-inference
- customer support
widget:
- text: 'answer: how could I track the compensation?'
  example_title: Ex1
license: apache-2.0
---

## Model Summary

The fine-tuned chatbot model is based on [t5-small](https://huggingface.co/t5-small) and has been tailored specifically for customer support use cases. It was fine-tuned on the [Bitext Customer Support LLM Chatbot Training Dataset](https://huggingface.co/datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset/viewer/default/train).

### Model Details

- **Base Model:** [t5-small](https://huggingface.co/t5-small)
- **Fine-tuning Data:** [Bitext Customer Support LLM Chatbot Training Dataset](https://huggingface.co/datasets/bitext/Bitext-customer-support-llm-chatbot-training-dataset/viewer/default/train)
- **train_loss:** 1.1188
- **val_loss:** 0.9132
- **bleu_score:** 0.9233

### Usage

To use the fine-tuned chatbot model, you can leverage the Hugging Face Transformers library. Here's a basic example using the high-level `pipeline` helper:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text2text-generation", model="mrSoul7766/CUSsupport-chat-t5-small")

# Example user query; inputs are prefixed with "answer: "
user_query = "How could I track the compensation?"

# Generate a response
answer = pipe(f"answer: {user_query}", max_length=512)

# Print the generated answer
print(answer[0]["generated_text"])
```

Sample output:

```
I'm on it! I'm here to assist you in tracking the compensation. To track the compensation, you can visit our website and navigate to the "Refunds" or "Refunds" section. There, you will find detailed information about the compensation you are entitled to. If you have any other questions or need further assistance, please don't hesitate to let me know. I'm here to help!
```

### Or load the model directly

```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mrSoul7766/CUSsupport-chat-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("mrSoul7766/CUSsupport-chat-t5-small")

# Set the maximum generation length
max_length = 512

# Generate a response with the question as input
input_ids = tokenizer.encode("I am waiting for a refund of $2?", return_tensors="pt")
output_ids = model.generate(input_ids, max_length=max_length)

# Decode the response
response = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(response)
```

Sample output:

```
I'm on it! I completely understand your anticipation for a refund of $2. Rest assured, I'm here to assist you every step of the way. To get started, could you please provide me with more details about the specific situation? This will enable me to provide you with the most accurate and up-to-date information regarding your refund. Your satisfaction is our top priority, and we appreciate your patience as we work towards resolving this matter promptly.
```
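The widget configuration and the pipeline example both apply an `answer: ` task prefix to the user query before generation. A minimal sketch of a helper that applies this convention consistently (the `format_query` name is an assumption for illustration, not part of the model's API):

```python
def format_query(user_query: str) -> str:
    """Prefix a customer query with the "answer: " task prefix
    shown in the model card's widget and usage examples.

    Note: this helper is illustrative; the examples above simply
    apply the prefix inline with an f-string.
    """
    return f"answer: {user_query.strip()}"


print(format_query("How could I track the compensation?"))
# answer: How could I track the compensation?
```

Prefixing inputs this way matches how T5-style models are typically conditioned on a task during fine-tuning, so omitting it may degrade response quality.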