# Falcon-7B Fine-Tuned Chatbot Model

This repository contains a fine-tuned Falcon-7B model for a chatbot application. The model was fine-tuned with a parameter-efficient fine-tuning (PEFT) method to provide robust responses for e-commerce customer support. It guides buyers through product selection, recommends sizes, checks product stock, suggests similar products, and presents reviews and social media video links.
## Model Details

- Base Model: Falcon-7B (`tiiuae/falcon-7b`)
- Fine-Tuning Method: Parameter-Efficient Fine-Tuning (PEFT)
- Training Data: A custom dataset of skincare e-commerce dialogues (`UrFavB0i/skincare-ecommerce-FAQ`)
## Features
- 24/7 customer support
- Product selection guidance
- Size recommendations
- Product stock checks
- Similar product suggestions
- Reviews and social media video link presentation
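As an illustration of how an application might dispatch customer queries to these support features, here is a minimal keyword-based intent router. The intent names and keywords are assumptions for the sketch, not part of the released model:

```python
# Hypothetical intent router mapping a customer query to one of the
# chatbot's support features. Keywords and intent names are illustrative.
INTENT_KEYWORDS = {
    "size_recommendation": ["size", "fit", "measurement"],
    "stock_check": ["stock", "available", "in store"],
    "similar_products": ["similar", "alternative", "like this"],
    "reviews": ["review", "rating", "video"],
    "product_selection": ["recommend", "which product", "best for"],
}

def route_intent(query: str) -> str:
    """Return the first matching intent, or a generic fallback."""
    q = query.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in q for kw in keywords):
            return intent
    return "general_support"

print(route_intent("Is the 50ml moisturizer in stock?"))  # stock_check
```

In practice such a router could select a feature-specific prompt template before calling the model, rather than sending every query through the same generic prompt.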
## Usage

### Installation

To use the model, install the necessary dependencies. Make sure you have Python 3.7+ and pip installed:

```bash
pip install torch transformers peft
```
### Loading the Model

You can load the fine-tuned model with the `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "UrFavB0i/Fine-tuned-Falcon7B-skincare-chatbot"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example usage
inputs = tokenizer("Hello, how can I assist you today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
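Because a causal language model's `generate` output begins with the tokens of the prompt itself, a small helper to separate the reply from the echoed prompt can be useful. This is a generic sketch that works on decoded strings, so no model is required to try it:

```python
def extract_reply(decoded: str, prompt: str) -> str:
    """Strip the echoed prompt from a decoded causal-LM generation.

    Causal LMs return the input prompt followed by the continuation,
    so the reply is whatever comes after the prompt text.
    """
    if decoded.startswith(prompt):
        return decoded[len(prompt):].strip()
    return decoded.strip()

# Example with a mock decoded string (no model download needed):
decoded = "Hello, how can I assist you today? You can ask me about sizes or stock."
print(extract_reply(decoded, "Hello, how can I assist you today?"))
# You can ask me about sizes or stock.
```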
## Training Details

The model was fine-tuned using PEFT on a dataset specifically curated for e-commerce scenarios. The training process involved:

- Data Preparation: Gathering and preprocessing e-commerce-related dialogues.
- Fine-Tuning: Training the base model with PEFT to adapt it to the specific needs of the e-commerce domain.
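The card does not document the exact PEFT configuration used. As a point of reference, a representative LoRA setup for Falcon-7B with the `peft` library might look like the following; the rank, alpha, and dropout values are assumptions, not the values used to train this model:

```python
from peft import LoraConfig

# Representative LoRA hyperparameters -- the actual values used for this
# model are not documented in the card, so treat these as placeholders.
lora_config = LoraConfig(
    r=16,                                # adapter rank (assumed)
    lora_alpha=32,                       # scaling factor (assumed)
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# The base model is then wrapped with the adapters before training:
# model = get_peft_model(base_model, lora_config)
```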
## Evaluation

The fine-tuned model was evaluated on its ability to handle a range of e-commerce-related queries, providing accurate and contextually appropriate responses.
## Limitations

While the model performs well in many scenarios, it may not handle extremely rare or out-of-domain queries perfectly. Continued training and updating with more data can further improve its performance.
## Contributing

We welcome contributions to improve this model. If you have any suggestions or find any issues, please open an issue or a pull request.
## License

This project is licensed under the Apache 2.0 License. See the LICENSE file for more details.
## Acknowledgements

Special thanks to the Falcon team and the creators of the `tiiuae/falcon-7b` model for providing the base model and the tools necessary for fine-tuning.