|
--- |
|
base_model: roberta-base |
|
license: mit |
|
metrics: |
|
- accuracy |
|
- f1 |
|
tags: |
|
- generated_from_trainer |
|
- unsloth |
|
model-index: |
|
- name: OpenSesame |
|
results: [] |
|
--- |
|
|
|
|
|
|
# How to interpret the output
|
|
|
LABEL 0 = The user has no buying intention.
|
|
|
LABEL 1 = The user has a buying intention.
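
For a quick check of how these labels surface in practice, the `transformers` pipeline API can be used directly (a minimal sketch; the example sentence and score are illustrative, and the exact label strings returned, e.g. `LABEL_0`/`LABEL_1`, depend on the model's configuration):

```python
from transformers import pipeline

# Load the fine-tuned classifier straight from the Hugging Face Hub
classifier = pipeline("text-classification", model="PiGrieco/OpenSesame")

print(classifier("I'm thinking about ordering a new laptop this week."))
# Expected output shape (score illustrative):
# [{'label': 'LABEL_1', 'score': 0.98}]  -> buying intention detected
```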
|
|
|
# Word Of Prompt |
|
|
|
**Overview:** |
|
"Word Of Prompt" redefines advertising by integrating it seamlessly into natural language conversations. |
|
Utilizing fine-tuned RoBERTa and Llama3, "Word Of Prompt" detects user intent to purchase and responds with contextually relevant product suggestions as if coming from a trusted friend. |
|
|
|
**Core Features:** |
|
- **Intent Recognition:** Harnesses a fine-tuned RoBERTa model to accurately interpret buying signals within textual conversations: the model is OpenSesame and you can find it [here](https://huggingface.co/PiGrieco/OpenSesame/). |
|
- **Intelligent Response Generation:** Employs an Agentic Retrieval-Augmented Generation (RAG) mechanism built on Llama3, dynamically setting and manipulating API parameters to fetch the most suitable products: the technology is called "OpenTheVault" and you can find it [here](https://colab.research.google.com/drive/1ydT7cvNn0FhnAj8ZhPojToOBsiC5Djom?usp=sharing). |
|
- **Seamless Integration:** Designed to be integrated easily into any existing LLM or AI agent, enhancing their functionality with minimal setup: find the SDK [here](https://github.com/PiGrieco/WordOfPrompt-Integration). |
|
|
|
**Important:** OpenTheVault and the SDK will be uploaded soon! Until then, the sketch below illustrates how the pieces are intended to compose.
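
Since only OpenSesame is released today, this is a hypothetical sketch under stated assumptions: `fetch_products` and `generate_reply` are placeholder stand-ins for the unreleased OpenTheVault retrieval step and the Llama3-based generation step, not part of any published API:

```python
from typing import List, Optional

from transformers import pipeline

# Step 1 (released): intent recognition with OpenSesame
intent_classifier = pipeline("text-classification", model="PiGrieco/OpenSesame")

def fetch_products(query: str) -> List[str]:
    # Placeholder for the unreleased OpenTheVault retrieval step; the real
    # system would set API parameters dynamically and query a product API.
    return [f"example product for {query!r}"]

def generate_reply(message: str, products: Optional[List[str]]) -> str:
    # Placeholder for the Llama3-based response generation.
    if products:
        return "You might like: " + ", ".join(products)
    return "Happy to help with anything else!"

def handle_message(message: str) -> str:
    """Trigger product suggestions only when buying intent is detected."""
    result = intent_classifier(message)[0]
    if result["label"] == "LABEL_1":  # LABEL 1 = buying intention
        return generate_reply(message, fetch_products(message))
    return generate_reply(message, None)

print(handle_message("I'm looking for a budget-friendly espresso machine."))
```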
|
|
|
### Vision & Mission |
|
|
|
**Vision:** |
|
To transform advertising into a helpful, integral part of the conversational experience, mirroring the trust and personal relevance of advice from a friend. |
|
"Word Of Prompt" envisions a world where ads are not just tolerated but valued components of our digital interactions. |
|
|
|
**Mission:** |
|
Our mission is to provide AI developers and marketers with powerful tools that enhance user engagement without disrupting the natural flow of conversation. |
|
By doing so, we aim to foster a more sustainable, user-centric advertising landscape that aligns advertisers' goals with consumer satisfaction, and to help democratize AI agents and LLMs by enabling AI developers to earn from their development efforts.
|
|
|
### Join Us! |
|
|
|
We're looking for AI developers who want to join our team: contact Piermatteo Grieco on [LinkedIn](https://www.linkedin.com/in/piermatteo-grieco/) if you're interested in learning more about the project.
|
|
|
## How to Use "Word Of Prompt" |
|
|
|
**Integration Steps:** |
|
1. **Incorporate the Library:** |
|
Download and integrate the "Word Of Prompt" library into your LLM or AI agent's development environment. |
|
|
|
The library is open-source, allowing for custom modifications if needed. |
|
|
|
2. **Configure the API:**
|
Set up the necessary API credentials and configure the settings to connect with product databases like Amazon’s Product API, ensuring that your agent can retrieve product information in real time. |
|
|
|
3. **Activate in Your Application:**
|
Implement "Word Of Prompt" within your conversational models or customer service bots. |
|
|
|
Configure the system to detect purchase-related queries and trigger the product recommendation features. |
|
|
|
4. **Customize Responses:**

Tailor the response format to fit the tone and style of your AI agent, ensuring that the product recommendations appear as natural and organic parts of the conversation (see the illustrative sketch after this list).
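
Because the SDK has not been published yet, the snippet below is only an illustration of what these four steps could look like once it is; the module, class, and parameter names are hypothetical placeholders, not a released API:

```python
# Hypothetical sketch only -- the "Word Of Prompt" SDK is not yet released,
# so the module, class, and parameter names below are placeholders.
from wordofprompt import WordOfPrompt  # hypothetical import

agent = WordOfPrompt(
    intent_model="PiGrieco/OpenSesame",       # step 1: intent recognition model
    product_api_key="YOUR_PRODUCT_API_KEY",   # step 2: e.g. Amazon Product API credentials
    response_style="friendly",                # step 4: match your agent's tone
)

# Step 3: route user messages through the agent; product recommendations are
# triggered only when a purchase-related query is detected.
print(agent.respond("I need a good pair of running shoes"))
```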
|
|
|
|
|
|
|
# OpenSesame |
|
|
|
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the Twitter purchase-intention dataset described in [this paper](https://www.researchgate.net/publication/372788974_Purchase_Intention_and_Sentiment_Analysis_on_Twitter_Related_to_Social_Commerce).
|
It achieves the following results on the evaluation set: |
|
- Loss: 0.0903 |
|
- Accuracy: 0.9825 |
|
- F1: 0.9826 |
|
|
|
## Model description |
|
|
|
**Overview:** |
|
"Open Sesame" is an advanced open-source model designed to detect users' buying intentions from textual data. |
|
|
|
**Core Features:** |
|
- **Intent Detection:** Utilizes a fine-tuned version of RoBERTa to analyze text and identify potential buying signals, enhancing the accuracy and relevance of generated insights. |
|
- **Integration Capability:** Engineered to be seamlessly integrated into any LLM or AI agent, "Open Sesame" offers a plug-and-play solution for developers looking to enhance e-commerce and retail applications. |
|
- **Customizable:** While pre-trained to detect purchasing intentions, "Open Sesame" can be further adapted or fine-tuned to meet specific industry needs or to cover additional conversational scenarios. |
|
|
|
**Use Cases:** |
|
- **E-commerce Platforms:** Improve product recommendation systems by understanding user intent in real-time. |
|
- **Customer Service Automation:** Equip chatbots and virtual assistants to better respond to customer inquiries with purchase intent detection. |
|
- **Marketing and Sales:** Enable more targeted and personalized marketing campaigns based on detected user interests and needs. |
|
|
|
**Getting Started:** |
|
To start using "Open Sesame" in your projects, simply load the model from the Hugging Face Model Hub using the following commands: |
|
|
|
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Load the tokenizer and the fine-tuned classifier from the Hugging Face Hub
model_name = "PiGrieco/OpenSesame"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
```
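
Once loaded, inference is a standard sequence-classification forward pass. A short example (the sample sentence is illustrative; the label mapping follows the interpretation given at the top of this card):

```python
import torch

text = "I really want to buy these sneakers."  # illustrative input
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
labels = {0: "no buying intention", 1: "buying intention"}
print(f"{text!r} -> {labels[predicted_id]}")
```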
|
|
|
**Contribute:** |
|
"Open Sesame" is open-source and we welcome contributions from the community! Whether it's improving the model, expanding the dataset, or refining the documentation, your input helps make "Open Sesame" better for everyone. |
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training: |
|
- learning_rate: 5e-05 |
|
- train_batch_size: 8 |
|
- eval_batch_size: 8 |
|
- seed: 42 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- num_epochs: 8 |
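
These settings map onto a standard `transformers` `TrainingArguments` configuration roughly as follows (a sketch; `output_dir` is a placeholder, and dataset preparation is not covered by this card):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="opensesame-roberta",   # placeholder path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                    # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=8,
)
```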
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |
|
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| |
|
| 0.4003 | 1.0 | 129 | 0.1545 | 0.9649 | 0.9659 | |
|
| 0.4802 | 2.0 | 258 | 0.1453 | 0.9708 | 0.9714 | |
|
| 0.1132 | 3.0 | 387 | 0.1655 | 0.9678 | 0.9688 | |
|
| 0.0753 | 4.0 | 516 | 0.1038 | 0.9825 | 0.9826 | |
|
| 0.1563 | 5.0 | 645 | 0.1078 | 0.9766 | 0.9769 | |
|
| 0.0665 | 6.0 | 774 | 0.0914 | 0.9825 | 0.9826 | |
|
| 0.0677 | 7.0 | 903 | 0.0909 | 0.9825 | 0.9826 | |
|
| 0.0659 | 8.0 | 1032 | 0.0903 | 0.9825 | 0.9826 | |
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.41.2 |
|
- PyTorch 2.3.0+cu121
|
- Datasets 2.19.2 |
|
- Tokenizers 0.19.1 |
|
|