---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5_recommendation_sports_equipment_english
  results: []
widget:
- text: >-
    ITEMS PURCHASED: {Soccer Goal Post, Soccer Ball, Soccer Cleats, Goalie
    Gloves} - CANDIDATES FOR RECOMMENDATION: {Soccer Jersey, Basketball
    Jersey, Football Jersey, Baseball Jersey, Tennis Shirt, Hockey Jersey,
    Basketball, Football, Baseball, Tennis Ball, Hockey Puck, Basketball
    Shoes, Football Cleats, Baseball Cleats, Tennis Shoes, Hockey Helmet,
    Basketball Arm Sleeve, Football Shoulder Pads, Baseball Cap, Tennis
    Racket, Hockey Skates, Basketball Hoop, Football Helmet, Baseball Bat,
    Hockey Stick, Soccer Cones, Basketball Shorts, Baseball Glove, Hockey
    Pads, Soccer Shin Guards, Soccer Shorts} - RECOMMENDATION:
- text: >-
    ITEMS PURCHASED: {Soccer Shin Guards} - CANDIDATES FOR RECOMMENDATION:
    {Soccer Jersey, Basketball Jersey, Football Jersey, Baseball Jersey,
    Tennis Shirt, Hockey Jersey, Soccer Ball, Basketball, Football, Baseball,
    Tennis Ball, Hockey Puck, Soccer Cleats, Basketball Shoes, Football
    Cleats, Baseball Cleats, Tennis Shoes, Hockey Helmet, Goalie Gloves,
    Basketball Arm Sleeve, Football Shoulder Pads, Baseball Cap, Tennis
    Racket, Hockey Skates, Soccer Goal Post, Basketball Hoop, Football Helmet,
    Baseball Bat, Hockey Stick, Soccer Cones, Basketball Shorts, Baseball
    Glove, Hockey Pads, Soccer Shorts} - RECOMMENDATION:
---
# t5_recommendation_sports_equipment_english
This model is a fine-tuned version of t5-large on a custom dataset consisting of sports equipment that customers have purchased and the item to recommend next.
This is based on the paper "Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5)", where the researchers use a language model as a recommendation system.
- LLMs can "understand" relationships between words/terms via the embeddings produced by the transformer architecture, so semantic relationships between products can be taken into account.
- By feeding the LLM a history of items purchased as the input and the next item purchased as the output, the model can learn what to recommend based on the semantics of product names (see the inference sketch after this list).
- By seeing purchase histories from many different users, the LLM can also learn which categories of products go together.
- This essentially replicates collaborative filtering.
- Benefits include:
  - Getting past the cold-start problem with ease (when a new item is introduced, the model can understand what it is similar to from the name alone).
  - Avoiding tedious, manual feature engineering (the LLM learns automatically from natural language).
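For illustration, here is a minimal inference sketch. It assumes the `transformers` and `sentencepiece` packages are installed; the model id is a placeholder for this repository's Hub id, and the candidate list is shortened from the widget examples above.

```python
# Minimal inference sketch (assumptions: transformers + sentencepiece installed,
# and "<this-repo-id>" replaced with the actual Hub id of this model).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "<this-repo-id>"  # placeholder for this repository's id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The prompt mirrors the widget examples: purchased items plus a candidate pool,
# ending with "RECOMMENDATION:" so the model generates the recommended item.
prompt = (
    "ITEMS PURCHASED: {Soccer Shin Guards} - "
    "CANDIDATES FOR RECOMMENDATION: {Soccer Jersey, Soccer Ball, Soccer Cleats, "
    "Basketball, Baseball Bat, Hockey Stick, Soccer Shorts} - RECOMMENDATION: "
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that the candidate set only constrains the output implicitly: the model generates free text, so in practice you may want to match the generated string back to the candidate list.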
The GitHub repository used to fine-tune this model can be viewed here.
The fine-tuned T5 model achieves the following results on the evaluation set (a sketch for recomputing the ROUGE scores follows the list):
- Loss: 0.4554
- Rouge1: 57.1429
- Rouge2: 47.6190
- Rougel: 55.5556
- Rougelsum: 55.5556
- Gen Len: 3.9048
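As a rough sketch (not the exact evaluation script used for this card), similar ROUGE figures can be computed with the `evaluate` library once the model's generated recommendations and the ground-truth next purchases are collected as text:

```python
# Sketch of recomputing ROUGE with the evaluate library
# (requires: pip install evaluate rouge_score).
# The predictions/references below are placeholders, not the actual eval set.
import evaluate

rouge = evaluate.load("rouge")
predictions = ["Soccer Jersey", "Hockey Helmet"]   # decoded model outputs
references = ["Soccer Jersey", "Hockey Pads"]      # ground-truth next purchases
scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # dict with rouge1, rouge2, rougeL, rougeLsum
```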
## Model description
T5 is an open-source sequence-to-sequence model released by Google in 2019, from which several variants have been developed. This fine-tuned version is an attempt to replicate what was presented in the P5 paper, using a custom dataset based on sports equipment.
More about this model (T5) can be viewed here.
The P5 models from the paper can be viewed on the Hugging Face Hub as well as in this repository.
## Intended uses & limitations
The model can be used as you please, but it is limited to the sports equipment domain it was fine-tuned on. Your mileage may vary.
## Training and evaluation data
Please see this repository for training and evaluation data.
## Training procedure
Please see this repository for the training procedure.
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them to Seq2SeqTrainingArguments follows the list):
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
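As an illustration, these settings roughly map to the `Seq2SeqTrainingArguments` below. This is a sketch, not the exact training script; `output_dir` and `predict_with_generate` are assumptions.

```python
# Approximate mapping of the hyperparameters above to Seq2SeqTrainingArguments.
# output_dir and predict_with_generate are assumptions, not taken from this card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5_recommendation_sports_equipment_english",
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 4 * 4 = 16
    lr_scheduler_type="linear",
    num_train_epochs=10,
    predict_with_generate=True,      # generate text during eval so ROUGE can be computed
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer.
```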
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 0.96  | 6    | 6.7375          | 8.7066  | 0.9524  | 8.7598  | 8.6011    | 19.0    |
| No log        | 1.96  | 12   | 2.8089          | 23.8095 | 9.5238  | 23.3333 | 23.3333   | 3.1429  |
| No log        | 2.96  | 18   | 0.9394          | 9.5238  | 4.7619  | 9.5238  | 9.5238    | 3.1905  |
| No log        | 3.96  | 24   | 0.6679          | 33.3333 | 14.2857 | 32.8571 | 32.5397   | 3.5714  |
| No log        | 4.96  | 30   | 0.6736          | 26.5079 | 9.5238  | 25.0794 | 25.0794   | 4.2381  |
| No log        | 5.96  | 36   | 0.6658          | 38.7302 | 23.8095 | 37.3016 | 37.4603   | 4.0476  |
| No log        | 6.96  | 42   | 0.6460          | 46.3492 | 33.3333 | 45.6349 | 45.2381   | 3.8571  |
| No log        | 7.96  | 48   | 0.5596          | 52.3810 | 42.8571 | 50.7937 | 50.7937   | 4.0     |
| No log        | 8.96  | 54   | 0.5082          | 57.1429 | 47.6190 | 55.5556 | 55.5556   | 3.9524  |
| No log        | 9.96  | 60   | 0.4554          | 57.1429 | 47.6190 | 55.5556 | 55.5556   | 3.9048  |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2