---
language:
- en
license: apache-2.0
tags:
- reddit
datasets:
- georeactor/reddit_one_ups_seq2seq_2014
---
# t5-reddit-2014
T5-small model fine-tuned on a Reddit "One-Ups" / "Clapbacks" dataset. Each reply in
the fine-tuning data has a vote score at least 1.5x that of its parent comment.
From a few tests it seems to have adopted a snarky tone; a common reply is "I'm not a shit."
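To take a quick look at the training pairs, the dataset can be loaded straight from the Hub. A minimal sketch, assuming the dataset exposes a single `train` split:

```python
from datasets import load_dataset

# Pairs of parent comments and higher-scored replies
ds = load_dataset('georeactor/reddit_one_ups_seq2seq_2014', split='train')
print(ds[0])  # inspect one example to see the field names
```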
## Process
Training notebook: https://github.com/Georeactor/reddit-one-ups/blob/main/training-models/t5-seq2seq-2014.ipynb
- Started with [t5-small](https://huggingface.co/t5-small) so I could run it on Colab.
- Fine-tuned on the first 80% of [georeactor/reddit_one_ups_seq2seq_2014](https://huggingface.co/datasets/georeactor/reddit_one_ups_seq2seq_2014) for one epoch, batch size = 2 (a rough sketch of this setup follows the list).
- Loss did not move much during this epoch.
- Future experiments should use a larger model, a larger batch size (batch_size = 4 would easily fit on Colab), and the full dataset if we are not holding data out for eval.
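For reference, a minimal sketch of the fine-tuning setup described above; the linked notebook is the actual run. The `parent` / `reply` column names are assumptions about the dataset schema (not its documented fields), and settings other than the batch size and epoch count are placeholders.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

tokenizer = AutoTokenizer.from_pretrained('t5-small')
model = AutoModelForSeq2SeqLM.from_pretrained('t5-small')

# First 80% of the dataset, as in the original run
train = load_dataset('georeactor/reddit_one_ups_seq2seq_2014', split='train[:80%]')

def preprocess(batch):
    # 'parent' / 'reply' are assumed column names for the comment and its higher-scored reply
    inputs = tokenizer(batch['parent'], truncation=True, max_length=256)
    labels = tokenizer(text_target=batch['reply'], truncation=True, max_length=256)
    inputs['labels'] = labels['input_ids']
    return inputs

train = train.map(preprocess, batched=True, remove_columns=train.column_names)

args = Seq2SeqTrainingArguments(
    output_dir='t5-reddit-2014',
    per_device_train_batch_size=2,  # batch size used in the original run
    num_train_epochs=1,             # single epoch, as described above
    logging_steps=500,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```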
## Inference
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained('georeactor/t5-reddit-2014')
tokenizer = AutoTokenizer.from_pretrained('georeactor/t5-reddit-2014')

# Encode a parent comment and generate a reply
input_ids = tokenizer.encode('Looks like a potato bug', return_tensors="pt")
output = model.generate(input_ids, max_length=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
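The snippet above uses greedy decoding (the `generate` default), which tends to produce the same reply for many inputs. Sampling is one way to get more varied output; a minimal sketch continuing from the code above, where the `top_p` value is an arbitrary choice rather than a tuned setting:

```python
# Sample instead of greedy decoding for more varied replies
sampled = model.generate(input_ids, max_length=256, do_sample=True, top_p=0.95)
print(tokenizer.decode(sampled[0], skip_special_tokens=True))
```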