---
license: bsd
pipeline_tag: text2text-generation
---

This model fine-tunes flan-t5-small using scraped data as input, ChatGPT responses as output, and a limited set of instruction types. Training used the declare-lab repo: https://github.com/declare-lab/flan-alpaca

Training settings:

- Epochs: 5
- Max input tokens: 512
- Max output tokens: 512
- Hardware: free Google Colab T4 (training time ~2 hours)
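Since the model follows the `text2text-generation` pipeline tag, it can be queried with the standard `transformers` pipeline API. The sketch below loads the base flan-t5-small checkpoint for illustration; loading this repo's fine-tuned weights works the same way, swapping in this repo's model id.

```python
from transformers import pipeline

# Base checkpoint named above; replace with this repo's model id
# to use the fine-tuned weights instead.
generator = pipeline("text2text-generation", model="google/flan-t5-small")

result = generator(
    "Explain what fine-tuning is in one sentence.",
    max_length=512,  # matches the output token cap used during training
)
print(result[0]["generated_text"])
```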