How to fine-tune LLaMA 3 in Google Colab (Pro)?
#2 opened by yukiarimo
I have a JSONL dataset like this:
{"text": "This is raw text, up to 2048 tokens, that I want to feed in"}
{"text": "This is the next line; it is also 2048 tokens"}
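A dataset in that shape (one JSON object per line, each with a "text" field) can be read with just the standard library before tokenization; a minimal sketch, where the path name is up to you:

```python
import json

def load_jsonl(path):
    """Read one JSON object per line and collect the 'text' fields."""
    texts = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:  # skip blank lines
                continue
            texts.append(json.loads(line)["text"])
    return texts
```

Note that valid JSONL has no trailing commas and no enclosing array: each line must parse as a complete JSON object on its own.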
It would be nice to fine-tune a 4-, 8-, or 16-bit LoRA and then just merge the adapter back in, as before!
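For reference, a 4-bit QLoRA run over such a dataset can be sketched with transformers, peft, trl, and bitsandbytes. This is a sketch under assumptions, not a tested recipe: the model id, file name, and all hyperparameters below are placeholders, the trl API has shifted between versions (e.g. where `max_seq_length` is passed), and the gated Meta-Llama-3 weights require approved access.

```python
# Sketch only: assumes transformers, peft, trl, bitsandbytes, and datasets
# are installed and that you have access to the gated Llama 3 weights.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, prepare_model_for_kbit_training
from trl import SFTTrainer, SFTConfig

model_id = "meta-llama/Meta-Llama-3-8B"  # assumption: base model to tune

# 4-bit NF4 quantization so the 8B model fits in a Colab GPU
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# LoRA adapter config; rank/alpha/targets are illustrative choices
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# One {"text": ...} object per line, as in the example above
dataset = load_dataset("json", data_files="data.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=lora,
    args=SFTConfig(
        output_dir="llama3-lora",
        max_seq_length=2048,          # matches the 2048-token rows
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
    ),
)
trainer.train()

# Merge the adapter back into the base weights, as the question asks
merged = trainer.model.merge_and_unload()
merged.save_pretrained("llama3-merged")
```

`merge_and_unload()` folds the LoRA deltas into the base weights so the result loads as a plain model; merging a 4-bit-quantized base usually means dequantizing first, so check your peft version's behavior before relying on the merged checkpoint.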
I don't, so no advice for you.
ehartford changed discussion status to closed