Tags: Text Generation · Transformers · Safetensors · llama · generated_from_trainer · axolotl · conversational · Inference Endpoints · text-generation-inference

How to fine-tune LLaMA 3 in Google Colab (Pro)?

#2
by yukiarimo - opened

I have a JSONL dataset like this:

{"text": "This is raw text in 2048 tokens I want to feed in"},
{"text": "This is next line, tokens are also 2048"}

It would be nice to fine-tune a 4-, 8-, or 16-bit LoRA and then just merge it back into the base weights as before!
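Since the model's tags list axolotl as the training framework, one plausible route is an axolotl QLoRA config over a raw-text JSONL dataset. The sketch below is a guess at a minimal config, not this model's actual training config; the base model ID, file name `data.jsonl`, and every hyperparameter are assumptions to adapt:

```yaml
base_model: meta-llama/Meta-Llama-3-8B   # assumed; use your base checkpoint
load_in_4bit: true                       # 4-bit QLoRA; swap for load_in_8bit for 8-bit
adapter: qlora
datasets:
  - path: data.jsonl                     # hypothetical path to the JSONL file
    type: completion                     # raw {"text": ...} records, no chat template
sequence_len: 2048                       # matches the ~2048-token lines in the post
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
micro_batch_size: 1
gradient_accumulation_steps: 4
num_epochs: 1
learning_rate: 0.0002
output_dir: ./lora-out
```

Axolotl also ships a merge utility for folding the trained adapter back into the base weights (`python -m axolotl.cli.merge_lora` in recent versions; check the axolotl docs for the exact invocation on your install), which covers the "merge as before" step.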

Cognitive Computations org

I don't, no advice for you

ehartford changed discussion status to closed
