yanghaojin
posted an update May 2

Command for reproducing this run 😉:

```shell
CUDA_VISIBLE_DEVICES=0 WANDB_DISABLED=true python -m sft.finetune --model GreenBitAI/Llama-3-8B-layer-mix-bpw-2.2 --tune-qweight-only --galore --galore-rank 64 --optimizer adamw8bit --batch-size 1 --seqlen 96
```

How do you prepare a dataset for fine-tuning Llama 3?
Could you show the structure of your dataset and how you fine-tune with it?

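A common layout for SFT data is one JSON object per line (JSONL), each holding an instruction and a target response. A minimal sketch of building and reading such a file; the field names (`instruction`, `response`) and the prompt template are assumptions for illustration, not taken from the GreenBitAI `sft.finetune` script, so check what that script actually expects:

```python
import json
from pathlib import Path

# Hypothetical two-example instruction-tuning dataset in JSONL form.
records = [
    {"instruction": "Translate 'hello' to French.", "response": "bonjour"},
    {"instruction": "What is 2 + 2?", "response": "4"},
]

# Write one JSON object per line.
path = Path("train.jsonl")
with path.open("w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

def to_prompt(rec):
    """Render one record into a single training string (assumed template)."""
    return f"### Instruction:\n{rec['instruction']}\n\n### Response:\n{rec['response']}"

# Read the file back and render each record for the trainer.
samples = [
    to_prompt(json.loads(line))
    for line in path.read_text(encoding="utf-8").splitlines()
]
print(len(samples))  # 2
```

The fine-tuning script then tokenizes each rendered string (here truncated/padded to the `--seqlen 96` used in the command above) and trains on it as usual.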