Can this model be fine-tuned?

#12
by rvsh - opened

Can this model be fine-tuned? For example, with this dataset for the Polish language?
https://huggingface.co/datasets/mmosiolek/pl_alpaca_data_cleaned
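For context, here is a minimal sketch of what a LoRA fine-tune on that dataset might look like with the Hugging Face transformers/peft stack. This is an assumption-laden illustration, not a recipe confirmed for this particular model: the base model name is a placeholder, the column names (instruction/input/output) are assumed from the usual Alpaca schema, and the hyperparameters are illustrative only.

```python
# Hypothetical sketch: LoRA fine-tuning on pl_alpaca_data_cleaned.
# Model name and hyperparameters are placeholders, not a tested recipe.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

model_name = "mistralai/Mixtral-8x7B-v0.1"  # placeholder; substitute the model in question

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Wrap the base model with LoRA adapters so only a small set of weights trains.
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Assumes Alpaca-style records: instruction / input / output columns.
dataset = load_dataset("mmosiolek/pl_alpaca_data_cleaned", split="train")

def format_example(example):
    # Concatenate instruction, optional input, and output into one training text.
    prompt = example["instruction"]
    if example.get("input"):
        prompt += "\n" + example["input"]
    text = prompt + "\n" + example["output"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(format_example, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="pl-alpaca-lora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=1,
        learning_rate=2e-4,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```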

LLaMA2-Accessory now supports inference and instruction fine-tuning (both full-parameter and PEFT methods such as LoRA) for mixtral-8x7b-32kseqlen. It also supports the load-balancing loss and other MoE-specific features. The documentation is available here.
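For reference, the load-balancing loss mentioned above is typically the Switch-Transformer-style auxiliary term, N * sum_i(f_i * P_i), where f_i is the fraction of tokens routed to expert i and P_i is the mean router probability for that expert. Below is a minimal top-1 sketch in PyTorch; this is not LLaMA2-Accessory's actual implementation, and Mixtral itself routes each token to the top 2 experts.

```python
import torch

def load_balancing_loss(router_logits: torch.Tensor, num_experts: int) -> torch.Tensor:
    """Switch-Transformer-style auxiliary loss encouraging uniform expert use.

    router_logits: (num_tokens, num_experts) raw gate scores for one MoE layer.
    """
    probs = torch.softmax(router_logits, dim=-1)  # router probabilities
    top1 = probs.argmax(dim=-1)                   # expert chosen per token (top-1)

    # f_i: fraction of tokens dispatched to each expert.
    dispatch_frac = torch.zeros(num_experts, device=probs.device, dtype=probs.dtype)
    dispatch_frac.scatter_add_(0, top1, torch.ones_like(top1, dtype=probs.dtype))
    dispatch_frac /= router_logits.shape[0]

    # P_i: mean router probability assigned to each expert.
    mean_prob = probs.mean(dim=0)

    # Loss is minimized when both f and P are uniform across experts.
    return num_experts * torch.sum(dispatch_frac * mean_prob)
```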
