
Fine-tuning information & code

#6
by ajibawa-2023 - opened

Hello, excellent work by the OpenAssistant team. Can you share the fine-tuning code and system requirements? Was only HF used for fine-tuning, or HF + FastChat/DeepSpeed? Thank you.

OpenAssistant org

We use the trainer code that is part of the Open-Assistant repository: https://github.com/LAION-AI/Open-Assistant/tree/main/model/model_training
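For anyone looking for a starting point before digging into the linked repository: below is a minimal, illustrative sketch of supervised fine-tuning (SFT) with the Hugging Face Trainer, optionally combined with DeepSpeed via a config file. This is not the Open-Assistant trainer itself; the base model name, dataset file, sequence length, and hyperparameters are placeholders I chose for the example.

```python
# Minimal SFT sketch with the HF Trainer. NOT the Open-Assistant trainer;
# see model/model_training in the linked repo for the actual code.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "huggyllama/llama-7b"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder dataset: a JSONL file with a "text" column of formatted dialogues.
dataset = load_dataset("json", data_files="sft_data.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="sft-out",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=16,
    num_train_epochs=3,
    learning_rate=1e-5,
    bf16=True,
    logging_steps=10,
    save_strategy="epoch",
    # deepspeed="ds_config.json",  # optional: enable DeepSpeed ZeRO via a config file
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

System requirements depend heavily on the model size and whether you use DeepSpeed ZeRO or similar memory-saving techniques; for exact settings, the configs in the Open-Assistant repository are the authoritative reference.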

OK, thank you very much for sharing the info!

ajibawa-2023 changed discussion status to closed
