Text Classification
Transformers
PyTorch
English
deberta-v2
reward-model
reward_model
RLHF
Inference Endpoints

Hyperparameters training setting

#10
by hyuk199 - opened

I'd like to implement the model myself. Could you provide information on the hyperparameters used for training, such as the learning rate, optimizer, batch size, number of epochs, and anything else required to reproduce the training?
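While waiting for the authors' actual settings, here is a minimal, hypothetical sketch of what a pairwise reward-model training setup typically looks like in PyTorch. All hyperparameter values (`LEARNING_RATE`, `BATCH_SIZE`, `EPOCHS`) are placeholders, not the settings used for this model, and the tiny linear scorer stands in for the deberta-v2 encoder a real setup would use.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical hyperparameters -- placeholders, NOT this model's actual settings.
LEARNING_RATE = 1e-5
BATCH_SIZE = 4
EPOCHS = 1
FEATURE_DIM = 16

class TinyRewardModel(nn.Module):
    """Stand-in scorer; a real reward model would wrap a deberta-v2 encoder
    and score tokenized (prompt, response) pairs instead of raw features."""
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Returns one scalar reward per example in the batch.
        return self.score(x).squeeze(-1)

def pairwise_ranking_loss(chosen: torch.Tensor, rejected: torch.Tensor) -> torch.Tensor:
    # Standard RLHF reward-model objective: -log sigmoid(r_chosen - r_rejected),
    # which pushes the score of the preferred response above the rejected one.
    return -F.logsigmoid(chosen - rejected).mean()

torch.manual_seed(0)
model = TinyRewardModel(FEATURE_DIM)
optimizer = torch.optim.AdamW(model.parameters(), lr=LEARNING_RATE)

for epoch in range(EPOCHS):
    # Dummy features standing in for encoded preference pairs.
    chosen_feats = torch.randn(BATCH_SIZE, FEATURE_DIM)
    rejected_feats = torch.randn(BATCH_SIZE, FEATURE_DIM)

    loss = pairwise_ranking_loss(model(chosen_feats), model(rejected_feats))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The ranking loss is always positive and approaches zero only as the chosen score exceeds the rejected score by a wide margin; the actual learning rate, schedule, and batch size for this model would still need to come from the authors.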

hyuk199 changed discussion title from I'd like to implement the model myself to Hyperparameters training setting
