Multilingual Constitutional AI
Blog: https://sites.google.com/view/multilingual-constitutional-ai
This model is a fine-tuned version of mistralai/Mistral-Nemo-Base-2407 on the pbevan11/multilingual-constitutional-preference-pairs and pbevan11/ultrafeedback_binarized_multilingual datasets. Evaluation-set results are reported in the training results table below.
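The two fine-tuning datasets named above are hosted on the Hugging Face Hub. As a minimal sketch (not part of the original card), they can be loaded for inspection with the `datasets` library; the available splits and columns are whatever the Hub repositories define:

```python
# Sketch: load the two fine-tuning datasets named in this card from the Hugging Face Hub.
from datasets import load_dataset

# One of the two datasets this model was fine-tuned on.
preference_pairs = load_dataset("pbevan11/multilingual-constitutional-preference-pairs")

# The second fine-tuning dataset named in this card.
ultrafeedback = load_dataset("pbevan11/ultrafeedback_binarized_multilingual")

# Inspect the splits and columns before training or analysis.
print(preference_pairs)
print(ultrafeedback)
```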
Training results:
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 1.9196 | 0.9811 | 13 | 1.6468 |
| 1.2211 | 1.9623 | 26 | 1.2391 |
| 1.0105 | 2.9434 | 39 | 1.2052 |
Base model: mistralai/Mistral-Nemo-Base-2407
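For completeness, a minimal inference sketch follows. The fine-tuned checkpoint's repository id is not stated in this card, so `pbevan11/<finetuned-model>` below is a hypothetical placeholder; the base model id is the one listed above.

```python
# Sketch: generate text with the model using transformers.
# "pbevan11/<finetuned-model>" is a hypothetical placeholder; substitute the actual
# repository id of the fine-tuned checkpoint, or use the base model id instead.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pbevan11/<finetuned-model>"          # placeholder, not a real repo id
# model_id = "mistralai/Mistral-Nemo-Base-2407"  # the base model named in this card

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # reduce memory use; adjust as needed
    device_map="auto",           # requires the `accelerate` package
)

prompt = "Explain the idea behind constitutional AI in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```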