Tags: Text2Text Generation · Transformers · Safetensors · mt5

The trained SFT (supervised fine-tuning) policy for the machine translation (MT) task from the paper "ALaRM: Align Language Models via Hierarchical Rewards Modeling".

Check out our project page for more information.
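A minimal loading and inference sketch, assuming the standard Hugging Face Transformers seq2seq API; the model id `halfrot/sft-mt5-base` is taken from this page, and the translation prompt below is an illustrative assumption rather than the format used in the paper:

```python
# Minimal sketch: load the SFT mT5 checkpoint and run generation.
# The prompt format is hypothetical; adapt it to the MT setup in the paper.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "halfrot/sft-mt5-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Example machine-translation style input (assumed prompt format).
inputs = tokenizer("Translate to German: The weather is nice today.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```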

Safetensors
Model size: 582M params
Tensor type: F32

Dataset used to train halfrot/sft-mt5-base