| Model Name | Parameters | Class | Ratio (Tokens/Params) | Tokens | Batch Size (Tokens) | Training Loss ↓ |
| --- | --- | --- | --- | --- | --- | --- |
| [GerbilLab/GerbilBlender-A-32m](https://hf.co/GerbilLab/GerbilBlender-A-32m) | 32m | A-Class | 20 | 640M | 262K | 4.127 |

"Blender" models, inspired by UL2 pretraining, are trained equally on fill-in-the-middle, causal language modelling, and masked language modelling tasks. Special tokens for these models include:

```
'<fitm_start>', '<multiple_tok_mask>', '<fitm_result>', '<causal>', '<mlm_start>', '<single_tok_mask>', '<mlm_end>'

# Example fill in the middle
'<fitm_start> this is an <multiple_tok_mask> for fill-in-the-middle <fitm_result> example text <|endoftext|>'

# Example causal language modelling
'<causal> this is an example text for causal language modelling <|endoftext|>'

# Example masked language modelling
'<mlm_start> this is an <single_tok_mask> text for masked language modelling <mlm_end> example <|endoftext|>'
```
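
As a usage sketch (not part of the original card): assuming the checkpoint loads as a standard causal LM through `transformers` and that the tokenizer already registers the special tokens above, a fill-in-the-middle prompt can be assembled and completed like this:

```python
# Minimal fill-in-the-middle sketch for a Blender model.
# Assumption: the repo loads via AutoModelForCausalLM/AutoTokenizer
# with the special tokens listed above already in the vocabulary.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "GerbilLab/GerbilBlender-A-32m"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

# Prefix and suffix surround the masked span; the model is expected
# to emit the missing middle after <fitm_result>.
prefix = "this is an"
suffix = "for fill-in-the-middle"
prompt = f"<fitm_start> {prefix} <multiple_tok_mask> {suffix} <fitm_result>"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```

The causal and masked variants follow the same pattern: prepend `<causal>` for plain left-to-right generation, or wrap the masked sentence in `<mlm_start>` … `<mlm_end>` with `<single_tok_mask>` at the masked position and let the model produce the masked token after `<mlm_end>`.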