
Mixture of Tokens

Model description

Mixture of Tokens is a fully differentiable model that retains the benefits of MoE architectures while avoiding the difficulties that come with discrete expert routing. Rather than routing individual tokens to experts, this approach mixes tokens from different examples before feeding them to experts, enabling the model to learn from all token-expert combinations (see the sketch below). Importantly, this mixing can be disabled to avoid combining different sequences during inference. Crucially, the method is fully compatible with both masked and causal Large Language Model training and inference.
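The PyTorch sketch below illustrates the mixing idea only; it is not this model's implementation, and all names (`MixtureOfTokensSketch`, `controller`, the expert layout) are hypothetical. The assumed scheme: tokens at the same position across `group_size` examples are combined into one mixed token per expert using learned importance weights, each expert processes its mixture, and the expert outputs are redistributed back to the original tokens with the same weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixtureOfTokensSketch(nn.Module):
    """Illustrative sketch of token mixing across a group (not the reference implementation)."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int, group_size: int):
        super().__init__()
        self.group_size = group_size
        # Scores how much each token in a group contributes to each expert's mixed input.
        self.controller = nn.Linear(d_model, n_experts)
        # Simple feed-forward experts.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); batch is assumed divisible by group_size.
        b, s, d = x.shape
        g = self.group_size
        # Group tokens that share the same position across different examples.
        x = x.view(b // g, g, s, d).transpose(1, 2)        # (n_groups, seq, group, d)
        weights = F.softmax(self.controller(x), dim=2)     # normalize over the group dimension
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            w = weights[..., e:e + 1]                      # each token's contribution to expert e
            mixed = (w * x).sum(dim=2, keepdim=True)       # one mixed token per group and position
            processed = expert(mixed)                      # the expert sees the mixture, not raw tokens
            out = out + w * processed                      # redistribute to the original tokens
        # Undo the grouping.
        return out.transpose(1, 2).reshape(b, s, d)
```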

Tips:

During inference, the model's computational efficiency comes from combining tokens across the batch into groups of a fixed size, denoted group_size in the model configuration. If the batch size is not evenly divisible by group_size, the model internally pads the batch to make it divisible. For optimal performance, run batched inference with a batch size that is a multiple of group_size, as illustrated below.
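A minimal sketch of this batch-size consideration; the helper name and the group_size value of 32 are illustrative and should be replaced with the value from this checkpoint's configuration.

```python
def round_up_to_group(batch_size: int, group_size: int) -> int:
    """Smallest multiple of group_size that is >= batch_size."""
    return ((batch_size + group_size - 1) // group_size) * group_size

group_size = 32            # read this from the model config in practice
requested_batch = 100
effective_batch = round_up_to_group(requested_batch, group_size)  # 128
padding = effective_batch - requested_batch                       # 28 wasted slots
print(f"batch {requested_batch} -> padded to {effective_batch} ({padding} padding examples)")
# Choosing requested_batch = 96 or 128 (multiples of group_size) avoids the padding entirely.
```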
