Mixture of Tokens is a fully-differentiable model that retains the benefits of MoE (Mixture of Experts) architectures.

## Tips:

During inference, the model's computational efficiency comes from combining tokens across the batch into groups of a fixed size, denoted `group_size` in the model configuration. If the batch size is not evenly divisible by `group_size`, the model internally pads the batch to ensure divisibility. For optimal performance, run batched inference with a batch size that is a multiple of `group_size`.
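
As a minimal sketch, group-aligned batched generation could look like the following (this assumes the configuration exposes the value as `model.config.group_size`; the exact attribute name may differ):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jaszczur/mixture_of_tokens", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("jaszczur/mixture_of_tokens", trust_remote_code=True)

# Assumption: the config exposes the group size under this attribute name.
group_size = model.config.group_size

# Causal LM tokenizers often have no pad token; reuse EOS so the batch can be padded.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Pick a batch size that is a multiple of group_size to avoid internal padding.
prompts = ["Is mixture of tokens better than a dense model?"] * (2 * group_size)
inputs = tokenizer(prompts, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```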

## Usage example

The example generated by the model hub may be incorrect. To get started, try running:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# The custom architecture lives in the model repository, so trust_remote_code=True is required.
tokenizer = AutoTokenizer.from_pretrained("jaszczur/mixture_of_tokens", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("jaszczur/mixture_of_tokens", trust_remote_code=True)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
pipe("Is mixture of tokens better than a dense model?")
```