TildeSIA committed (verified) · Commit 313abea · 1 Parent(s): 113c8f4

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -73,4 +73,4 @@ Model hyper-parameters are detailed below:
  | Non-embedding Parameters | 2.91E+10 |
  | Total Parameters | 3.07E+10 |
  ## Tokenizer details
- We built the TildeLM tokeniser to ensure equitable language representation across languages. Technically, we trained the tokeniser to represent the same text regardless of the language it is written in, using a similar number of tokens. In practice, TildeLM will be more efficient and faster than other models for our focus languages, as writing out answers will require fewer steps than other models.
+ We built the TildeLM tokeniser to ensure equitable language representation across languages. Technically, we trained the tokeniser to represent the same text regardless of the language it is written in, using a similar number of tokens. In practice, TildeLM will be more efficient and faster than other models for our focus languages, as writing out answers will require fewer steps. For more details on how TildeLM compares against other models, see **[TILDE Bench](https://tilde-nlp.github.io/tokenizer-bench.html)**!
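
As a rough illustration of the token-efficiency claim in the updated paragraph, the sketch below counts tokens for the same short sentence in a few languages using the Hugging Face `AutoTokenizer`. The hub ID `TildeAI/TildeLM` and the example sentences are placeholders for illustration only, not taken from this repository; an equitable tokeniser should yield similar counts across the languages.

```python
from transformers import AutoTokenizer

# Hypothetical hub ID -- replace with the actual TildeLM repository name.
MODEL_ID = "TildeAI/TildeLM"

# The same short sentence in English, Latvian, and Lithuanian (illustrative translations).
samples = {
    "en": "The weather will be sunny and warm tomorrow afternoon.",
    "lv": "Rīt pēcpusdienā laiks būs saulains un silts.",
    "lt": "Rytoj po pietų oras bus saulėtas ir šiltas.",
}

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Compare how many tokens each language needs for the same content.
for lang, text in samples.items():
    n_tokens = len(tokenizer.encode(text, add_special_tokens=False))
    print(f"{lang}: {n_tokens} tokens")
```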