hzhwcmhf committed on
Commit
31558a2
1 Parent(s): daacf3b

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -73,7 +73,7 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 
 ### Processing Long Texts
 
-To handle extensive inputs exceeding 65,536 tokens, we utilize [YARN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
+To handle extensive inputs exceeding 32,768 tokens, we utilize [YARN](https://arxiv.org/abs/2309.00071), a technique for enhancing model length extrapolation, ensuring optimal performance on lengthy texts.
 
 For deployment, we recommend using vLLM. You can enable the long-context capabilities by following these steps:
 
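
The updated paragraph points readers to YaRN and vLLM for inputs beyond the 32,768-token native context, but the concrete steps fall outside this hunk. As a hedged illustration only (not the README's own instructions), enabling YaRN for frameworks that read the model's `config.json` usually amounts to adding a `rope_scaling` entry; the field names, scaling factor, and file path below are assumptions based on common YaRN configurations, not content from this commit.

```python
import json

# Hypothetical sketch: add a YaRN rope_scaling entry to a model's config.json
# so that supporting frameworks can extrapolate beyond the native
# 32,768-token context. Values here are illustrative assumptions.
CONFIG_PATH = "path/to/model/config.json"  # placeholder path

with open(CONFIG_PATH) as f:
    config = json.load(f)

config["rope_scaling"] = {
    "type": "yarn",                             # assumed YaRN identifier
    "factor": 4.0,                              # assumed extrapolation factor
    "original_max_position_embeddings": 32768,  # native context per the updated README
}

with open(CONFIG_PATH, "w") as f:
    json.dump(config, f, indent=2)
```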