TheBloke committed on
Commit a8091d7
1 Parent(s): 6d15ad7

Initial GGML model commit

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -207,7 +207,7 @@ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
 del inputs['token_type_ids']
 streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
 output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=float('inf'))
- output_text = tokenizer.decode(output[0], skip_prompt=True, skip_special_tokens=True)
+ output_text = tokenizer.decode(output[0], skip_special_tokens=True)
 ```
 
 **Our model can handle >10k input tokens thanks to the `rope_scaling` option.**
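For context, the change makes sense because `skip_prompt` is an argument of `TextStreamer`, not of `tokenizer.decode()`; `decode()` only honors options such as `skip_special_tokens`. Below is a minimal, self-contained sketch of the corrected pattern from the diff. The model ID is a placeholder assumption, and `max_new_tokens` is set to a finite value here rather than the README's `float('inf')`.

```python
# Minimal sketch of the corrected generation pattern shown in the diff above.
# Assumptions: the model ID is a placeholder, and max_new_tokens is finite here
# (the README's context lines use float('inf')).
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

model_id = "your-org/your-model"  # placeholder; use the repository's actual checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize rope scaling in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
inputs.pop("token_type_ids", None)  # generate() does not accept this key

# skip_prompt is a TextStreamer option: it hides the echoed prompt while streaming
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
output = model.generate(**inputs, streamer=streamer, use_cache=True, max_new_tokens=512)

# The commit drops skip_prompt here; decode() only understands skip_special_tokens
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```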