beomi committed
Commit
b10288c
1 Parent(s): 2dd0a00

Update README.md

Files changed (1)
  1. README.md +0 -25
README.md CHANGED
@@ -91,31 +91,6 @@ Total amount of tokens: (Approx.) 15B Tokens (*using expanded tokenizer. with or

  TBD

- ## Note for oobabooga/text-generation-webui
-
- Remove `ValueError` from the `except` clause in the `load_tokenizer` function (line 109 or nearby) in `modules/models.py`:
-
- ```diff
- diff --git a/modules/models.py b/modules/models.py
- index 232d5fa..de5b7a0 100644
- --- a/modules/models.py
- +++ b/modules/models.py
- @@ -106,7 +106,7 @@ def load_tokenizer(model_name, model):
-              trust_remote_code=shared.args.trust_remote_code,
-              use_fast=False
-          )
- -    except ValueError:
- +    except:
-          tokenizer = AutoTokenizer.from_pretrained(
-              path_to_model,
-              trust_remote_code=shared.args.trust_remote_code,
- ```
-
- Since Llama-2-Ko uses the fast tokenizer provided by the HF `tokenizers` package, NOT the sentencepiece package,
- the `use_fast=True` option is required when initializing the tokenizer.
-
- Apple Silicon does not support BF16 computation; use the CPU instead. (BF16 is supported on NVIDIA GPUs.)
-
  ## Citation

  TBD
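
For reference, the note removed above boils down to loading the tokenizer with `use_fast=True`. A minimal sketch of that load, assuming the Hub repo id `beomi/llama-2-ko-7b` (not stated in this diff) and the standard `transformers` `AutoTokenizer` API:

```python
from transformers import AutoTokenizer

# Llama-2-Ko ships a HF `tokenizers` (fast) tokenizer rather than a
# sentencepiece model, so the tokenizer must be loaded with use_fast=True.
tokenizer = AutoTokenizer.from_pretrained(
    "beomi/llama-2-ko-7b",  # assumed repo id, for illustration only
    use_fast=True,
)
print(tokenizer.tokenize("안녕하세요"))  # sanity check: tokenizes Korean text
```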
 
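The removed Apple Silicon note likewise maps to a simple dtype/device choice; a hedged sketch (the fallback policy shown illustrates the note, it is not code from this repo):

```python
import torch

# Per the removed note: BF16 is available on NVIDIA GPUs but not on
# Apple Silicon, where the CPU should be used instead.
if torch.cuda.is_available():
    device, dtype = "cuda", torch.bfloat16
else:
    device, dtype = "cpu", torch.float32

print(f"loading on {device} with {dtype}")
```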