Update README.md
README.md
## Usage

### Model
See [https://github.com/dpfried/incoder](https://github.com/dpfried/incoder) for example code.
This 6B model comes in two versions: with weights in full-precision (float32, stored on branch `main`) and weights in half-precision (float16, stored on branch `float16`). The versions can be loaded as follows:
`model = AutoModelForCausalLM.from_pretrained("facebook/incoder-6B", revision="float16", torch_dtype=torch.float16, low_cpu_mem_usage=True)`
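As a sketch of the choice between the two branches (the `use_half` flag and the helper function are illustrative, not part of this repository), the keyword arguments above can be assembled like this:

```python
# Illustrative helper (not part of this model card): build the
# from_pretrained kwargs for the precision you want.
def incoder_kwargs(use_half: bool) -> dict:
    if use_half:
        # Half-precision weights live on the `float16` branch; pass
        # torch.float16 instead of the string if your transformers
        # version does not accept dtype names as strings.
        return {
            "revision": "float16",
            "torch_dtype": "float16",
            "low_cpu_mem_usage": True,
        }
    # Full-precision float32 weights live on the default `main` branch.
    return {"revision": "main"}

# Loading itself requires the transformers library and a large
# checkpoint download, so it is left commented out here:
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "facebook/incoder-6B", **incoder_kwargs(use_half=True)
# )
```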
### Tokenizer
`tokenizer = AutoTokenizer.from_pretrained("facebook/incoder-6B")`
Note: the incoder-1B and incoder-6B tokenizers are identical, so `facebook/incoder-1B` could also be used.
When calling `tokenizer.decode`, it's important to pass `clean_up_tokenization_spaces=False` to avoid removing spaces after punctuation:
`tokenizer.decode(tokenizer.encode("from ."), clean_up_tokenization_spaces=False)`
(Note: encoding prepends the `<|endoftext|>` token, as this marks the start of a document to our model. This token can be removed from the decoded output by passing `skip_special_tokens=True` to `tokenizer.decode`.)
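Putting the tokenizer notes together as a sketch: the tokenizer calls need the transformers library and a vocabulary download, so they are commented out below, and the `strip_bos` helper is only an illustration of what `skip_special_tokens=True` removes from decoded text (the flag, not the helper, is the real mechanism).

```python
# Token prepended by the InCoder tokenizer on encode, marking the
# start of a document.
BOS = "<|endoftext|>"

# from transformers import AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained("facebook/incoder-6B")
# ids = tokenizer.encode("from .")
# text = tokenizer.decode(ids, clean_up_tokenization_spaces=False)
# clean = tokenizer.decode(ids, clean_up_tokenization_spaces=False,
#                          skip_special_tokens=True)

def strip_bos(decoded: str) -> str:
    """Drop a leading <|endoftext|>, mirroring skip_special_tokens=True."""
    return decoded[len(BOS):] if decoded.startswith(BOS) else decoded
```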
## Credits
The model was developed by Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer and Mike Lewis.