Update README.md
README.md CHANGED
@@ -58,6 +58,10 @@ When calling `tokenizer.decode`, it's important to pass `clean_up_tokenization_s
 
 (Note: encoding prepends the `<|endoftext|>` token, as this marks the start of a document to our model. This token can be removed from the decoded output by passing `skip_special_tokens=True` to `tokenizer.decode`.)
 
+## License
+
+CC-BY-NC 4.0
+
 ## Credits
 
 The model was developed by Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen-tau Yih, Luke Zettlemoyer and Mike Lewis.
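For reference, a minimal sketch of the decoding note quoted in the hunk above, assuming the Hugging Face `transformers` tokenizer for this model; the checkpoint name `facebook/incoder-1B` used below is an assumption for illustration, not taken from this diff:

```python
# Sketch only: shows the effect of skip_special_tokens and
# clean_up_tokenization_spaces described in the README note.
# The checkpoint name is an assumption, not part of this commit.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/incoder-1B")

ids = tokenizer.encode("def hello():")  # encoding prepends <|endoftext|>

# Keeps the prepended <|endoftext|> marker in the decoded string.
print(tokenizer.decode(ids, clean_up_tokenization_spaces=False))

# Drops special tokens such as <|endoftext|> from the decoded output.
print(tokenizer.decode(ids, clean_up_tokenization_spaces=False,
                       skip_special_tokens=True))
```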