
## Model info

This is a BPE tokenizer retrained from scratch on the concatenated WikiText-103 train, validation, and test sets. The vocabulary has 28,439 entries.

This tokenizer was used to tokenize the text for the GPT-2 model trained on WikiText-103.

## Usage

You can download the tokenizer directly from the Hugging Face Hub as follows:

```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("Kristijan/wikitext-103-tokenizer")
```
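Once loaded, the tokenizer behaves like any other `GPT2TokenizerFast`. As a minimal illustration (the sample sentence below is arbitrary, not part of the model card):

```python
# Encode a sample sentence into token IDs and decode it back.
text = "The tokenizer was trained on WikiText-103."
ids = tokenizer(text)["input_ids"]
print(ids)                    # token IDs drawn from the 28,439-entry vocabulary
print(tokenizer.decode(ids))  # round-trips back to the original text
```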

Alternatively, after cloning/downloading the tokenizer files, you can load the tokenizer from the local folder using the `from_pretrained()` method as follows:

```python
from transformers import GPT2TokenizerFast

# Point this at the local folder containing the merges and vocab files.
tokenizer = GPT2TokenizerFast.from_pretrained(path_to_folder_with_merges_and_vocab_files)
```