Model Card for omarmomen/ptb_bpe_tokenizer_10k
This model is part of the experiments in my master's thesis titled "Linguistic Structure Induction from Language Models" (https://arxiv.org/abs/2403.09714).
"omarmomen/ptb_bpe_tokenizer_10k" is a RobertaTokenizer pretrained on the Penn Tree Bank Training dataset (cased) with 10K tokens.