xlnet-base-cased

---
language: en
license: mit
datasets:
- bookcorpus
- wikipedia
---

# XLNet (base-sized model)

XLNet model pre-trained on the English language. It was introduced in the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Yang et al. and first released in [this repository](https://github.com/zihangdai/xlnet/).

Disclaimer: The team releasing XLNet did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

XLNet is an unsupervised language representation learning method based on a novel generalized permutation language modeling objective. It employs Transformer-XL as its backbone model, which gives it excellent performance on language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks, including question answering, natural language inference, sentiment analysis, and document ranking.

## Intended uses & limitations

The model is mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=xlnet) to look for fine-tuned versions on a task that interests you.

Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification, or question answering. For tasks such as text generation, you should look at a model like GPT-2. A short fine-tuning sketch is included at the end of this card.

## Usage

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import XLNetTokenizer, XLNetModel

# Load the pre-trained tokenizer and model weights
tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetModel.from_pretrained('xlnet-base-cased')

# Tokenize a sentence and run it through the model
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

# Hidden states of the last layer, one vector per input token
last_hidden_states = outputs.last_hidden_state
```

### BibTeX entry and citation info

```bibtex
@article{DBLP:journals/corr/abs-1906-08237,
  author     = {Zhilin Yang and
                Zihang Dai and
                Yiming Yang and
                Jaime G. Carbonell and
                Ruslan Salakhutdinov and
                Quoc V. Le},
  title      = {XLNet: Generalized Autoregressive Pretraining for Language Understanding},
  journal    = {CoRR},
  volume     = {abs/1906.08237},
  year       = {2019},
  url        = {http://arxiv.org/abs/1906.08237},
  eprinttype = {arXiv},
  eprint     = {1906.08237},
  timestamp  = {Mon, 24 Jun 2019 17:28:45 +0200},
  biburl     = {https://dblp.org/rec/journals/corr/abs-1906-08237.bib},
  bibsource  = {dblp computer science bibliography, https://dblp.org}
}
```
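## Fine-tuning example

As an illustration of the fine-tuning path described under *Intended uses & limitations*, here is a minimal sketch of fine-tuning this checkpoint for sequence classification with the `Trainer` API. This is a sketch under stated assumptions rather than an official recipe: the IMDb dataset, the hyperparameters, and the `xlnet-imdb` output path are illustrative choices, not anything prescribed by this card.

```python
from datasets import load_dataset
from transformers import (
    Trainer,
    TrainingArguments,
    XLNetForSequenceClassification,
    XLNetTokenizer,
)

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")

# The classification head on top of the pre-trained body is randomly
# initialized; fine-tuning is what trains it.
model = XLNetForSequenceClassification.from_pretrained(
    "xlnet-base-cased", num_labels=2
)

# Assumption: IMDb movie-review sentiment as a stand-in binary task
dataset = load_dataset("imdb")

def tokenize(batch):
    # Fixed-length padding keeps the sketch simple; dynamic padding with a
    # data collator is the more efficient choice in practice
    return tokenizer(
        batch["text"], truncation=True, padding="max_length", max_length=128
    )

tokenized = dataset.map(tokenize, batched=True)

training_args = TrainingArguments(
    output_dir="xlnet-imdb",          # illustrative output path
    per_device_train_batch_size=8,
    num_train_epochs=1,
    learning_rate=2e-5,               # illustrative hyperparameters
)

trainer = Trainer(
    model=model,
    args=training_args,
    # Small shuffled subset so the sketch runs quickly
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()
```

The `Trainer` handles batching, optimization, and checkpointing; for full control you could instead write a standard PyTorch training loop over `tokenized["train"]`.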