sushruth13 committed on
Commit
850899f
1 Parent(s): 2d1dba7
README.md ADDED
@@ -0,0 +1,62 @@
+ ---
+ language: en
+ license: mit
+ datasets:
+ - bookcorpus
+ - wikipedia
+ ---
+
+ # XLNet (base-sized model)
+
+ XLNet model pre-trained on the English language. It was introduced in the paper [XLNet: Generalized Autoregressive Pretraining for Language Understanding](https://arxiv.org/abs/1906.08237) by Yang et al. and first released in [this repository](https://github.com/zihangdai/xlnet/).
+
+ Disclaimer: The team releasing XLNet did not write a model card for this model, so this model card has been written by the Hugging Face team.
+
+ ## Model description
+
+ XLNet is a new unsupervised language representation learning method based on a novel generalized permutation language modeling objective. Additionally, XLNet employs Transformer-XL as the backbone model, exhibiting excellent performance for language tasks involving long context. Overall, XLNet achieves state-of-the-art (SOTA) results on various downstream language tasks including question answering, natural language inference, sentiment analysis, and document ranking.
+
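+ Not part of the original card: the permutation objective is exposed in `transformers` through the `perm_mask` and `target_mapping` arguments of `XLNetLMHeadModel`. The sketch below shows one position being predicted from a fully bidirectional context; the example sentence is an arbitrary choice.
+
+ ```python
+ import torch
+ from transformers import XLNetLMHeadModel, XLNetTokenizer
+
+ tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
+ model = XLNetLMHeadModel.from_pretrained('xlnet-base-cased')
+
+ # Encode a sentence; the model will predict its last token from all the others.
+ input_ids = torch.tensor(
+     tokenizer.encode("Hello, my dog is very cute", add_special_tokens=False)
+ ).unsqueeze(0)  # shape: (1, seq_len)
+ seq_len = input_ids.shape[1]
+
+ # perm_mask[b, j, k] = 1.0 means token j may NOT attend to token k.
+ # Masking the last column hides the final token from every position.
+ perm_mask = torch.zeros((1, seq_len, seq_len))
+ perm_mask[:, :, -1] = 1.0
+
+ # target_mapping selects the position(s) to predict: only the last one here.
+ target_mapping = torch.zeros((1, 1, seq_len))
+ target_mapping[0, 0, -1] = 1.0
+
+ outputs = model(input_ids, perm_mask=perm_mask, target_mapping=target_mapping)
+ next_token_logits = outputs.logits  # shape: (1, 1, vocab_size)
+ ```
+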
+
+ ## Intended uses & limitations
+
+ The model is mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=xlnet) to look for fine-tuned versions on a task that interests you.
+
+ Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.
+
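+ As a hedged illustration (not from the original card), a fine-tuning setup for sequence classification might start like this; `num_labels=2` and the example text are placeholder assumptions:
+
+ ```python
+ from transformers import XLNetForSequenceClassification, XLNetTokenizer
+
+ tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
+ # The classification head on top of the backbone is freshly initialized
+ # and only becomes useful after fine-tuning on labeled data.
+ model = XLNetForSequenceClassification.from_pretrained(
+     'xlnet-base-cased', num_labels=2)  # num_labels is an assumption
+
+ inputs = tokenizer("This movie was great!", return_tensors="pt")
+ outputs = model(**inputs)
+ logits = outputs.logits  # shape: (1, num_labels); untrained until fine-tuned
+ ```
+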
+
+ ## Usage
+
+ Here is how to use this model to get the features of a given text in PyTorch:
+
+ ```python
+ from transformers import XLNetTokenizer, XLNetModel
+
+ # Load the SentencePiece tokenizer and the pre-trained base model.
+ tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
+ model = XLNetModel.from_pretrained('xlnet-base-cased')
+
+ # Tokenize the input and run a forward pass.
+ inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
+ outputs = model(**inputs)
+
+ # Final-layer hidden states, shape (batch_size, sequence_length, hidden_size).
+ last_hidden_states = outputs.last_hidden_state
+ ```
+
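+ This commit also adds a TensorFlow checkpoint (`tf_model.h5`), so the analogous TensorFlow usage (not in the original card) should be:
+
+ ```python
+ from transformers import TFXLNetModel, XLNetTokenizer
+
+ tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
+ model = TFXLNetModel.from_pretrained('xlnet-base-cased')
+
+ # Same pipeline as the PyTorch example, but with TensorFlow tensors.
+ inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
+ outputs = model(inputs)
+
+ last_hidden_states = outputs.last_hidden_state
+ ```
+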
+ ### BibTeX entry and citation info
+
+ ```bibtex
+ @article{DBLP:journals/corr/abs-1906-08237,
+   author     = {Zhilin Yang and
+                 Zihang Dai and
+                 Yiming Yang and
+                 Jaime G. Carbonell and
+                 Ruslan Salakhutdinov and
+                 Quoc V. Le},
+   title      = {XLNet: Generalized Autoregressive Pretraining for Language Understanding},
+   journal    = {CoRR},
+   volume     = {abs/1906.08237},
+   year       = {2019},
+   url        = {http://arxiv.org/abs/1906.08237},
+   eprinttype = {arXiv},
+   eprint     = {1906.08237},
+   timestamp  = {Mon, 24 Jun 2019 17:28:45 +0200},
+   biburl     = {https://dblp.org/rec/journals/corr/abs-1906-08237.bib},
+   bibsource  = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```
config.json ADDED
@@ -0,0 +1,38 @@
+ {
+   "architectures": [
+     "XLNetLMHeadModel"
+   ],
+   "attn_type": "bi",
+   "bi_data": false,
+   "bos_token_id": 1,
+   "clamp_len": -1,
+   "d_head": 64,
+   "d_inner": 3072,
+   "d_model": 768,
+   "dropout": 0.1,
+   "end_n_top": 5,
+   "eos_token_id": 2,
+   "ff_activation": "gelu",
+   "initializer_range": 0.02,
+   "layer_norm_eps": 1e-12,
+   "mem_len": null,
+   "model_type": "xlnet",
+   "n_head": 12,
+   "n_layer": 12,
+   "pad_token_id": 5,
+   "reuse_len": null,
+   "same_length": false,
+   "start_n_top": 5,
+   "summary_activation": "tanh",
+   "summary_last_dropout": 0.1,
+   "summary_type": "last",
+   "summary_use_proj": true,
+   "task_specific_params": {
+     "text-generation": {
+       "do_sample": true,
+       "max_length": 250
+     }
+   },
+   "untie_r": true,
+   "vocab_size": 32000
+ }
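
Not part of the commit: a quick sanity check that reads these hyperparameters back through `transformers`. The printed values follow from the file above (12 layers, 12 heads, hidden size 768, i.e. the "base" configuration).

```python
from transformers import XLNetConfig

config = XLNetConfig.from_pretrained('xlnet-base-cased')
# These attributes correspond directly to the keys in config.json.
print(config.n_layer, config.n_head, config.d_model)  # 12 12 768
```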
generation_config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "eos_token_id": 2,
+   "pad_token_id": 5,
+   "transformers_version": "4.27.0.dev0"
+ }
generation_config_for_text_generation.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 1,
+   "do_sample": true,
+   "eos_token_id": 2,
+   "max_length": 250,
+   "pad_token_id": 5,
+   "transformers_version": "4.27.0.dev0"
+ }
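
Not part of the commit: a sketch of how these sampling settings map onto `generate()`. The prompt is an arbitrary assumption; the token ids and sampling parameters mirror the file above.

```python
from transformers import XLNetLMHeadModel, XLNetTokenizer

tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
model = XLNetLMHeadModel.from_pretrained('xlnet-base-cased')

input_ids = tokenizer("Today is a beautiful day and", return_tensors="pt").input_ids
# Mirror generation_config_for_text_generation.json:
# sampled decoding, up to 250 tokens, with the config's special token ids.
output_ids = model.generate(input_ids, do_sample=True, max_length=250,
                            bos_token_id=1, eos_token_id=2, pad_token_id=5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```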
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b13dc2d3664a385b92d087229d5410d9e3020ede976e7ff4c62c9c85cd969a42
+ size 467042463
rust_model.ot ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:938fdad36b1dfe04257318f4478f3999487b37e36e60342f335d040f54c16186
+ size 565354898
spiece.model ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1f8c1c0bc2854d1af911a8550288c1258af5ba50277f3a5c829b98eb86fc5646
+ size 798011
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c91def0a6a0adae911fed1630194d76497736887380848054494ed9ed9324c32
+ size 565485600
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff