Aureliano committed on
Commit
506ab66
1 Parent(s): f041dc7
README.md CHANGED
@@ -1,3 +1,35 @@
  ---
+ language: en
+
  license: apache-2.0
  ---
+
+ ## ELECTRA for IF
+
+ **ELECTRA** is a method for self-supervised language representation learning. ELECTRA models are trained to distinguish "real" input tokens from "fake" input tokens generated by another neural network, similar to the discriminator of a [GAN](https://arxiv.org/pdf/1406.2661.pdf).
+
+ For a detailed description and experimental results, please refer to the original paper [ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators](https://openreview.net/pdf?id=r1xMH1BtvB).
+
+ This repository contains a small ELECTRA discriminator fine-tuned on a corpus of interactive fiction commands labelled with the WordNet synset offset of the verb in each sentence. The original dataset was collected from the lists of actions in the walkthroughs of the games included in the [Jericho](https://github.com/microsoft/jericho) framework and manually annotated. For more information, visit https://github.com/aporporato/electra and https://github.com/aporporato/jericho-corpora.
+
+ ## How to use the discriminator in `transformers`
+
+ ```python
+ from transformers import ElectraForPreTraining, ElectraTokenizerFast
+ import torch
+
+ discriminator = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")
+ tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")
+
+ sentence = "The quick brown fox jumps over the lazy dog"
+ fake_sentence = "The quick brown fox fake over the lazy dog"
+
+ fake_tokens = tokenizer.tokenize(fake_sentence)
+ fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
+ discriminator_outputs = discriminator(fake_inputs)
+ predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)
+
+ [print("%7s" % token, end="") for token in fake_tokens]
+
+ [print("%7s" % int(prediction), end="") for prediction in predictions.squeeze().tolist()]
+ ```
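The snippet in the README exercises the base `google/electra-small-discriminator`. As a minimal sketch of querying the fine-tuned checkpoint described above for the verb synset of an interactive fiction command: the model id below is a placeholder, and the sketch assumes the checkpoint loads with a sequence-classification head whose `id2label` mapping carries the WordNet synset offsets (neither is confirmed by this commit alone).

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

# Placeholder model id: replace with this repository's id on the Hugging Face Hub.
model_id = "<this-repo-id>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

command = "take the brass lantern"
inputs = tokenizer(command, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Highest-scoring class; id2label is assumed to map class indices
# to the WordNet synset offsets used as labels in the corpus.
predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```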
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "_name_or_path": "wn_full-trainer",
+   "architectures": [
+     "ElectraModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "embedding_size": 128,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 256,
+   "initializer_range": 0.02,
+   "intermediate_size": 1024,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "electra",
+   "num_attention_heads": 4,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "problem_type": "single_label_classification",
+   "summary_activation": "gelu",
+   "summary_last_dropout": 0.1,
+   "summary_type": "first",
+   "summary_use_proj": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.17.0",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
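These values describe the ELECTRA-small geometry (128-dimensional embeddings projected into a 256-dimensional, 12-layer, 4-head transformer). As a sketch, the same configuration can be rebuilt locally with standard `ElectraConfig` arguments; only the values are taken from the file above.

```python
from transformers import ElectraConfig

# Recreate the ELECTRA-small geometry recorded in config.json.
config = ElectraConfig(
    vocab_size=30522,
    embedding_size=128,
    hidden_size=256,
    intermediate_size=1024,
    num_attention_heads=4,
    num_hidden_layers=12,
    max_position_embeddings=512,
)

print(config.model_type)           # "electra"
print(config.num_attention_heads)  # 4
```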
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a613fb2aa3569bac1a4a5cb2be88706cdccd33a72dddfc78051353e6ec07cb46
+ size 54011377
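`pytorch_model.bin` (like `tf_model.h5` below) is stored as a Git LFS pointer; the lines above record only the object's SHA-256 and size, not the weights themselves. A small sketch, assuming the binary has already been downloaded to a hypothetical local `pytorch_model.bin`, of checking it against the pointer:

```python
import hashlib
from pathlib import Path

# Hypothetical local path to the downloaded weights file.
path = Path("pytorch_model.bin")

digest = hashlib.sha256(path.read_bytes()).hexdigest()
print(path.stat().st_size)  # should match the "size" field of the LFS pointer
print(digest)               # should match the "oid sha256:..." field
```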
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:56a0bfbd98a2fb0c1a57ad0c8a4dd6234cc17b2e27a21b5c33971a5045d5ecd7
+ size 54198792
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "google/electra-small-discriminator", "tokenizer_class": "ElectraTokenizer"}
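These settings match the stock `google/electra-small-discriminator` tokenizer (lowercased WordPiece, 512-token limit). A minimal sketch, assuming the tokenizer files from this commit sit in a hypothetical local directory `./electra-if`:

```python
from transformers import ElectraTokenizerFast

# Hypothetical local directory holding tokenizer.json, vocab.txt,
# tokenizer_config.json and special_tokens_map.json from this commit.
tokenizer = ElectraTokenizerFast.from_pretrained("./electra-if")

# do_lower_case is true, so input is lowercased before WordPiece splitting.
print(tokenizer.tokenize("Take the brass lantern"))
print(tokenizer.model_max_length)  # 512
```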
vocab.txt ADDED
The diff for this file is too large to render. See raw diff