root committed
Commit 12b55a9
1 Parent(s): 48c5833

Initial commit

README.md ADDED
@@ -0,0 +1,61 @@
+ # Try out in the Hosted inference API
+ In the right panel, you can try out the model (although it only handles a short sequence length).
+ Try the following string: `A capsule containing asteroid soil samples landed in the Australian Outback. The precision required to carry out the mission thrilled many.<|endoftext|>`
+
+
+ # Model Loading
+
+ The model can be loaded in the following way:
+ ```
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ tokenizer = AutoTokenizer.from_pretrained("philippelaban/keep_it_simple")
+ kis_model = AutoModelForCausalLM.from_pretrained("philippelaban/keep_it_simple")
+ ```
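+
+ Note that, as the tokenizer files in this repository show, both `bos_token` and `eos_token` are `<|endoftext|>` (id 50256), which is why the hosted-inference string above ends with that marker. A quick sanity check (a minimal sketch, not part of the original card):
+ ```
+ # Both special tokens map to GPT-2's <|endoftext|> (id 50256 in config.json)
+ print(tokenizer.bos_token, tokenizer.bos_token_id)
+ print(tokenizer.eos_token, tokenizer.eos_token_id)
+ ```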
+
+ # Example use
+
+ The model can then be used by first inputting the paragraph to simplify, followed by a `bos_token` to indicate to the model that it should start simplifying.
+ Imagine we want to simplify the following paragraph:
+ ```
+ A small capsule containing asteroid soil samples that was dropped from 136,700 miles in space
+ by Japan's Hayabusa2 spacecraft landed as planned in the Australian Outback on December 6.
+ The extremely high precision required to carry out the mission thrilled many in Japan,
+ who said they took pride in its success.
+ ```
+
+ The following code can be run:
+ ```
+ paragraph = """A small capsule containing asteroid soil samples that was dropped from 136,700 miles in space by Japan's Hayabusa2 spacecraft landed as planned in the Australian Outback on December 6. The extremely high precision required to carry out the mission thrilled many in Japan, who said they took pride in its success."""
+
+ start_id = tokenizer.bos_token_id
+ tokenized_paragraph = [(tokenizer.encode(text=paragraph) + [start_id])]
+ input_ids = torch.LongTensor(tokenized_paragraph)
+
+ output_ids = kis_model.generate(input_ids, max_length=150, num_beams=4, do_sample=True, num_return_sequences=8)
+ output_ids = output_ids[:, input_ids.shape[1]:]
+ output = tokenizer.batch_decode(output_ids)
+ output = [o.replace(tokenizer.eos_token, "") for o in output]
+
+ for o in output:
+     print("----")
+     print(o)
+ ```
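+
+ Equivalently, the manual `eos_token` stripping above can typically be done by passing `skip_special_tokens=True` to `batch_decode`; a minor variant, not part of the original snippet:
+ ```
+ # Decode while dropping <|endoftext|> and any other special tokens directly
+ output = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
+ ```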
+
+ # Example output
+
+ When run, an output similar to the following should be obtained:
+
+ A small capsule containing samples of asteroid soil that was dropped from 136,700 miles, Japan's Hayabusa2 space probe, landed as planned on December 6. The mission was extremely precise, said many in Japan, and they took pride in its success.
+
+ A small capsule containing samples of asteroid soil that was dropped from 136,700 miles, Japan's Hayabusa2 space probe, landed as planned on December 6. The mission was extremely precise and well thought-out, said many in Japan, who took pride in the mission.
+
+ A small capsule containing soil samples that was dropped from 136,700 miles, Japan's Hayabusa2 space probe, landed as planned on December 6. The mission was designed to test the performance of the country's space fleet, which many said took pride in its success.
+
+ A small capsule containing soil samples that was dropped from 136,700 miles in space by Japan's Hayabusa2 probe was followed by a landing on the Outback. The precise timing of the mission thrilled many in Japan, who said they took pride in its success.
+
+ # Github repo
+
+ You can find more information, the scoring function, the training script, and an example training log in the Github repo:
+ https://github.com/tingofurro/keep_it_simple
config.json ADDED
@@ -0,0 +1,39 @@
+ {
+   "_name_or_path": "gpt2-medium",
+   "activation_function": "gelu_new",
+   "architectures": [
+     "GPT2LMHeadModel"
+   ],
+   "attn_pdrop": 0.1,
+   "bos_token_id": 50256,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 50256,
+   "gradient_checkpointing": false,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "gpt2",
+   "n_ctx": 1024,
+   "n_embd": 1024,
+   "n_head": 16,
+   "n_inner": null,
+   "n_layer": 24,
+   "n_positions": 1024,
+   "n_special": 0,
+   "predict_special_tokens": true,
+   "resid_pdrop": 0.1,
+   "scale_attn_weights": true,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "task_specific_params": {
+     "text-generation": {
+       "do_sample": true,
+       "max_length": 50
+     }
+   },
+   "transformers_version": "4.8.2",
+   "use_cache": true,
+   "vocab_size": 50257
+ }
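For reference, the config above describes a standard GPT-2 medium architecture (24 layers, 16 heads, 1024-dimensional embeddings, 1024-token context). It can be inspected programmatically without downloading the model weights; a minimal sketch, not part of the committed files:
```
from transformers import AutoConfig

# Fetches only config.json from the model repository
config = AutoConfig.from_pretrained("philippelaban/keep_it_simple")
print(config.model_type, config.n_layer, config.n_head, config.n_embd)  # gpt2 24 16 1024
```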
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7eb9c93de94a4ffc988db3dba9e9d614465e66761c880e0d72bf10c028b4131e
+ size 1444589475
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "unk_token": "<|endoftext|>", "pad_token": "!"}
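Note that `pad_token` is set to `"!"` (GPT-2 defines no padding token of its own), while `bos_token`, `eos_token`, and `unk_token` all map to `<|endoftext|>`. The mapping the loaded tokenizer actually uses can be confirmed as follows; a minimal sketch, not part of the committed files:
```
print(tokenizer.special_tokens_map)  # expected to show the mapping above
print(tokenizer.pad_token_id)        # vocabulary id of "!"
```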
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "add_prefix_space": false, "model_max_length": 1024, "special_tokens_map_file": null, "name_or_path": "gpt2-medium", "tokenizer_class": "GPT2Tokenizer"}
vocab.json ADDED
The diff for this file is too large to render. See raw diff