---
language: he

thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "עוד בימי קדם"
- text: "קוראים לי דורון ואני מעוניין ל"
- text: "קוראים לי איציק ואני חושב ש"
- text: "החתול שלך מאוד חמוד ו"

license: mit
---

# hebrew-distilgpt2

A tiny GPT2-based Hebrew text generation model, trained on a TPUv3-8 that was made available to me through the [TPU Research Cloud](https://sites.research.google/trc/) program.

## Dataset

oscar / unshuffled_deduplicated_he - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he)

The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
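
For reference, the corpus can be inspected with Huggingface's `datasets` library. This is a minimal sketch, not part of the original training setup; the exact filtering and tokenization steps used for this model are not described in this card:

```python
# pip install datasets
from datasets import load_dataset

# Stream the deduplicated Hebrew portion of OSCAR without downloading it all
dataset = load_dataset("oscar", "unshuffled_deduplicated_he", split="train", streaming=True)

# Peek at the first raw document
print(next(iter(dataset))["text"][:200])
```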

## Training

* Done on a TPUv3-8 VM using [Huggingface's clm-flax example script](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_clm_flax.py)
* I have made a list of items which might make it easier for others to use this script. The list was posted to [this discussion forum](https://discuss.huggingface.co/t/ideas-for-beginner-friendlier-tpu-vm-clm-training/8351); see the sketch below for loading a resulting Flax checkpoint in PyTorch.
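
The Flax example script saves checkpoints in Flax format; `transformers` can convert them on load with `from_flax=True`. A minimal sketch, assuming a hypothetical local output directory `./hebrew-distilgpt2` (substitute whatever `--output_dir` was passed to the training script):

```python
from transformers import AutoModelForCausalLM

# Hypothetical path: replace with the --output_dir used for run_clm_flax.py.
# from_flax=True converts the saved Flax weights into a PyTorch model.
model = AutoModelForCausalLM.from_pretrained("./hebrew-distilgpt2", from_flax=True)

# Optionally re-save as native PyTorch weights for faster loading later
model.save_pretrained("./hebrew-distilgpt2-pt")
```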

## Usage

#### Simple usage sample code

```python

# pip install tokenizers==0.10.3 transformers==4.8.0

import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-distilgpt2")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-distilgpt2", pad_token_id=tokenizer.eos_token_id)

prompt_text = "אני אוהב שוקולד ועוגות"
max_len = 512
sample_output_num = 3
seed = 1000

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = 0 if not torch.cuda.is_available() else torch.cuda.device_count()

print(f"device: {device}, n_gpu: {n_gpu}")

# Seed all RNGs for reproducible sampling
np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
    torch.cuda.manual_seed_all(seed)

model.to(device)

encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False, return_tensors="pt")
encoded_prompt = encoded_prompt.to(device)

# An empty prompt means unconditional generation
input_ids = None if encoded_prompt.size()[-1] == 0 else encoded_prompt

print("input_ids = " + str(input_ids))

if input_ids is not None:
    # Budget max_len for the prompt as well, capped at the 1024-token context
    max_len += len(encoded_prompt[0])
    if max_len > 1024:
        max_len = 1024

print("Updated max_len = " + str(max_len))

stop_token = "<|endoftext|>"
new_lines = "\n\n\n"

# Sample several continuations with top-k / nucleus sampling
sample_outputs = model.generate(
    input_ids,
    do_sample=True,
    max_length=max_len,
    top_k=50,
    top_p=0.95,
    num_return_sequences=sample_output_num
)

print(100 * '-' + "\n\t\tOutput\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
    text = tokenizer.decode(sample_output, skip_special_tokens=True)

    # Truncate at the stop token, if one appears in the decoded text
    stop_index = text.find(stop_token)
    if stop_index >= 0:
        text = text[:stop_index]

    # Truncate at the first run of 3 newlines, if present
    new_lines_index = text.find(new_lines)
    if new_lines_index >= 0:
        text = text[:new_lines_index]

    print("\n{}: {}".format(i, text))
    print("\n" + 100 * '-')

```
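
For quick experimentation, the same model can also be driven through the high-level `pipeline` API. A shorter sketch (not from the original card), using the same sampling parameters as above:

```python
from transformers import pipeline

# The pipeline bundles tokenization, generation and decoding into one call
generator = pipeline("text-generation", model="Norod78/hebrew-distilgpt2")

results = generator(
    "אני אוהב שוקולד ועוגות",  # same sample prompt as above
    max_length=100,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    num_return_sequences=3,
)

for i, result in enumerate(results):
    print(f"{i}: {result['generated_text']}")
```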