---
language: he

thumbnail: https://avatars1.githubusercontent.com/u/3617152?norod.jpg
widget:
- text: "האיש האחרון עלי אדמות ישב לבד בחדרו כשלפתע נשמעה נקישה"
- text: "שלום, קרואים לי"
- text: "הארי פוטר חייך חיוך נבוך"
- text: "החתול שלך מאוד חמוד ו"

license: mit
---

# distilgpt2-base-pretrained-he

A tiny GPT2-based Hebrew text generation model, initially trained on a TPUv3-8 that was made available to me through the [TPU Research Cloud](https://sites.research.google/trc/) program, then further fine-tuned on GPU.

## Dataset

### oscar (unshuffled deduplicated he) - [Homepage](https://oscar-corpus.com) | [Dataset Permalink](https://huggingface.co/datasets/viewer/?dataset=oscar&config=unshuffled_deduplicated_he)

The Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
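
To inspect this split yourself, it can be loaded with the Hugging Face `datasets` library. A minimal sketch, with the dataset name and config taken from the permalink above:

```python
# Minimal sketch: load the Hebrew OSCAR split with the datasets library.
# Depending on your datasets version, trust_remote_code=True may be required.
from datasets import load_dataset

dataset = load_dataset("oscar", "unshuffled_deduplicated_he", split="train")
print(dataset)                    # row count and column names
print(dataset[0]["text"][:200])   # first 200 characters of the first document
```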

### CC-100 (he) - [Homepage](https://data.statmt.org/cc-100/)

This corpus comprises monolingual data for 100+ languages and also includes data for romanized languages. It was constructed using the URLs and paragraph indices provided by the CC-Net repository, processing the January-December 2018 Common Crawl snapshots. Each file consists of documents separated by double newlines, with paragraphs within the same document separated by a single newline. The data was generated using the open-source CC-Net repository.
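
As a small illustration of the layout described above, the following sketch splits a decompressed CC-100 file into documents and paragraphs; the file name `he.txt` is just a placeholder:

```python
# Minimal sketch: split a decompressed CC-100 file into documents and paragraphs.
# "he.txt" is a placeholder path; point it at the extracted corpus file.
with open("he.txt", encoding="utf-8") as f:
    raw = f.read()

# Documents are separated by double newlines, paragraphs by single newlines.
documents = [doc.strip() for doc in raw.split("\n\n") if doc.strip()]
paragraphs = [doc.split("\n") for doc in documents]

print(f"{len(documents)} documents; the first one has {len(paragraphs[0])} paragraphs")
```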

### Misc
* Hebrew Twitter
* Wikipedia
* Various other sources

## Training

* Training was done on a TPUv3-8 VM using [Huggingface's clm-flax example script](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_clm_flax.py); an illustrative invocation is sketched after this list
* I have put together a list of items that might make it easier for others to use this script. The list was posted to [this discussion forum](https://discuss.huggingface.co/t/ideas-for-beginner-friendlier-tpu-vm-clm-training/8351)
* Further training was performed on GPU
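
Below is a hedged sketch of how such a run can be launched with `run_clm_flax.py`. The flags are the script's own arguments, but the hyperparameter values, paths, and config/tokenizer locations are placeholders, not the settings actually used for this model.

```bash
# Illustrative invocation of Huggingface's run_clm_flax.py on a TPU VM.
# Flag names are the script's own; values and paths are placeholders only.
python run_clm_flax.py \
    --output_dir ./distilgpt2-base-pretrained-he \
    --model_type gpt2 \
    --config_name ./distilgpt2-base-pretrained-he \
    --tokenizer_name ./distilgpt2-base-pretrained-he \
    --dataset_name oscar \
    --dataset_config_name unshuffled_deduplicated_he \
    --do_train --do_eval \
    --block_size 512 \
    --per_device_train_batch_size 64 \
    --learning_rate 3e-4 \
    --num_train_epochs 3 \
    --overwrite_output_dir
```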

## Usage


#### Sample usage code

```python

from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

def main():
    model_name="Norod78/distilgpt2-base-pretrained-he"

    prompt_text = "שלום, קוראים לי"
    generated_max_length = 192

    print("Loading model...")
    model =  AutoModelForCausalLM.from_pretrained(model_name)
    print('Loading Tokenizer...')
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    text_generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer)

    print("Generating text...")
    result = text_generator(prompt_text, num_return_sequences=1, batch_size=1, do_sample=True, top_k=40, top_p=0.92, temperature = 1, repetition_penalty=5.0, max_length = generated_max_length)

    print("result = " + str(result))

if __name__ == '__main__':
    main()
```
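
Note that generation is sampled (`do_sample=True`), so each run will produce different text; `top_k`, `top_p` and `repetition_penalty` can be tuned to trade off diversity against repetition.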