---
language:
- sk
tags:
- Slovak GPT-J
- pytorch
- causal-lm
license: gpl-3.0
---

# Slovak GPT-J-1.4B
Slovak GPT-J-1.4B, with a whopping `1,415,283,792` parameters, is the latest and largest model released in the Slovak GPT-J series. Smaller variants, [Slovak GPT-J-405M](https://huggingface.co/Milos/slovak-gpt-j-405M) and [Slovak GPT-J-162M](https://huggingface.co/Milos/slovak-gpt-j-162M), are still available.
## Model Description
The model is based on [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax/) and has over 1.4B trainable parameters.

<figure>

| Hyperparameter       | Value                                                                                                                                   |
|----------------------|-----------------------------------------------------------------------------------------------------------------------------------------|
| \\(n_{parameters}\\) | 1,415,283,792                                                                                                                           |
| \\(n_{layers}\\)     | 24                                                                                                                                      |
| \\(d_{model}\\)      | 2048                                                                                                                                    |
| \\(d_{ff}\\)         | 16384                                                                                                                                   |
| \\(n_{heads}\\)      | 16                                                                                                                                      |
| \\(d_{head}\\)       | 256                                                                                                                                     |
| \\(n_{ctx}\\)        | 2048                                                                                                                                    |
| \\(n_{vocab}\\)      | 50256 (same tokenizer as GPT-2/3&dagger;)                                                                                               |
| Positional Encoding  | [Rotary Position Embedding (RoPE)](https://arxiv.org/abs/2104.09864)                                                                    |
| RoPE Dimensions      | [64](https://github.com/kingoflolz/mesh-transformer-jax/blob/f2aa66e0925de6593dcbb70e72399b97b4130482/mesh_transformer/layers.py#L223) |

<p><strong>&dagger;</strong> ByteLevelBPETokenizer was trained on the same Slovak corpus.</p></figure>
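
If you want to double-check these values against the released checkpoint, the config can be inspected directly; the attribute names below assume the standard GPT-J config class in `transformers`:

```python
from transformers import AutoConfig

# Read a few of the hyperparameters above straight from the released config
# (attribute names assume the standard GPT-J config class in `transformers`).
config = AutoConfig.from_pretrained("Milos/slovak-gpt-j-1.4B")
print(config.n_layer, config.n_embd, config.n_head, config.n_positions)  # table above: 24, 2048, 16, 2048
print(config.vocab_size, config.rotary_dim)                              # table above: 50256, 64
```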

## Training data

Slovak GPT-J models were trained on a privately collected dataset consisting of predominantly Slovak text spanning different categories, e.g. web, news articles or even biblical texts - in total, over 40GB of text data was used to train this model.
The dataset was preprocessed and cleaned in a specific way that involves a few minor caveats, so in order to achieve the expected performance, please refer to the [How to use](#how-to-use) section. Please keep in mind that despite the effort to remove inappropriate parts of the corpus, the model might still generate sensitive content or leak sensitive information.

## Training procedure

This model was trained for a bit more than 26.5 billion tokens over 48,001 steps on a TPU v3-8 pod. The cross-entropy validation loss at the last step was `2.657`.

## Intended Use

Like the original GPT-J, Slovak GPT-J learns an inner representation of the language that can be used to extract features useful for downstream tasks; however, its intended use is text generation from a prompt.
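
As a minimal, self-contained sketch of the feature-extraction use case (mean-pooling the last hidden layer is just one common choice, not something prescribed by this model card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Milos/slovak-gpt-j-1.4B")
model = AutoModelForCausalLM.from_pretrained("Milos/slovak-gpt-j-1.4B")

inputs = tokenizer("Mám rád slovenčinu", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# Mean-pool the last hidden layer into a single feature vector per sequence.
features = outputs.hidden_states[-1].mean(dim=1)  # shape (1, 2048), i.e. (batch, d_model)
```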

### How to use

This model along with the tokenizer can be easily loaded using the `AutoModelForCausalLM` functionality:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Milos/slovak-gpt-j-1.4B")
model = AutoModelForCausalLM.from_pretrained("Milos/slovak-gpt-j-1.4B")
```

When writing a prompt, keep these three things in mind and you should be good to go:
1. Never leave trailing whitespaces. There's a difference between how the tokenizer encodes "Mám rád slovenčinu" (no space after `slovenčinu`) and "Mám rád slovenčinu " (trailing space after `slovenčinu`), i.e. `[12805, 2872, 46878]` != `[12805, 2872, 46878, 221]` - see the snippet below.
2. Always use good ol' US English primary double quotation marks, i.e. `""` instead of `„“`.
3. In case of a new line, always enter `\n\n` instead of a single `\n`.
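
A quick way to verify the first point with the `tokenizer` loaded above:

```python
# The trailing space becomes an extra token (ids as listed in point 1 above).
ids_no_space = tokenizer("Mám rád slovenčinu")["input_ids"]
ids_trailing = tokenizer("Mám rád slovenčinu ")["input_ids"]
print(ids_no_space)   # [12805, 2872, 46878]
print(ids_trailing)   # [12805, 2872, 46878, 221]
```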

To illustrate basic text generation:
```
>>> prompt = "Tradičné jedlo na Orave sú"
>>> encoded_input = tokenizer(prompt, return_tensors='pt')
>>> output = model.generate(**encoded_input)
>>> tokenizer.decode(output[0])
'Tradičné jedlo na Orave sú bryndzové halušky\n\nNa Orave sa v minulosti varilo viac druhov'
```

### Capabilities, Limitations, and Biases

Based on the parameter count and our internal testing, this should be by far the most capable of all models released in the Slovak GPT-J series.
For the sake of simplicity, I have omitted all the boilerplate code and swapped `\n` for new lines in the examples that follow. These were generated with both `top_p` and `temperature` set to `0.9`.
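
The examples can be reproduced roughly as follows (a sketch using the `tokenizer` and `model` loaded earlier; `do_sample=True` and `max_length` are my assumptions, only the `top_p` and `temperature` values come from above):

```python
prompt = "Nasledujú najlepšie vtipy.\n\nPrečo"  # prompt from the joke example below
encoded_input = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **encoded_input,
    do_sample=True,    # sampling must be on for top_p/temperature to take effect
    top_p=0.9,
    temperature=0.9,
    max_length=64,     # illustrative choice
)
print(tokenizer.decode(output[0]))
```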

A spin on "Mor ho!" by Samo Chalupka
```
PROMPT
Zleteli orly z Tatry, tiahnu na podolia, ponad vysoké hory, ponad rovné polia; preleteli cez Dunaj, cez tú šíru vodu, sadli tam za pomedzím slovenského rodu.

Duní Dunaj a luna za lunou sa valí
OUTPUT
Zleteli orly z Tatry, tiahnu na podolia, ponad vysoké hory, ponad rovné polia; preleteli cez Dunaj, cez tú šíru vodu, sadli tam za pomedzím slovenského rodu.

Duní Dunaj a luna za lunou sa valí: nad ním svieti pevný hrad na vysokom bralí. Pod tým hradom je zem slovenská, na nej je Slovensko naše.

Na tom vrchu leží slovenský národ, pod tým bralom leží krajina slovenská. A pod tým hradom leží i jeho sláva: na tom vrchu, na tom brale leží sláva slovenského národa.
```

Jokes are a bit hit or miss. Very often they contain vulgarisms and/or are highly politically incorrect. This one is rather the exception than the rule.
```
PROMPT
Nasledujú najlepšie vtipy.

Prečo
OUTPUT
Nasledujú najlepšie vtipy.

Prečo má policajt pri sebe vždy dva nože? Jeden je na sebaobranu, druhý na boj zblízka.
```

The infamous unicorn prompt from the GPT-2 paper.
```
PROMPT
V šokujúcom náleze vedci objavili stádo jednorožcov žijúcich v odľahlom, predtým nepreskúmanom údolí, v Andách. Ešte prekvapivejšie pre vedcov bol fakt, že jednorožce hovorili plynule po slovensky.

OUTPUT
V šokujúcom náleze vedci objavili stádo jednorožcov žijúcich v odľahlom, predtým nepreskúmanom údolí, v Andách. Ešte prekvapivejšie pre vedcov bol fakt, že jednorožce hovorili plynule po slovensky.

V Andách ich totiž žilo približne 700 jedincov, ktoré sa živili výhradne materským mliekom. Slováci sú tak pravdepodobne prvými Európanmi, ktorí v Andách stretli jednorožca. "Je to dôkaz, že tieto zvieratá sú inteligentné a že žijú v Andách už stovky rokov," povedal pre webový portál televízie JOJ profesor geológie, geografie a zoológie, Milan Kováč.

Podľa profesora Kováča si v Andách zvieratá vytvárajú svoj vlastný jazyk. Je to zároveň dôkaz, že jednorožce žili v minulosti aj v slovenských pohoriach. "Jednorožce sa tam síce vyskytovali, ale neboli tak dobre preskúmané, ako teraz v Andách."

Na Slovensku však ľudia o jednorožcoch donedávna vedeli veľmi málo.<|endoftext|>
```

Since the dataset contains profanity, politically incorrect language, and (unintentionally) even bits of text in Czech, the model can generate them to some extent too. Here's an example of the model's output when the prompt is in Czech:
```
>>> prompt = "Věta nesmí být sprostá a musí být zcela"
>>> encoded_input = tokenizer(prompt, return_tensors='pt')
>>> output = model.generate(**encoded_input, max_length=16)
>>> tokenizer.decode(output[0])
'Věta nesmí být sprostá a musí být zcela pravdivá.'
```

## Citation and Related Information

This was done as a moonlighting project during the summer of 2021 to better understand transformers. I didn't have much free time to open-source it properly, so it all sat on my hard drive until now :) Based on the popularity and interest in this model, I might release _substantially_ larger versions of Slovak GPT-J models that are way more capable.

If you use this model or have any questions about it, feel free to hit me up on [Twitter](https://twitter.com/miloskondela) or check out my [GitHub](https://github.com/kondela) profile.

### BibTeX entry
To cite this model:
```bibtex
@misc{slovak-gpt-j-1.4B,
  author = {Kondela, Milos},
  title = {{Slovak GPT-J-1.4B}},
  howpublished = {\url{https://huggingface.co/Milos/slovak-gpt-j-1.4B}},
  year = 2022,
  month = February
}
```

To cite the codebase that trained this model:
```bibtex
@misc{mesh-transformer-jax,
  author = {Wang, Ben},
  title = {{Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX}},
  howpublished = {\url{https://github.com/kingoflolz/mesh-transformer-jax}},
  year = 2021,
  month = May
}
```

## Acknowledgements
This project was generously supported by the [TPU Research Cloud (TRC) program](https://sites.research.google/trc/about/). Shoutout also goes to [Ben Wang](https://github.com/kingoflolz) and the great [EleutherAI community](https://www.eleuther.ai/).