Stojanco Tudzarski committed on
Commit
aa732de
1 Parent(s): 0f4bf5f

Initial commit

README.md ADDED
@@ -0,0 +1,68 @@
+ ---
+ language:
+ - mk
+ thumbnail: https://huggingface.co/macedonizer/mk-roberta-base/blaze-koneski.jpg
+ tags:
+ - causal-lm
+ license: apache-2.0
+ datasets:
+ - wiki-mk
+ - time-mk-news-2010-2015
+ ---
+
+ # mk-gpt2
+ You can test the model's full generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
+ This is a model pretrained on Macedonian text using a causal language modeling (CLM) objective. The underlying GPT-2
+ architecture was introduced in
+ [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
+ and first released at [this page](https://openai.com/blog/better-language-models/).
+
+ ## Model description
+ mk-gpt2 is a transformers model pretrained on a large corpus of Macedonian data in a self-supervised fashion. This
+ means it was pretrained on raw texts only, with no humans labelling them in any way (which is why it can use lots of
+ publicly available data), using an automatic process to generate inputs and labels from those texts. More precisely,
+ it was trained to guess the next word in sentences: the inputs are sequences of continuous text of a certain length,
+ and the targets are the same sequences shifted one token (a word or piece of a word) to the right. The model
+ internally uses a masking mechanism to ensure that the prediction for token `i` uses only the inputs from `1` to `i`,
+ never the future tokens.
+ This way, the model learns an inner representation of the Macedonian language that can then be used to extract
+ features useful for downstream tasks. The model is, however, best at what it was pretrained for: generating text from
+ a prompt.
+
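+ To make the shift-by-one objective above concrete, here is a minimal illustrative sketch (the example sentence is
+ made up; this is not the actual training code):
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained('macedonizer/mk-gpt2')
+
+ # Tokenize a toy sentence; ids is a 1-D tensor of token ids.
+ ids = tokenizer('Скопје е главен град', return_tensors='pt')['input_ids'][0]
+
+ # Inputs are all tokens but the last; labels are the same sequence shifted
+ # one position to the right, so position i is trained to predict token i+1.
+ inputs, labels = ids[:-1], ids[1:]
+
+ for inp, lab in zip(inputs.tolist(), labels.tolist()):
+     print(f'{tokenizer.decode([inp])!r} -> {tokenizer.decode([lab])!r}')
+ ```
+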
+ ### How to use
+ Here is how to use this model to generate text from a prompt in PyTorch:
+
+ ```python
+ import random
+
+ # AutoModelWithLMHead is deprecated in recent transformers releases;
+ # AutoModelForCausalLM is the current equivalent for GPT-2-style models.
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained('macedonizer/mk-gpt2')
+ model = AutoModelForCausalLM.from_pretrained('macedonizer/mk-gpt2')
+
+ input_text = 'Скопје е '
+
+ if len(input_text) == 0:
+     # No prompt given: start generation from a randomly chosen BOS token id.
+     output = model.generate(
+         bos_token_id=random.randint(1, 50000),
+         do_sample=True,
+         top_k=50,
+         max_length=1024,
+         top_p=0.95,
+         num_return_sequences=1,
+     )
+ else:
+     # Encode the prompt and let the model continue it.
+     encoded_input = tokenizer(input_text, return_tensors="pt")
+     output = model.generate(
+         **encoded_input,
+         do_sample=True,
+         top_k=50,
+         max_length=1024,
+         top_p=0.95,
+         num_return_sequences=1,
+     )
+
+ # Decode each generated sequence back to text, dropping special tokens.
+ decoded_output = []
+ for sample in output:
+     decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))
+
+ print(decoded_output)
+ ```
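+
+ A brief aside on these decoding settings: `do_sample=True` with `top_k=50` restricts each sampling step to the 50
+ most probable tokens, and `top_p=0.95` further limits it to the smallest set of tokens whose cumulative probability
+ exceeds 0.95; together they tend to give varied but still coherent continuations.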
added_tokens.json ADDED
@@ -0,0 +1 @@
+ {"<|endoftext|>": 52000}
blaze-koneski.jpg ADDED
config.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "activation_function": "gelu_new",
+   "architectures": [
+     "GPT2LMHeadModel"
+   ],
+   "attn_pdrop": 0.1,
+   "bos_token_id": 0,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 2,
+   "gradient_checkpointing": false,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-05,
+   "model_type": "gpt2",
+   "n_ctx": 1024,
+   "n_embd": 768,
+   "n_head": 12,
+   "n_inner": null,
+   "n_layer": 12,
+   "n_positions": 1024,
+   "resid_pdrop": 0.1,
+   "scale_attn_weights": true,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "transformers_version": "4.6.1",
+   "use_cache": true,
+   "vocab_size": 52000
+ }
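
The configuration above describes a GPT-2 small-sized architecture: 12 layers, 12 attention heads, 768-dimensional embeddings, a 1024-token context window, and a 52,000-token vocabulary. A minimal sketch of inspecting these values from the Hub (assuming the `macedonizer/mk-gpt2` repo id used in the README):

```python
from transformers import AutoConfig

# Load the config.json shown above directly from the Hub.
config = AutoConfig.from_pretrained('macedonizer/mk-gpt2')

print(config.n_layer, config.n_head, config.n_embd)  # 12 12 768
print(config.n_positions, config.vocab_size)         # 1024 52000
```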
lets-talk-about-nlp-blaze-koneski.jpg ADDED
merges.txt ADDED
The diff for this file is too large to render. See raw diff
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:abe80668ea9a10639381bc4a94a2b4f766d0c840d145c39042d5ab8baec88060
+ size 515758313
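
The three lines above are a Git LFS pointer rather than the weights themselves: `oid` is the SHA-256 of the real file and `size` is its length in bytes (about 516 MB). A minimal sketch of fetching the actual binary through the Hub client (assuming the `huggingface_hub` library):

```python
from huggingface_hub import hf_hub_download

# Resolves the LFS pointer and downloads the ~516 MB weights file to the local cache.
path = hf_hub_download(repo_id='macedonizer/mk-gpt2', filename='pytorch_model.bin')
print(path)
```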
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>", "mask_token": "<mask>"}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "special_tokens_map_file": null, "name_or_path": "/content/drive/MyDrive/macedonizer.me/mk-gpt2/mk_gpt2_tok", "errors": "replace"}
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4302bb614aa0da6e9e29a019d16a572a817d35ea8af0c74ddd4b7c371e89772e
+ size 2479
vocab.json ADDED
The diff for this file is too large to render. See raw diff