macedonizer committed
Commit af91809
1 Parent(s): b0d62ce

Update README.md

Files changed (1)
  1. README.md +9 -11
README.md CHANGED
@@ -1,21 +1,20 @@
  ---
  language:
- - mk
- thumbnail: https://huggingface.co/macedonizer/mk-roberta-base/blaze-koneski.jpg
+ - sl
+ thumbnail: https://huggingface.co/macedonizer/mkgpt2/lets-talk-about-nlp.jpg
  license: Apache 2.0
  datasets:
- - wiki-mk
- - time-mk-news-2010-2015
+ - wiki-sl
  ---

- # mk-gpt2
+ # sl-gpt2
  Test the whole generation capabilities here: https://transformer.huggingface.co/doc/gpt2-large
  Pretrained model on English language using a causal language modeling (CLM) objective. It was introduced in
  [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf)
  and first released at [this page](https://openai.com/blog/better-language-models/).

  ## Model description
- mk-gpt2 is a transformers model pretrained on a very large corpus of Macedonian data in a self-supervised fashion. This
+ sl-gpt2 is a transformers model pretrained on a very large corpus of Slovenian data in a self-supervised fashion. This
  means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots
  of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely,
  it was trained to guess the next word in sentences.
@@ -32,10 +31,10 @@ Here is how to use this model to get the features of a given text in PyTorch:
  import random
  from transformers import AutoTokenizer, AutoModelWithLMHead

- tokenizer = AutoTokenizer.from_pretrained('macedonizer/mk-gpt2') \
- model = AutoModelWithLMHead.from_pretrained('macedonizer/mk-gpt2')
+ tokenizer = AutoTokenizer.from_pretrained('macedonizer/sl-gpt2') \
+ model = AutoModelWithLMHead.from_pretrained('macedonizer/sl-gpt2')

- input_text = 'Скопје е '
+ input_text = 'Ljubljana '

  if len(input_text) == 0: \
  encoded_input = tokenizer(input_text, return_tensors="pt") \
@@ -59,8 +58,7 @@ else: \
  num_return_sequences=1, \
  )

- decoded_output = [] \
- for sample in output: \
+ decoded_output = [] \\nfor sample in output: \
  decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))

  print(decoded_output)
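As committed, the README snippet does not run: statements carry trailing backslash continuations, the hunks elide the body of the `else:` branch that calls the generation API, and the new version collapses two statements into the single garbled line `decoded_output = [] \\nfor sample in output: \`. Below is a minimal runnable reconstruction; `num_return_sequences=1` is taken from the diff, while `do_sample`, `max_length`, `top_k`, and `top_p` are illustrative assumptions about the elided `model.generate` call.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead
# Note: AutoModelWithLMHead is deprecated in recent transformers releases;
# AutoModelForCausalLM is the drop-in replacement for a GPT-2-style model.

tokenizer = AutoTokenizer.from_pretrained('macedonizer/sl-gpt2')
model = AutoModelWithLMHead.from_pretrained('macedonizer/sl-gpt2')

input_text = 'Ljubljana '

# The README branches on an empty prompt, but the diff elides most of both
# branches, so this sketch keeps the single non-empty-prompt path.
encoded_input = tokenizer(input_text, return_tensors="pt")

output = model.generate(
    **encoded_input,
    do_sample=True,           # assumed: sampling rather than greedy decoding
    max_length=50,            # assumed value
    top_k=50,                 # assumed value
    top_p=0.95,               # assumed value
    num_return_sequences=1,   # shown in the diff
)

# Decode each returned sequence back to plain text.
decoded_output = []
for sample in output:
    decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))

print(decoded_output)
```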
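The same tokenize-generate-decode loop can also be collapsed with the `transformers` pipeline API, which handles all three steps internally; a short sketch, again with an assumed `max_length`:

```python
from transformers import pipeline

# The text-generation pipeline wraps tokenization, generation, and decoding.
generator = pipeline('text-generation', model='macedonizer/sl-gpt2')
print(generator('Ljubljana ', max_length=50, num_return_sequences=1))  # max_length assumed
```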
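The description's claim that the model "was trained to guess the next word in sentences" can be checked directly by inspecting the next-token distribution after a prompt; a minimal sketch (the Slovenian prompt is a hypothetical example):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained('macedonizer/sl-gpt2')
model = AutoModelForCausalLM.from_pretrained('macedonizer/sl-gpt2')

# Score the distribution over the next token after a short prompt.
inputs = tokenizer('Ljubljana je', return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, vocab_size)

# Softmax over the vocabulary at the last position gives next-word probabilities.
next_token_probs = logits[0, -1].softmax(dim=-1)
top = next_token_probs.topk(5)
for prob, idx in zip(top.values, top.indices):
    print(f'{tokenizer.decode(idx)!r}: {prob.item():.3f}')
```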