---
language:
- mk
thumbnail: https://huggingface.co/macedonizer/blaze-koneski/blaze-koneski.jpg
license: apache-2.0
datasets:
- wiki-mk
- blaze-koneski-poetry
---

# blaze-koneski

A GPT-2-type model: macedonizer/mk-gpt-2 fine-tuned on Blaze Koneski's poetry.

## About Blaze Koneski

Blaze Koneski was born in a village near Prilep in 1921. He studied philology at Skopje University and worked there as a professor. He was the first chairman of the Macedonian Academy of Sciences and Arts, a corresponding member of the Yugoslav Academy of Sciences and Arts as well as of the Serbian and Slovene Academies, and an honorary doctor of the Universities of Chicago and Krakow.

He wrote poetry, short stories, and essays, as well as scholarly works, many of them on the Macedonian language. He edited the Dictionary of the Macedonian Language and translated Heine and Shakespeare. His works have been translated into Serbian, Croatian, Slovene, Albanian, Turkish, Hungarian, French, Russian, Italian, Greek, Polish, Romanian, German, and English.

He won numerous prizes, including the Golden Wreath of the Struga Poetry Evenings.

### How to use

Here is how to use this model to generate text in PyTorch:

```python
import random

from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained('macedonizer/blaze-koneski')
model = AutoModelWithLMHead.from_pretrained('macedonizer/blaze-koneski')

input_text = 'Москва '

if len(input_text) == 0:
    # No prompt: start generation from a random BOS token
    output = model.generate(
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )
else:
    # Condition generation on the encoded prompt
    encoded_input = tokenizer(input_text, return_tensors="pt")
    output = model.generate(
        **encoded_input,
        bos_token_id=random.randint(1, 50000),
        do_sample=True,
        top_k=50,
        max_length=1024,
        top_p=0.95,
        num_return_sequences=1,
    )

decoded_output = []
for sample in output:
    decoded_output.append(tokenizer.decode(sample, skip_special_tokens=True))

print(decoded_output)
```
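
The `top_k` and `top_p` arguments above restrict which tokens are eligible at each sampling step. A minimal, illustrative sketch of that filtering idea in pure Python (this is not the transformers implementation, and `filter_candidates` is a hypothetical helper; the real library works on logit tensors):

```python
# Illustrative sketch of top-k / top-p (nucleus) candidate filtering:
# keep only the top_k most probable tokens, then keep the smallest
# prefix of those whose cumulative probability reaches top_p.
def filter_candidates(probs, top_k, top_p):
    # probs: dict mapping token -> probability
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    kept, cumulative = [], 0.0
    for token, p in ranked:
        kept.append(token)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

probs = {'а': 0.5, 'б': 0.3, 'в': 0.15, 'г': 0.05}
print(filter_candidates(probs, top_k=3, top_p=0.9))  # ['а', 'б', 'в']
```

A lower `top_p` (or `top_k`) narrows the candidate set and makes generations more conservative; the values in the example above (`top_k=50`, `top_p=0.95`) leave sampling fairly open.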