---
language: hr
tags:
- GPT-2
datasets:
- hrwac
---
If you use this model for your own tasks, please share your results in the community tab.


With TensorFlow you can load the model and generate text as follows:
```python
from transformers import GPT2Tokenizer, TFGPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("domsebalj/GPcroaT")
model = TFGPT2LMHeadModel.from_pretrained("domsebalj/GPcroaT")

text = "Zamijeni ovaj tekst vlastitim"  # "Replace this text with your own"

# Encode the prompt as TensorFlow tensors
input_ids = tokenizer.encode(text, return_tensors='tf')

# Generate with beam search and return several candidate sequences
beam_output = model.generate(
  input_ids,
  max_length=80,
  min_length=10,
  num_beams=10,
  temperature=5.7,
  no_repeat_ngram_size=2,
  num_return_sequences=5,
  repetition_penalty=7.5,
  length_penalty=1.5,
  top_k=50
)

# Decode each generated sequence back to text
output = []
for i in beam_output:
  output.append(tokenizer.decode(i))

print(output)
```
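
The checkpoint can also be used from PyTorch. The sketch below is not part of the original card; it assumes the standard `GPT2Tokenizer`/`GPT2LMHeadModel` classes from `transformers` and keeps the same repository name, and it passes `from_tf=True` in case only TensorFlow weights are published for this repo.

```python
# Minimal PyTorch sketch (assumption, not from the original card).
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("domsebalj/GPcroaT")
# from_tf=True converts TensorFlow weights on the fly; drop it if
# the repository also provides native PyTorch weights.
model = GPT2LMHeadModel.from_pretrained("domsebalj/GPcroaT", from_tf=True)

text = "Zamijeni ovaj tekst vlastitim"  # "Replace this text with your own"
input_ids = tokenizer.encode(text, return_tensors='pt')

# Beam search with the same anti-repetition settings as the TF example
beam_output = model.generate(
  input_ids,
  max_length=80,
  min_length=10,
  num_beams=10,
  no_repeat_ngram_size=2,
  num_return_sequences=5,
)

print([tokenizer.decode(ids, skip_special_tokens=True) for ids in beam_output])
```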