---
language: de
widget:
- text: "Hallo, ich bin ein Sprachmodell"

license: gnu 
---

<h2>GPT-2 Model for the German Language</h2>


Model name: Tanhim/gpt2-model-de <br />
Language: German (Deutsch) <br />
Model page: https://huggingface.co/Tanhim/gpt2-model-de <br />
Dataset: Ten Thousand German News Articles Dataset <br />

### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, I
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generation = pipeline('text-generation', model='Tanhim/gpt2-model-de')
>>> set_seed(42)
>>> generation("Hallo, ich bin ein Sprachmodell,", max_length=30, num_return_sequences=5)
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Tanhim/gpt2-model-de")
model = AutoModel.from_pretrained("Tanhim/gpt2-model-de")

text = "Ersetzen Sie mich durch einen beliebigen Text, den Sie wünschen."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)  # output.last_hidden_state holds the per-token features
```
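If you want more control than the pipeline offers, you can also generate text directly with `model.generate`. The sketch below is illustrative: the sampling settings (`do_sample`, `top_k`) are example choices, not settings from this card, and `pad_token_id` is set explicitly because GPT-2 defines no pad token.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Tanhim/gpt2-model-de")
model = AutoModelForCausalLM.from_pretrained("Tanhim/gpt2-model-de")

prompt = "Hallo, ich bin ein Sprachmodell,"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample one continuation of up to 30 tokens (including the prompt).
generated = model.generate(
    **inputs,
    max_length=30,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

Because `generate` appends tokens after the prompt, the decoded string starts with the prompt itself, followed by the sampled German continuation.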