Tanhim committed
Commit 5517e7a
Parent: c249720

Update README.md

Files changed (1): README.md +16 -10
README.md CHANGED
@@ -6,16 +6,22 @@ language: German or Deutsch <br />
 thumbnail: "https://huggingface.co/Tanhim/gpt2-model-de" <br />
 datasets: Ten Thousand German News Articles Dataset <br />
 
-
- * How to use from the 🤗/transformers library <br />
-
+### How to use
+You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we
+set a seed for reproducibility:
+```python
+>>> from transformers import pipeline, set_seed
+>>> generation= pipeline('text-generation', model='Tanhim/gpt2-model-de')
+>>> set_seed(42)
+>>> generation("Hallo, ich bin ein Sprachmodell,", max_length=30, num_return_sequences=5)
+
+```
+Here is how to use this model to get the features of a given text in PyTorch:
+```python
 from transformers import AutoTokenizer, AutoModelWithLMHead <br />
-
 tokenizer = AutoTokenizer.from_pretrained("Tanhim/gpt2-model-de") <br />
 model = AutoModelWithLMHead.from_pretrained("Tanhim/gpt2-model-de") <br />
-
- * How to use from the pipeline <br />
-
- from transformers import pipeline <br />
-
- text-generation = pipeline("text-generation", model="Tanhim/gpt2-model-de", tokenizer="anonymous-german-nlp/german-gpt2") <br />
+text = "Ersetzen Sie mich durch einen beliebigen Text, den Sie wünschen."
+encoded_input = tokenizer(text, return_tensors='pt')
+output = model(**encoded_input)
+```
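The new README justifies `set_seed(42)` by noting that sampling-based generation is random. The underlying pattern can be sketched with Python's standard `random` module instead of `transformers`, so it runs without downloading the checkpoint; the `sample_tokens` helper is a stand-in for a sampling decoder, not part of any library:

```python
import random

def sample_tokens(prompt, rng, n=5):
    # Stand-in for a sampling decoder: each "token" is drawn pseudo-randomly,
    # so the continuation varies between runs unless the RNG is seeded.
    vocab = ["Hallo", "ich", "bin", "ein", "Sprachmodell", ","]
    return prompt + " " + " ".join(rng.choice(vocab) for _ in range(n))

# Fixing the seed fixes the RNG state, so two "generations" agree --
# the same effect set_seed(42) has on the transformers pipeline above.
first = sample_tokens("Hallo,", random.Random(42))
second = sample_tokens("Hallo,", random.Random(42))
assert first == second
```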
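Note that the PyTorch snippet added by this commit keeps `<br />` tags inside its code fence and uses `AutoModelWithLMHead`, which newer transformers releases deprecate in favour of `AutoModelForCausalLM` for GPT-2-style models. A cleaned-up sketch (the `get_features` helper name is hypothetical; the import and checkpoint download are deferred inside the function because the model must be fetched from the Hub on first call):

```python
def get_features(text, model_name="Tanhim/gpt2-model-de"):
    """Return the model's forward-pass output for `text`.

    AutoModelWithLMHead (used in the commit) is deprecated;
    AutoModelForCausalLM is its replacement for causal LMs like GPT-2.
    """
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    encoded_input = tokenizer(text, return_tensors="pt")
    return model(**encoded_input)
```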