Text Generation
Transformers
PyTorch
Safetensors
English
gpt2
Inference Endpoints
text-generation-inference
lgaalves committed
Commit f3efe18
1 Parent(s): 554f9b2

Update README.md

Files changed (1)
  1. README.md +1 -3
README.md CHANGED
@@ -9,8 +9,6 @@ language:
 pipeline_tag: text-generation
 ---
 
-
-
 # GPT2_platypus-dolly-guanaco
 
 **gpt2_platypus-dolly-guanaco** is an instruction fine-tuned model based on the GPT-2 transformer architecture.
@@ -39,7 +37,7 @@ We use state-of-the-art [Language Model Evaluation Harness](https://github.com/E
 ```python
 # Use a pipeline as a high-level helper
 >>> from transformers import pipeline
->>> pipe = pipeline("text-generation", model="lgaalves/lgaalves/gpt2_platypus-dolly-guanaco")
+>>> pipe = pipeline("text-generation", model="lgaalves/gpt2_platypus-dolly-guanaco")
 >>> question = "What is a large language model?"
 >>> answer = pipe(question)
 >>> print(answer[0]['generated_text'])
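
For reference, the corrected snippet from this commit can be run end to end as below. This is a minimal sketch, not part of the diff itself: the `max_new_tokens` value is an illustrative assumption added only to keep the example's output short.

```python
# Corrected usage from this commit: the "lgaalves/" namespace appears
# only once in the model id.
from transformers import pipeline

# Load the fine-tuned GPT-2 model through the high-level text-generation pipeline.
pipe = pipeline("text-generation", model="lgaalves/gpt2_platypus-dolly-guanaco")

question = "What is a large language model?"
# max_new_tokens is an assumed, illustrative value; it is not in the README.
answer = pipe(question, max_new_tokens=64)
print(answer[0]["generated_text"])
```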