juewang committed on
Commit
adf6322
1 Parent(s): 3becaca

Update README.md

Files changed (1)
  1. README.md +1 -3
README.md CHANGED
@@ -87,7 +87,7 @@ We incorporated a collection of open techniques and datasets to build GPT-JT:
 
 With the help of techniques mentioned above, GPT-JT significantly improves the performance of classification tasks over the original GPT-J, and even outperforms most 100B+ parameter models!
 
-***Please try out our [Online Demo](https://huggingface.co/spaces/togethercomputer/GPT-JT)!***
+***<p style="font-size: 24px">Please try out our [Online Demo](https://huggingface.co/spaces/togethercomputer/GPT-JT)!</p>***
 
 # Quick Start
 
@@ -96,9 +96,7 @@ from transformers import pipeline
 pipe = pipeline(model='togethercomputer/GPT-JT-6B-v1')
 pipe('''"I love this!" Is it positive? A:''')
 ```
-
 or
-
 ```python
 from transformers import AutoTokenizer, AutoModelForCausalLM
 tokenizer = AutoTokenizer.from_pretrained("togethercomputer/GPT-JT-6B-v1")
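For context on the quick-start snippet in the diff: it prompts the model in a `"text" Is it positive? A: label` classification style. A minimal sketch (pure Python, no model download) of how such few-shot prompts are typically assembled — the helper name and the example sentences are illustrative, not from the README:

```python
def build_classification_prompt(examples, query):
    """Assemble a few-shot prompt in the '"text" Is it positive? A: label'
    style used by the README's quick-start example."""
    lines = [f'"{text}" Is it positive? A: {label}' for text, label in examples]
    # The final line leaves the answer blank for the model to complete.
    lines.append(f'"{query}" Is it positive? A:')
    return "\n".join(lines)

few_shot = [("Great movie!", "yes"), ("Terrible service.", "no")]
prompt = build_classification_prompt(few_shot, "I love this!")
print(prompt)
```

The resulting string can be passed directly to the `pipeline` call shown in the diff in place of the single-example prompt.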