Text Generation
Transformers
PyTorch
gptj
Inference Endpoints
juliensalinas committed on
Commit ac7834d
1 Parent(s): 96966e4

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -49,7 +49,7 @@ Using the model in fp16 with the text generation pipeline, here is what you can
 from transformers import pipeline
 import torch
 
-generator = pipeline(model="nlpcloud/instruct-gpt-j", torch_dtype=torch.float16, device=0)
+generator = pipeline(model="nlpcloud/instruct-gpt-j-fp16", torch_dtype=torch.float16, device=0)
 
 prompt = "Correct spelling and grammar from the following text.\nI do not wan to go\n"
 
@@ -62,8 +62,8 @@ You can also use the `generate()` function. Here is what you can do:
 from transformers import AutoTokenizer, AutoModelForCausalLM
 import torch
 
-tokenizer = AutoTokenizer.from_pretrained('nlpcloud/instruct-gpt-j')
-generator = AutoModelForCausalLM.from_pretrained("nlpcloud/instruct-gpt-j",torch_dtype=torch.float16).cuda()
+tokenizer = AutoTokenizer.from_pretrained('nlpcloud/instruct-gpt-j-fp16')
+generator = AutoModelForCausalLM.from_pretrained("nlpcloud/instruct-gpt-j-fp16",torch_dtype=torch.float16).cuda()
 
 prompt = "Correct spelling and grammar from the following text.\nI do not wan to go\n"
 