Text Generation
Transformers
PyTorch
gptj
Inference Endpoints
juliensalinas committed on
Commit 2cc7a94
1 Parent(s): 8bdd61d

Update README.md

Files changed (1): README.md +39 -0
README.md CHANGED
@@ -2,6 +2,8 @@
  license: gpl-3.0
  ---

+ # Description
+
  This model demonstrates that GPT-J can work perfectly well as an "instruct" model when properly fine-tuned.

  We fine-tuned GPT-J on an instruction dataset created by the [Stanford Alpaca team](https://github.com/tatsu-lab/stanford_alpaca). You can find the original dataset [here](https://github.com/tatsu-lab/stanford_alpaca/blob/main/alpaca_data.json).
@@ -31,3 +33,40 @@ Correct spelling and grammar from the following text.
  I do not wan to go
  ```
+ Which returns the following:
+
+ ```text
+ I do not want to go.
+ ```
+
+ ## How To Use The Model?
+
+ Here is how you can use the model in FP16 with the text generation pipeline:
+
+ ```python
+ from transformers import pipeline
+ import torch
+
+ # Load the model in FP16 on the first GPU
+ generator = pipeline(model="nlpcloud/instruct-gpt-j", torch_dtype=torch.float16, device=0)
+
+ prompt = "Correct spelling and grammar from the following text.\nI do not wan to go"
+
+ print(generator(prompt))
+ ```
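+
+ The pipeline returns a list with one dictionary per completion; the generated text is under the `generated_text` key, so `generator(prompt)[0]["generated_text"]` gives you the string directly.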
+
+ You can also use the `generate()` function directly:
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch
+
+ # Load the tokenizer, and the model in FP16 on GPU
+ tokenizer = AutoTokenizer.from_pretrained("nlpcloud/instruct-gpt-j")
+ model = AutoModelForCausalLM.from_pretrained("nlpcloud/instruct-gpt-j", torch_dtype=torch.float16).cuda()
+
+ prompt = "Correct spelling and grammar from the following text.\nI do not wan to go"
+
+ # Tokenize the prompt and move the input ids to the GPU
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(inputs.input_ids.cuda())
+
+ print(tokenizer.decode(outputs[0]))
+ ```
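+
+ By default `generate()` decodes greedily and stops at a short default maximum length. You can pass the standard generation parameters to control length and sampling; the values below are illustrative only, not settings recommended by this card:
+
+ ```python
+ # Illustrative generation parameters (assumptions, not the card's recommendations)
+ outputs = model.generate(
+     inputs.input_ids.cuda(),
+     max_new_tokens=64,  # upper bound on newly generated tokens
+     do_sample=True,     # sample instead of greedy decoding
+     top_p=0.9,          # nucleus sampling
+     temperature=0.7,
+ )
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```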
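+
+ For reference, the Alpaca dataset mentioned at the top of this card is a JSON list of entries with `instruction`, `input`, and `output` fields. Here is a minimal sketch of loading it and assembling a prompt in the same shape as the example above (the exact template used during fine-tuning is an assumption here, not something this card specifies):
+
+ ```python
+ import json
+
+ # Assumes alpaca_data.json has been downloaded locally from the link above
+ with open("alpaca_data.json") as f:
+     data = json.load(f)
+
+ entry = data[0]  # each entry has "instruction", "input" and "output" keys
+
+ # Join the instruction and its (possibly empty) input with a newline,
+ # mirroring the prompt shape used in the examples above
+ prompt = entry["instruction"]
+ if entry["input"]:
+     prompt += "\n" + entry["input"]
+
+ print(prompt)
+ ```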