Sharathhebbar24 committed on
Commit f6df300
1 Parent(s): 63f8af3

Update README.md

Files changed (1)
  1. README.md +28 -10
README.md CHANGED
@@ -10,16 +10,34 @@ tags:
 
 This model is a fine-tuned version of `gpt2-medium` using the `databricks/databricks-dolly-15k` dataset.
 
+ ## Model description
+ 
+ GPT-2 is a transformers model pre-trained on a very large corpus of English data in a self-supervised fashion. This
+ means it was pre-trained on the raw texts only, with no humans labeling them in any way (which is why it can use lots
+ of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely,
+ it was trained to guess the next word in sentences.
+ 
+ Inputs are sequences of continuous text of a certain length, and the targets are the same sequences shifted one
+ token (word or piece of a word) to the right. The model uses a masking mechanism to make sure the prediction for
+ token `i` only uses the inputs from `1` to `i` and not the future tokens.
+ 
+ This way, the model learns an inner representation of the English language that can then be used to extract features
+ useful for downstream tasks. The model is nonetheless best at what it was trained for, which is generating texts from
+ a prompt.
+ 
 
 ```python
- >>> from transformers import pipeline, set_seed
- >>> generator = pipeline('text-generation', model='Sharathhebbar24/Instruct_GPT')
- >>> set_seed(42)
- >>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)
- 
- [{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
- {'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
- {'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
- {'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
- {'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
+ >>> from transformers import AutoTokenizer, AutoModelForCausalLM
+ >>> model_name = "Sharathhebbar24/Instruct_GPT"
+ >>> model = AutoModelForCausalLM.from_pretrained(model_name)
+ >>> tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")
+ >>> def generate_text(prompt):
+ ...     inputs = tokenizer.encode(prompt, return_tensors='pt')
+ ...     outputs = model.generate(inputs, max_length=64, pad_token_id=tokenizer.eos_token_id)
+ ...     generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
+ ...     # truncate the generation at the last complete sentence
+ ...     return generated[:generated.rfind(".") + 1]
+ 
+ >>> generate_text("Should I Invest in stocks")
+ 
+ "Should I Invest in stocks? Investing in stocks is a great way to diversify your portfolio. You can invest in stocks based on the market's performance, or you can invest in stocks based on the company's performance."
  ```
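
A note on the objective mentioned in the new "Model description" section: the snippet below is a minimal, hypothetical sketch (not this repository's training code) of how that shifted-target, causal-mask objective looks with the `transformers` API. The labels passed to the model are simply the input ids, and the library shifts them one position internally; the example sentence is made up.

```python
# Illustrative only: the causal language-modeling objective described above.
# The labels are the input ids themselves; the model shifts them one token
# to the right internally, so the prediction for token i only uses tokens
# 1..i (enforced by the causal attention mask).
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")
model = AutoModelForCausalLM.from_pretrained("gpt2-medium")

text = "Instruction: name three primary colors. Response: red, yellow and blue."
enc = tokenizer(text, return_tensors="pt")

# Passing labels=input_ids computes the next-token prediction loss.
outputs = model(**enc, labels=enc["input_ids"])
print(outputs.loss)
```

Fine-tuning on an instruction dataset such as `databricks/databricks-dolly-15k` would, in the usual setup, run this same next-token objective over the formatted instruction/response pairs.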