---
license: apache-2.0
datasets:
- vicgalle/alpaca-gpt4
language:
- en
pipeline_tag: text-generation
---

This model is a fine-tuned version of `gpt2`, trained on the `vicgalle/alpaca-gpt4` dataset.

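The exact training procedure is not documented here; purely as a hypothetical sketch, a fine-tune like this could be set up with the standard `transformers` Trainer (all hyperparameters below are illustrative assumptions, not the author's settings):

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Hypothetical reproduction sketch; not the documented training recipe.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = load_dataset("vicgalle/alpaca-gpt4", split="train")

def tokenize(batch):
    # Assumes the dataset's pre-formatted `text` column, which combines
    # instruction, input, and output into one prompt string.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-alpaca",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    # mlm=False gives the causal-LM collator: labels are the shifted inputs.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```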
## Model description

GPT-2 is a transformer model pre-trained on a very large corpus of English data in a self-supervised fashion. This
means it was pre-trained on raw texts only, with no humans labeling them in any way (which is why it can use lots
of publicly available data), with an automatic process generating inputs and labels from those texts. More
precisely, it was trained to guess the next word in sentences.

Concretely, inputs are sequences of continuous text of a certain length, and the targets are the same sequences
shifted one token (a word or piece of a word) to the right. The model uses a masking mechanism to make sure the
predictions for token `i` only use the inputs from `1` to `i` and never the future tokens.

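As a small illustration (not part of the original card), the training targets for this objective are just the input ids shifted one position:

```python
from transformers import AutoTokenizer

# Toy example: for causal language modeling, the target at position i
# is simply the token at position i + 1.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

ids = tokenizer.encode("The quick brown fox jumps over the lazy dog")
inputs, targets = ids[:-1], ids[1:]

for inp, tgt in zip(inputs, targets):
    print(f"{tokenizer.decode([inp])!r} -> {tokenizer.decode([tgt])!r}")
```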
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks. The model is nonetheless best at what it was trained for: generating text from a
prompt.

### To use this model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "Sharathhebbar24/convo_bot_gpt2"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained("gpt2")

def generate_text(prompt):
    # Encode the prompt, generate up to 64 tokens, and decode the result.
    inputs = tokenizer.encode(prompt, return_tensors="pt")
    outputs = model.generate(inputs, max_length=64,
                             pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

prompt = """
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: Who is the world's most famous painter?
###
"""
res = generate_text(prompt)
print(res)
```