teknium committed on
Commit
7c803c4
1 Parent(s): 16817c2

Create README.md

Files changed (1)
  1. README.md +100 -0
README.md ADDED
@@ -0,0 +1,100 @@
---
license: cc-by-sa-4.0
datasets:
- bigcode/the-stack-dedup
- sahil2801/CodeAlpaca-20k
- teknium/GPTeacher-CodeInstruct
model-base:
- replit/replit-code-v1-3b
tags:
- code
- instruct
- self instruct
language:
- code
programming_language:
- Markdown
- Java
- JavaScript
- Python
- TypeScript
- PHP
- SQL
- JSX
- reStructuredText
- Rust
- C
- CSS
- Go
- C++
- HTML
- Vue
- Ruby
- Jupyter Notebook
- R
- Shell
---

Base Model: replit/replit-code-v1-3b

This model is fine-tuned on both Sahil2801's CodeAlpaca & Teknium's GPTeacher Code-Instruct to give Replit's Code model instruct capabilities.

Try this model in its HuggingFace demo Space: https://huggingface.co/spaces/teknium/Replit-v2-CodeInstruct-3B

Dataset links:
- CodeAlpaca: https://huggingface.co/datasets/sahil2801/CodeAlpaca-20k
- GPTeacher subset - Code Instruct: https://github.com/teknium1/GPTeacher

This model was trained on 2x A100 80GB GPUs for 1 hour on ~25,000 code instruction/response pairs in Alpaca format.

The first model was trained with only a 512-token sequence length; this model was trained with a 2000-token sequence length, giving it much greater access to the training data.

Refer to the base model's HuggingFace model card for the basic requirements to run it: https://huggingface.co/replit/replit-code-v1-3b
This fine-tune can be prompted like any Alpaca fine-tune:
```
### Instruction:
<prompt>

### Input:
<additional context>

### Response:
```

or

```
### Instruction:
<prompt>

### Response:

```

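Purely as an illustration (this helper is not part of the repo), a small Python function like the following can assemble these Alpaca-style prompts; the name `build_prompt` is just for this sketch:
```
def build_prompt(instruction, context=None):
    # Assemble the Alpaca-style prompt shown above. `context` fills the
    # optional "### Input:" section when additional context is available.
    if context:
        return f"### Instruction:\n{instruction}\n\n### Input:\n{context}\n\n### Response:\n"
    return f"### Instruction:\n{instruction}\n\n### Response:\n"


prompt = build_prompt("Write a Python function that checks whether a number is prime.")
```
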
This model seems to have issues with device_map="auto" in the model arguments (and it requires trust_remote_code=True), so you may want to load it the way I do here (this snippet is from inside a class, hence the `self.` prefix):
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load from a local checkout of this repo; trust_remote_code is required
# because the Replit model and tokenizer ship custom code.
self.tokenizer = AutoTokenizer.from_pretrained("./Replit-CodeInstruct/", trust_remote_code=True)
self.model = AutoModelForCausalLM.from_pretrained(
    "./Replit-CodeInstruct",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True
)
# Move the model to the GPU explicitly instead of relying on device_map="auto".
self.model.to('cuda')
```

For me, this model produced coherent outputs with the following sampler settings, but feel free to experiment:
```
max_new_tokens=128, do_sample=True, use_cache=True, temperature=0.2, top_p=0.9, eos_token_id=self.tokenizer.eos_token_id
```
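
As a minimal sketch of how those settings plug into `generate` (not code from this repo), assuming a `model` and `tokenizer` loaded as above but without the `self.` prefix, plus a `prompt` string built in the Alpaca format shown earlier:
```
# Tokenize the Alpaca-style prompt and move it to the GPU alongside the model.
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

# Generate a response with the sampler settings listed above.
output = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    use_cache=True,
    temperature=0.2,
    top_p=0.9,
    eos_token_id=tokenizer.eos_token_id,
)
```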

In the tokenizer decode arguments, it also needs these settings:
```
skip_special_tokens=True, clean_up_tokenization_spaces=False
```
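
Continuing the same sketch, the decode step with those arguments might look like this:
```
# Keep only the newly generated tokens, then decode with the settings above.
new_tokens = output[0][inputs["input_ids"].shape[1]:]
response = tokenizer.decode(new_tokens, skip_special_tokens=True, clean_up_tokenization_spaces=False)
print(response)
```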

The following parameters were used with the HuggingFace trainer to train the model:
```
--model_name_or_path replit/replit-code-v1-3b \
--data_path /root/stanford_alpaca/train.json \
--bf16 True \
--output_dir /root/stanford_alpaca/model_ckpts \
--num_train_epochs 3 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 1 \
--gradient_accumulation_steps 8 \
--save_strategy steps \
--save_steps 200 \
--save_total_limit 3 \
--learning_rate 1e-5 \
--weight_decay 0. \
--warmup_ratio 0.03 \
--tf32 True \
--run_name Replit1
```
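
These look like the flag set of a Stanford Alpaca-style training script (the script itself is not included here). Purely as an illustration, the Trainer-level flags correspond roughly to a `transformers.TrainingArguments` configuration like the one below; `--model_name_or_path` and `--data_path` are script-level arguments and are omitted:
```
from transformers import TrainingArguments

# Rough TrainingArguments equivalent of the CLI flags above (illustration only).
training_args = TrainingArguments(
    output_dir="/root/stanford_alpaca/model_ckpts",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,
    save_strategy="steps",
    save_steps=200,
    save_total_limit=3,
    learning_rate=1e-5,
    weight_decay=0.0,
    warmup_ratio=0.03,
    bf16=True,
    tf32=True,
    run_name="Replit1",
)
```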