arampacha committed on
Commit 7b603af
1 Parent(s): ad81500

Update README.md

Files changed (1)
  1. README.md +5 -7
README.md CHANGED
```diff
@@ -11,11 +11,11 @@ datasets:
 
 ---
 
-# GPT-Code-Clippy-125M-APPS
+# GPT-Code-Clippy-1.3B-APPS-all
 
 ## Model Description
 
-GPT-CC-125M-APPS is a GPT-Neo-125M finetuned on APPS dataset. This model is specialized to solve programming tasks.
+GPT-CC-1.3B-APPS-all is a GPT-Neo-1.3B fine-tuned on the APPS dataset. This model is specialized to solve programming tasks.
 
 ## Training data
 
@@ -58,7 +58,7 @@ python run_clm_apps.py \
 
 ## Intended Use and Limitations
 
-The model is finetuned to solve programming problems given a text description and optional starter code.
+The model is fine-tuned to solve programming problems given a text description and optional starter code.
 
 ### How to use
 
@@ -104,11 +104,9 @@ The paper ["Evaluating Large Language Models Trained on Code"](https://arxiv.org
 
 2. **Economic and labor market impacts:** Large language models trained on large code datasets such as this one that are capable of generating high-quality code have the potential to automate part of the software development process. This may negatively impact software developers. However, as discussed in the paper, as shown in the Summary Report of software developers from [O*NET OnLine](https://www.onetonline.org/link/summary/15-1252.00), developers don't just write software.
 
-5. **Biases:** The model is trained on data containing prompt questions formatted in specific way. The performance of the model can be worse if the prompt
+5. **Biases:** The model is trained on data containing prompt questions formatted in a specific way. The performance of the model can be worse if the prompt formatting differs from that used in the APPS dataset.
 
-formatting is different from that used in APPS dataset.
-
-GPT-CC is finetuned GPT-Neo and might have inhereted biases and limitations from it. See [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.
+This model is fine-tuned from GPT-Neo and might have inherited biases and limitations from it. See the [GPT-Neo model card](https://huggingface.co/EleutherAI/gpt-neo-125M#limitations-and-biases) for details.
 
 ## Eval results
```
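The **Biases** caveat in the last hunk turns on the prompt template used during fine-tuning. As a rough illustration, here is a minimal sketch of an APPS-style prompt builder; the `QUESTION:`/`ANSWER:` layout and the `Use Standard Input format` / `Use Call-Based format` markers follow the APPS benchmark convention and are an assumption here, since the exact template is not shown in this diff.

```python
# Hypothetical sketch of an APPS-style prompt; the exact template used
# for fine-tuning is assumed from the APPS benchmark convention.
def build_apps_prompt(question: str, starter_code: str = "") -> str:
    """Format a programming problem the way APPS-style training data does."""
    # "Use Call-Based format" would be used instead when the problem
    # provides starter code and expects a function to complete.
    answer_type = "\nUse Standard Input format\n"
    return "\nQUESTION:\n" + question + "\n" + starter_code + answer_type + "\nANSWER:\n"

print(build_apps_prompt("Given an integer n, print the sum 1 + 2 + ... + n."))
```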
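For the `### How to use` section touched by the second hunk, a minimal generation sketch with 🤗 Transformers might look like the following; the checkpoint id is a placeholder rather than something confirmed by this diff, so substitute the model's actual Hub repo id.

```python
# Minimal usage sketch, assuming a causal-LM checkpoint on the Hub;
# the repo id below is a placeholder, not confirmed by this diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "flax-community/gpt-code-clippy-1.3B-apps-all"  # placeholder id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# APPS-style prompt: text description plus optional starter code.
prompt = (
    "\nQUESTION:\nWrite a function that returns the square of a number.\n"
    "def square(n):\n"
    "\nUse Call-Based format\n"
    "\nANSWER:\n"
)

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(
    input_ids,
    max_length=input_ids.shape[1] + 64,  # up to 64 new tokens
    do_sample=True,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens, i.e. the proposed solution.
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```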