---
language:
- en
- code

license: "mit"

tags:
- Diff Model
- pytorch
- causal-lm
- code-generation
- The Pile
---

# Diff-Codegen-6B v2 Model Card

## Model Description

diff-codegen-6b-v2 is a diff model for code generation, released by [CarperAI](http://carper.ai/). A diff model is an autoregressive language model trained on edits to a piece of text, formatted in [Unified Diff Format](https://en.wikipedia.org/wiki/Diff#Unified_format). These diff models can suggest, given a section of text and a description of the desired change, an intelligent change to the text that fits the description, marking the lines added, changed, and deleted in diff format.
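
For illustration, a unified diff describing a one-line change to a hypothetical file `example.py` looks like this:

```
--- a/example.py
+++ b/example.py
@@ -1,2 +1,2 @@
 def add(a, b):
-    return a - b
+    return a + b
```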

In comparison to few-shot prompting of standard code generation models, diff models are specialized for suggesting intelligent changes to existing code, particularly for longer pieces of code and for cases where the change is required to follow a natural language description (provided in the form of a commit message).
This model is a fine-tune of [codegen-6B-mono](https://huggingface.co/Salesforce/codegen-6B-mono) by Salesforce, trained on a large dataset of commits scraped from GitHub.

diff-codegen-6b-v2 is an experimental research artifact and should be treated as such. We are releasing these results and this model in the hope that they will be useful to the greater research community, especially those interested in LMs for code.

## Training Data

This model is a fine-tune of [codegen-6B-mono](https://huggingface.co/Salesforce/codegen-6B-mono) by Salesforce. That language model was first pre-trained on The Pile, an 800GB dataset composed of varied web corpora. The datasheet and paper for the Pile can be found [here](https://arxiv.org/abs/2201.07311) and [here](https://arxiv.org/abs/2101.00027) respectively. The model was then fine-tuned on a large corpus of code in multiple languages, before finally being fine-tuned on a Python code dataset. The CodeGen paper, with full details of these datasets, can be found [here](https://arxiv.org/abs/2203.13474).

Our dataset for this fine-tune consists of commits from GitHub, obtained using the [Google BigQuery Public Dataset](https://cloud.google.com/blog/topics/public-datasets/github-on-bigquery-analyze-all-the-open-source-code), a public, up-to-date snapshot of a huge number of open-source GitHub repositories. We filtered this dataset using [BigQuery](https://console.cloud.google.com/marketplace/details/github/github-repos) to exclude repositories with fewer than 100 stars, and further restricted the query to repositories with open-source, non-copyleft licenses (e.g. MIT, Apache) and to commits with more than 10 characters in the commit message. We also restricted ourselves to a list of 22 popular programming, scripting, and markup languages, including Python, HTML, Bash, SQL, and C++. This resulted in a dataset of 19 million commits after filtering.
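
The exact query behind this dataset is not published; the sketch below only illustrates the kind of filtering described, using the `google-cloud-bigquery` Python client. The table and column names are assumptions based on the public `github_repos` dataset, whose real schema splits star counts and licenses across separate tables:

```python
# Hypothetical sketch of the commit-filtering step described above; the actual
# query used for this dataset is not published. A real query would also join
# against the license and star-count tables and UNNEST repeated fields.
from google.cloud import bigquery

client = bigquery.Client()  # requires GCP credentials

query = """
SELECT repo_name, subject, message
FROM `bigquery-public-data.github_repos.commits`
WHERE LENGTH(message) > 10  -- keep commits with a meaningful message
LIMIT 1000
"""

for row in client.query(query):
    print(row.subject)
```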

To summarize, the diff model was trained on commits filtered by the number of stars in the repository (more than 100), by license (only open-source, non-copyleft code was included), and by file length (files longer than 2048 tokens were excluded).

The model was trained using the Hugging Face CodeGen tokenizer.
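
As an illustration of the token-based length filter mentioned above, here is a minimal sketch using that tokenizer (the actual preprocessing code is not published; `keep_file` is a hypothetical helper):

```python
# Minimal sketch of the 2048-token file-length filter described above,
# using the CodeGen tokenizer from transformers.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-6B-mono")

def keep_file(source: str, max_tokens: int = 2048) -> bool:
    """Return True if the file is short enough to be kept for training."""
    return len(tokenizer(source).input_ids) <= max_tokens
```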

## Training Details

The model was trained on 1.08 billion tokens for 1 epoch on 64 A100 GPUs, provided by [Stability AI](https://stability.ai/).

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 64
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 895
- num_epochs: 1.0

The total train batch size follows from the per-device settings: 2 sequences per device × 64 devices × 2 gradient accumulation steps = 256 sequences per optimizer step.

Framework versions:

- PyTorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.12.1

Each file was formatted as follows for input to the language model:

```
<NME> {FILE_NAME}
<BEF> {INPUT_FILE}
<MSG> {COMMIT_MESSAGE}
<DFF> {FILE_DIFF}
```
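
Putting this together, here is a minimal generation sketch using transformers. It assumes this card's model is available under the repository id `CarperAI/diff-codegen-6b-v2`; the file contents, commit message, and sampling settings are illustrative only:

```python
# Minimal sketch of prompting the diff model, assuming the prompt format
# documented above. Loading the 6B model requires substantial RAM/VRAM.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CarperAI/diff-codegen-6b-v2"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A file to edit, plus a commit message describing the desired change.
source = "def add(a, b):\n    return a - b\n"
prompt = (
    "<NME> example.py\n"
    f"<BEF> {source}\n"
    "<MSG> Fix add to return the sum\n"
    "<DFF>"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.8,
    pad_token_id=tokenizer.eos_token_id,
)
# The continuation after <DFF> should be a unified diff for the change.
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:]))
```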

## Intended Uses and Limitations

Due to the model’s small size and restriction to code, one should not expect the model to generalize beyond code or to perform (successful) reasoning over large chunks of code. This model is intended for prototyping code generation systems and solely for experimental purposes. It is provided without warranty and should not be used in commercial settings, even though the license permits it.

## Limitations and Biases

Due to the short context length and the exclusion of all repositories with fewer than 100 stars, we expect our diff model to underperform on underrepresented languages, for instance Lean or Coq.

The output of this model should not be trusted as correct and secure code. This model should not be used in any mission-critical setting where security is important. When running the output of this model, do so in a sandbox such as [gVisor](https://gvisor.dev) wherever possible, since it is entirely possible for the model to produce code that deletes files, sends HTTP requests, or otherwise contains critical security vulnerabilities.
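
For instance, generated code could be executed through Docker configured with gVisor's `runsc` runtime (see the gVisor quick-start documentation); the sketch below only illustrates the idea and is not, by itself, a vetted security boundary:

```python
# Hypothetical sketch: run model-generated code inside a gVisor sandbox via
# Docker's runsc runtime. Assumes Docker has been configured with gVisor.
import subprocess
import tempfile

generated_code = 'print("hello from the sandbox")'

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(generated_code)
    path = f.name

subprocess.run(
    [
        "docker", "run", "--rm",
        "--runtime=runsc",            # gVisor's OCI runtime
        "--network=none",             # block outbound HTTP requests
        "-v", f"{path}:/main.py:ro",  # mount the script read-only
        "python:3.11-slim",
        "python", "/main.py",
    ],
    check=False,
    timeout=30,
)
```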

As with other language models, diff-codegen is prone to hallucination and to biased, stereotyped, or toxic output. There are no guarantees of truthful output when generating from the model.

## Evaluation Results

See our blog post for full evaluation results.

## Licensing

This model is licensed under the MIT license.

## Acknowledgements

We’d like to thank Honglu Fan, Harry Saini, Herbie Bradley, Reshinth Adithyan, and Joel Lehman for their efforts! Thanks to Nitarshan Rajkumar for feedback on this model card.