Model documentation

#2
by nazneen - opened
Files changed (1)
  1. README.md +178 -0
README.md ADDED
# Model Card for plbart-base

# Model Details

## Model Description

The PLBART model was proposed in [Unified Pre-training for Program Understanding and Generation](https://arxiv.org/abs/2103.06333) by Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang.

- **Developed by:** UCLA NLP
- **Shared by [Optional]:** [Gunjan Chhablani](https://huggingface.co/gchhablani)
- **Model type:** Text2Text Generation
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Related Models:** bert-base-multilingual-uncased
- **Parent Model:** plbart
- **Resources for more information:**
  - [Associated Paper](https://arxiv.org/abs/2103.06333)
  - [Model Documentation](https://huggingface.co/docs/transformers/model_doc/plbart)
# Uses

## Direct Use

The pre-trained plbart-base model was trained using a multilingual denoising task on Java, Python, and English, and can be used directly for that objective, as sketched below.
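As a hedged illustration of the denoising objective, the sketch below fills a `<mask>` span in a Java snippet. It assumes the `PLBartTokenizer`/`PLBartForConditionalGeneration` classes in `transformers` accept the `java` language code for this checkpoint; the generated text depends on the checkpoint and is not guaranteed.

```python
from transformers import PLBartForConditionalGeneration, PLBartTokenizer

# Load the tokenizer with explicit source/target language codes.
tokenizer = PLBartTokenizer.from_pretrained(
    "uclanlp/plbart-base", src_lang="java", tgt_lang="java"
)
model = PLBartForConditionalGeneration.from_pretrained("uclanlp/plbart-base")

# A Java snippet with a masked span, matching the denoising objective.
example = "public int maximum(int a, int b) { return Math.<mask>(a, b); }"
inputs = tokenizer(example, return_tensors="pt")

# Decoding starts from the target-language code token.
outputs = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["java"],
    max_length=64,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```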
## Downstream Use [Optional]

More information needed

## Out-of-Scope Use

More information needed
# Bias, Risks, and Limitations

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.

## Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
# Training Details

## Training Data

More information needed

## Training Procedure
### Preprocessing

The model creators note in the [associated paper](https://arxiv.org/pdf/2103.06333.pdf):
> We tokenize all the data with a sentencepiece model (Kudo and Richardson, 2018) learned on 1/5’th of the pre-training data. We train sentencepiece to learn 50,000 subword tokens. One key challenge to aggregate data from different modalities is that some modalities may have more data, such as we have 14 times more data in PL than NL. Therefore, we mix and up/down sample the data following Conneau and Lample (2019) to alleviate the bias towards PL.
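As a minimal sketch of the tokenizer-training step quoted above (not the authors' exact command), the following trains a SentencePiece model with a 50,000-token vocabulary using the `sentencepiece` Python package; `pretrain_subset.txt` is a hypothetical placeholder for a sample of the pre-training corpus.

```python
import sentencepiece as spm

# Train a SentencePiece model with a 50k subword vocabulary,
# mirroring the setup described in the quote above.
spm.SentencePieceTrainer.train(
    input="pretrain_subset.txt",
    model_prefix="plbart_sp",
    vocab_size=50000,
)

# Load the trained model and tokenize a snippet.
sp = spm.SentencePieceProcessor(model_file="plbart_sp.model")
print(sp.encode("def add(a, b): return a + b", out_type=str))
```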
### Speeds, Sizes, Times

The model creators note in the [associated paper](https://arxiv.org/pdf/2103.06333.pdf):
> The effective batch size is maintained at 2048 instances.
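For context, the effective batch size is the per-device batch size times the number of devices times the number of gradient-accumulation steps. In the sketch below, only the 2048 total (from the quote) and the 8-GPU count (from this card) are reported figures; the per-device size and accumulation steps are hypothetical values chosen to make the arithmetic work.

```python
# Effective batch size = per-device batch × number of GPUs × accumulation steps.
per_device_batch = 32          # hypothetical
num_gpus = 8                   # from the hardware listed in this card
grad_accumulation_steps = 8    # hypothetical

effective_batch = per_device_batch * num_gpus * grad_accumulation_steps
assert effective_batch == 2048
print(effective_batch)
```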
# Evaluation

## Testing Data, Factors & Metrics

### Testing Data

The model creators note in the [associated paper](https://arxiv.org/pdf/2103.06333.pdf):
> CodeXGLUE (Lu et al., 2021) provided public dataset and corresponding train validation-test splits for all the tasks
### Factors

More information needed

### Metrics

More information needed

## Results

More information needed

# Model Examination

More information needed
# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** 8 Nvidia GeForce RTX 2080 Ti GPUs
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
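A back-of-the-envelope version of the estimate the calculator performs, following Lacoste et al. (2019): energy (kWh) times the grid's carbon intensity. Every number below except the 8-GPU count and the 250 W TDP of an RTX 2080 Ti is a labeled assumption, since the card does not report hours or region.

```python
# Rough CO2 estimate: power draw (kW) × hours × grid carbon intensity
# (kg CO2eq per kWh), per Lacoste et al. (2019).
num_gpus = 8             # from this card
gpu_power_kw = 0.25      # RTX 2080 Ti TDP is 250 W
hours = 100.0            # hypothetical: training duration is not reported
carbon_intensity = 0.4   # hypothetical kg CO2eq/kWh, grid-dependent

kwh = num_gpus * gpu_power_kw * hours
print(f"{kwh:.0f} kWh, approx. {kwh * carbon_intensity:.0f} kg CO2eq")
```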
# Technical Specifications [optional]

## Model Architecture and Objective

PLBart is a multilingual encoder-decoder (sequence-to-sequence) model intended primarily for code-to-text, text-to-code, and code-to-code tasks. Because the model is multilingual, it expects sequences in a specific format: a special language-id token is added to both the source and target text. The source text format is `X [eos, src_lang_code]`, where `X` is the source text.
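To make that input format concrete, the sketch below tokenizes a snippet and prints the trailing token ids so the `eos` and language-code tokens are visible. It assumes `PLBartTokenizer` accepts a `src_lang` argument with the `java` code for this checkpoint.

```python
from transformers import PLBartTokenizer

# With src_lang set, the tokenizer appends [eos, src_lang_code] to the
# encoded source text, as described above.
tokenizer = PLBartTokenizer.from_pretrained("uclanlp/plbart-base", src_lang="java")

encoded = tokenizer("int add(int a, int b) { return a + b; }")
print(encoded["input_ids"])

# The last two ids should be the eos token and the java language code.
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"][-2:]))
```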
## Compute Infrastructure

The model creators note in the [associated paper](https://arxiv.org/pdf/2103.06333.pdf):
> PLBART uses the same architecture as BARTbase (Lewis et al., 2020), it uses the sequence-to-sequence Transformer architecture (Vaswani et al., 2017), with 6 layers of encoder and 6 layers of decoder with model dimension of 768 and 12 heads (∼140M parameters). The only exception is, we include an additional layer normalization layer on top of both the encoder and decoder following Liu et al. (2020).
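The quoted hyperparameters can be checked against the published checkpoint's configuration; a small sketch, assuming the standard BART-style attribute names on `PLBartConfig` in `transformers`:

```python
from transformers import PLBartConfig

# Fetch the checkpoint's config and print the numbers quoted above:
# 6 encoder layers, 6 decoder layers, d_model 768, 12 attention heads.
config = PLBartConfig.from_pretrained("uclanlp/plbart-base")
print(config.encoder_layers, config.decoder_layers)
print(config.d_model, config.encoder_attention_heads)
```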
### Hardware

More information needed

### Software

More information needed

# Citation

**BibTeX:**
```
@misc{https://doi.org/10.48550/arxiv.2103.06333,
  doi = {10.48550/ARXIV.2103.06333},
  url = {https://arxiv.org/abs/2103.06333},
  author = {Ahmad, Wasi Uddin and Chakraborty, Saikat and Ray, Baishakhi and Chang, Kai-Wei},
  keywords = {Computation and Language (cs.CL), Programming Languages (cs.PL), FOS: Computer and information sciences},
  title = {Unified Pre-training for Program Understanding and Generation},
  publisher = {arXiv},
  year = {2021},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```
**APA:**
Ahmad, W. U., Chakraborty, S., Ray, B., & Chang, K.-W. (2021). Unified pre-training for program understanding and generation. arXiv. https://arxiv.org/abs/2103.06333
# Glossary [optional]

> CodeBLEU is a metric for measuring the quality of the synthesized code (Ren et al., 2020). Unlike BLEU, CodeBLEU also considers grammatical and logical correctness based on the abstract syntax tree and the data-flow structure.
# More Information [optional]

More information needed

# Model Card Authors [optional]

UCLA NLP in collaboration with Ezi Ozoani and the Hugging Face team

# Model Card Contact

More information needed
# How to Get Started with the Model

Use the code below to get started with the model.

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the tokenizer and the sequence-to-sequence model from the Hub.
tokenizer = AutoTokenizer.from_pretrained("uclanlp/plbart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("uclanlp/plbart-base")
```

</details>