model documentation

#3
by nazneen - opened
Files changed (1)
  1. README.md +173 -3
README.md CHANGED
@@ -1,3 +1,173 @@
- ---
- license: apache-2.0
- ---
+ # Model Card for UniXcoder-base
+
+ # Model Details
+
+ ## Model Description
+ UniXcoder is a unified cross-modal pre-trained model that leverages multimodal data (i.e., code comments and abstract syntax trees (ASTs)) to pre-train code representations.
+
+ - **Developed by:** Microsoft Team
+ - **Shared by [Optional]:** Hugging Face
+ - **Model type:** Feature extraction
+ - **Language(s) (NLP):** en
+ - **License:** Apache-2.0
+ - **Related Models:**
+   - **Parent Model:** RoBERTa
+ - **Resources for more information:**
+   - [Associated Paper](https://arxiv.org/abs/2203.03850)
+
+ # Uses
+
+ ## Direct Use
+
+ The model can be used for feature extraction, i.e., to obtain embeddings of source code and natural-language comments; a minimal sketch follows.
+
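+ The sketch below is illustrative and not from the model authors: it tokenizes a small code fragment and mean-pools the encoder's last hidden states into a single vector. The example input and the mean-pooling choice are assumptions.
+
+ ```python
+ import torch
+ from transformers import AutoTokenizer, AutoModel
+
+ tokenizer = AutoTokenizer.from_pretrained("microsoft/unixcoder-base")
+ model = AutoModel.from_pretrained("microsoft/unixcoder-base")
+
+ # Hypothetical input: a small Python function.
+ code = "def max(a, b): return a if a > b else b"
+ inputs = tokenizer(code, return_tensors="pt", truncation=True)
+
+ with torch.no_grad():
+     outputs = model(**inputs)
+
+ # Token-level features: one vector per input token.
+ token_features = outputs.last_hidden_state
+ # Fragment-level feature: mean over tokens (an assumed pooling choice).
+ fragment_embedding = token_features.mean(dim=1)
+ print(fragment_embedding.shape)  # (1, hidden_size)
+ ```
+
+ Token-level features can also be kept un-pooled for tasks that need per-token representations.
+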
+ ## Downstream Use [Optional]
+
+ More information needed
+
+ ## Out-of-Scope Use
+
+ More information needed
+
+ # Bias, Risks, and Limitations
+
+ Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
+
+ ## Recommendations
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations.
+
+ # Training Details
+
+ ## Training Data
+
+ More information needed
+
+ ## Training Procedure
+
+ ### Preprocessing
+
+ More information needed
+
+ ### Speeds, Sizes, Times
+
+ More information needed
+
+ # Evaluation
+
+ ## Testing Data, Factors & Metrics
+
+ ### Testing Data
+
+ More information needed
+
+ ### Factors
+
+ The model creators note in the [associated paper](https://arxiv.org/abs/2203.03850):
+
+ > UniXcoder has slightly worse BLEU-4 scores on both code summarization and generation tasks. The main reasons may come from two aspects. One is the amount of NL-PL pairs in the pre-training data [...]
+
+ ### Metrics
+
+ The model creators note in the [associated paper](https://arxiv.org/abs/2203.03850):
+
+ > We evaluate UniXcoder on five tasks over nine public datasets, including two understanding tasks, two generation tasks and an autoregressive task. To further evaluate the performance of code fragment embeddings, we also propose a new task called zero-shot code-to-code search.
+
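+ As a rough, hypothetical sketch of the zero-shot code-to-code search idea quoted above, the snippet below ranks candidate fragments against a query by cosine similarity of normalized, mean-pooled embeddings. The `embed` helper and the query/candidate snippets are invented for illustration and are not the authors' evaluation code.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+ from transformers import AutoTokenizer, AutoModel
+
+ tokenizer = AutoTokenizer.from_pretrained("microsoft/unixcoder-base")
+ model = AutoModel.from_pretrained("microsoft/unixcoder-base")
+
+ def embed(code: str) -> torch.Tensor:
+     # Mean-pool last hidden states into one normalized vector per fragment
+     # (an assumed pooling/scoring scheme, for illustration only).
+     inputs = tokenizer(code, return_tensors="pt", truncation=True)
+     with torch.no_grad():
+         hidden = model(**inputs).last_hidden_state
+     return F.normalize(hidden.mean(dim=1), dim=-1).squeeze(0)
+
+ # Hypothetical corpus: rank candidates against a query by cosine similarity.
+ query = embed("def add(a, b): return a + b")
+ candidates = ["def sum_two(x, y): return x + y", "print('hello world')"]
+ scores = [torch.dot(query, embed(c)).item() for c in candidates]
+ print(sorted(zip(scores, candidates), reverse=True))
+ ```
+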
+ ## Results
+
+ The model creators note in the [associated paper](https://arxiv.org/abs/2203.03850):
+
+ > Taking zero-shot code-code search task as an example, after removing contrastive learning, the performance drops from 20.45% to 13.73%.
+
+ # Model Examination
+
+ More information needed
+
+ # Environmental Impact
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** More information needed
+ - **Hours used:** More information needed
+ - **Cloud Provider:** More information needed
+ - **Compute Region:** More information needed
+ - **Carbon Emitted:** More information needed
+
+ # Technical Specifications [optional]
+
+ ## Model Architecture and Objective
+
+ More information needed
+
+ ## Compute Infrastructure
+
+ More information needed
+
+ ### Hardware
+
+ More information needed
+
+ ### Software
+
+ More information needed
+
+ # Citation
+
+ **BibTeX:**
+ ```
+ @misc{https://doi.org/10.48550/arxiv.2203.03850,
+   doi = {10.48550/ARXIV.2203.03850},
+   url = {https://arxiv.org/abs/2203.03850},
+   author = {Guo, Daya and Lu, Shuai and Duan, Nan and Wang, Yanlin and Zhou, Ming and Yin, Jian},
+   keywords = {Computation and Language (cs.CL), Programming Languages (cs.PL), Software Engineering (cs.SE), FOS: Computer and information sciences},
+   title = {UniXcoder: Unified Cross-Modal Pre-training for Code Representation},
+   publisher = {arXiv},
+   year = {2022}
+ }
+ ```
+
+ # Glossary [optional]
+
+ More information needed
+
+ # More Information [optional]
+
+ More information needed
+
+ # Model Card Authors [optional]
+
+ Microsoft Team in collaboration with Ezi Ozoani and the Hugging Face Team.
+
+ # Model Card Contact
+
+ More information needed
+
+ # How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ <details>
+ <summary> Click to expand </summary>
+
+ ```python
+ from transformers import AutoTokenizer, AutoModel
+
+ # Load the UniXcoder tokenizer and encoder from the Hugging Face Hub.
+ tokenizer = AutoTokenizer.from_pretrained("microsoft/unixcoder-base")
+ model = AutoModel.from_pretrained("microsoft/unixcoder-base")
+ ```
+
+ </details>
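+
+ Once loaded, the checkpoint behaves like a RoBERTa-style encoder: tokenizing a code string and running a forward pass yields hidden states that can be pooled into fragment embeddings, as sketched under Direct Use above.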