blindsubmissions committed
Commit 2312b31 · 1 Parent(s): d583690
Update README.md
README.md CHANGED
@@ -116,20 +116,19 @@ Each data instance corresponds to function/methods occurring in licensed files t

 M2CRB is released with the aim to increase the coverage of the NLP for code research community by providing data from scarce combinations of languages. We expect this data to help enable more accurate information retrieval systems and text-to-code or code-to-text summarization on languages other than English.

-As a subset of The Stack, this dataset inherits de-risking efforts carried out when that dataset was built, though we highlight risks exist and malicious use of the data could exist such as, for instance, to aid on creation of malicious code. We highlight however
+As a subset of The Stack, this dataset inherits de-risking efforts carried out when that dataset was built, though we highlight risks exist and malicious use of the data could exist such as, for instance, to aid on creation of malicious code. We highlight however that this is a risk shared by any code dataset made openly available.

+Moreover, we remark that while unlikely due to human filtering, the data may contain harmful or offensive language, which could be learned by the models.

 ## Discussion of Biases

-The data is collected from GitHub and naturally occurring text on that platform
-
-Moreover, certain language combinations are more or less likely to contain well documented code and, as such, resulting data will not be uniformly represented.
+The data is collected from GitHub and naturally occurring text on that platform. As a consequence, certain language combinations are more or less likely to contain well documented code and, as such, resulting data will not be uniformly represented in terms of their natural and programming languages.

 ## Known limitations

 While we cover 16 scarce combinations of programming and natural languages, our evaluation dataset can be expanded to further improve its coverage.
-Moreover, we use text naturally occurring as comments or docstrings as opposed to human annotators. As such, resulting data will have high variance in terms of quality and
-Finally, we that some imbalance on data is observed due to the same reason since
+Moreover, we use text naturally occurring as comments or docstrings as opposed to human annotators. As such, resulting data will have high variance in terms of quality and depending on practices of sub-communities of software developers. However, we remark that the task our evaluation dataset defines is reflective of what searching on a real codebase would look like.
+Finally, we note that some imbalance on data is observed due to the same reason since certain language combinations are more or less likely to contain well documented code.

 ## Maintenance plan: