Datasets:
Tasks:
Fill-Mask
Sub-tasks:
masked-language-modeling
Languages:
English
Size:
10M<n<100M
ArXiv:
License:
Update README.md
README.md
CHANGED
@@ -174,7 +174,7 @@ This dataset may contain personal and sensitive information. However, this has b
 
 ### Social Impact of Dataset
 
-We hope that this dataset will provide more mechanisms for doing data work. As we describe in the paper, the internal variation allows contextual privacy rules to be learned. If robust mechanisms for this are developed they can applied more broadly. This dataset can also potentially be used for legal language model pretraining. As discussed in
+We hope that this dataset will provide more mechanisms for doing data work. As we describe in the paper, the internal variation allows contextual privacy rules to be learned. If robust mechanisms for this are developed, they can be applied more broadly. This dataset can also potentially be used for legal language model pretraining. As discussed in "On the Opportunities and Risks of Foundation Models", legal language models can help improve access to justice in various ways, but they can also be used in potentially harmful ways. While such models are not ready for most production environments and are the subject of significant research, we ask that model creators using this data, particularly when creating generative models, consider the impacts of their model and make a good-faith effort to weigh the benefits against the harms of their method. Our license and many of the sub-licenses also restrict commercial usage.
 
 ### Discussion of Biases
 
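The card lists masked-language-modeling as the sub-task, and the new paragraph notes the dataset's potential use for language model pretraining. As a minimal sketch of what that objective involves, the snippet below implements the standard BERT-style dynamic masking rule (select ~15% of tokens; of those, replace 80% with `[MASK]`, 10% with a random token, and keep 10% unchanged). The `MASK_ID` and `VOCAB_SIZE` values are toy assumptions standing in for a real tokenizer, not part of this dataset.

```python
import random

MASK_ID = 0      # hypothetical id for the [MASK] token (assumption)
VOCAB_SIZE = 100 # toy vocabulary size (assumption)

def mask_tokens(token_ids, mask_prob=0.15, rng=None):
    """Apply BERT-style dynamic masking to a list of token ids.

    Returns (inputs, labels): labels hold the original id at masked
    positions and -100 (the usual ignore index) everywhere else.
    """
    rng = rng or random.Random()
    inputs, labels = [], []
    for tok in token_ids:
        if rng.random() < mask_prob:
            labels.append(tok)          # this position is a prediction target
            r = rng.random()
            if r < 0.8:
                inputs.append(MASK_ID)                   # 80%: [MASK]
            elif r < 0.9:
                inputs.append(rng.randrange(1, VOCAB_SIZE))  # 10%: random token
            else:
                inputs.append(tok)                       # 10%: keep original
        else:
            labels.append(-100)         # not a target; loss is ignored here
            inputs.append(tok)
    return inputs, labels

inputs, labels = mask_tokens(list(range(1, 21)), rng=random.Random(0))
```

In practice the masking is re-sampled every epoch ("dynamic" masking), so the model sees different corrupted views of the same legal text; a training pipeline would apply this per batch before computing the cross-entropy loss over the labeled positions.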