Tasks: Text Generation
Modalities: Text
Formats: parquet
Languages: Arabic
Size: 10M - 100M
Update README.md
README.md
CHANGED
@@ -59,11 +59,8 @@ This dataset was created to address the significant lack of large-scale, high-qu
 
 #### Data Collection and Processing
 
-
-
-#### Who are the source data producers?
-
-The data was produced by web content creators worldwide and collected through the Common Crawl project, which provides an extensive archive of the web's content.
+We initially gathered data from specified sources, primarily Common Crawl, and extracted Arabic content from WET files using Rust. Then, we applied our preprocessing pipeline, which included text cleaning and
+deduplication.
 
 ## Bias, Risks, and Limitations
 
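The added description mentions extracting Arabic content from Common Crawl WET files with Rust but does not include the code. Below is a minimal, hypothetical Rust sketch of that step: it walks a decompressed WET file, splits it into WARC records, and keeps records whose payload is predominantly Arabic script. The file name, the 0.5 Arabic-character threshold, and the std-only record parsing are illustrative assumptions, not the dataset's actual pipeline.

```rust
// Hypothetical sketch, not the dataset's actual code: scan a decompressed
// Common Crawl WET file and keep records whose text is mostly Arabic.
use std::fs::File;
use std::io::{BufRead, BufReader};

/// Fraction of alphabetic characters in the main Arabic block (U+0600 to U+06FF).
fn arabic_ratio(text: &str) -> f64 {
    let (mut alpha, mut arabic) = (0usize, 0usize);
    for c in text.chars() {
        if c.is_alphabetic() {
            alpha += 1;
            if ('\u{0600}'..='\u{06FF}').contains(&c) {
                arabic += 1;
            }
        }
    }
    if alpha == 0 { 0.0 } else { arabic as f64 / alpha as f64 }
}

/// Keep the record's payload (the text after the WARC headers) if it looks Arabic.
fn keep_if_arabic(record: &str, out: &mut Vec<String>) {
    if let Some((_headers, payload)) = record.split_once("\n\n") {
        if arabic_ratio(payload) > 0.5 {
            out.push(payload.trim().to_string());
        }
    }
}

fn main() -> std::io::Result<()> {
    // Illustrative path; real WET files are gzip-compressed (.warc.wet.gz)
    // and would be decompressed first.
    let reader = BufReader::new(File::open("CC-MAIN-example.warc.wet")?);

    let mut record = String::new();
    let mut arabic_docs: Vec<String> = Vec::new();

    for line in reader.lines() {
        let line = line?;
        // Every WARC record starts with a version line; flush the previous record.
        if line.starts_with("WARC/1.0") && !record.is_empty() {
            keep_if_arabic(&record, &mut arabic_docs);
            record.clear();
        }
        record.push_str(&line);
        record.push('\n');
    }
    keep_if_arabic(&record, &mut arabic_docs);

    println!("kept {} Arabic records", arabic_docs.len());
    Ok(())
}
```

A production extractor would stream-decompress the gzipped shards and run many files in parallel; the sketch omits that to stay dependency-free.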
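The changed text also names deduplication as part of the preprocessing pipeline without specifying the method. As one illustration only, the sketch below performs exact-match deduplication by hashing whitespace-normalized documents; nothing in the card says whether exact or near-duplicate (e.g. MinHash-style) removal was actually used.

```rust
// Hypothetical exact-match deduplication over the extracted documents.
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

/// Collapse whitespace so trivial formatting differences do not hide duplicates.
fn normalize(doc: &str) -> String {
    doc.split_whitespace().collect::<Vec<_>>().join(" ")
}

/// Keep the first occurrence of each normalized document.
fn dedup(docs: Vec<String>) -> Vec<String> {
    let mut seen: HashSet<u64> = HashSet::new();
    let mut unique = Vec::new();
    for doc in docs {
        let mut hasher = DefaultHasher::new();
        normalize(&doc).hash(&mut hasher);
        if seen.insert(hasher.finish()) {
            unique.push(doc);
        }
    }
    unique
}

fn main() {
    let docs = vec![
        "مرحبا  بالعالم".to_string(),
        "مرحبا بالعالم".to_string(), // duplicate once whitespace is normalized
        "نص آخر مختلف".to_string(),
    ];
    println!("{} unique documents", dedup(docs).len()); // prints: 2 unique documents
}
```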