Datasets:

Modalities: Text
Formats: parquet
Languages: Arabic
Libraries: Datasets, Dask
Manel-Hik committed
Commit 62a303b
Parent: c3358e0

Update README.md

Files changed (1):
  1. README.md +2 -5
README.md CHANGED
@@ -59,11 +59,8 @@ This dataset was created to address the significant lack of large-scale, high-qu
 
 #### Data Collection and Processing
 
-Data was collected from the Common Crawl archive, focusing on Arabic content within a specified time frame. The data went through extensive cleaning and deduplication processes to ensure quality and relevance.
-
-#### Who are the source data producers?
-
-The data was produced by web content creators worldwide and collected through the Common Crawl project, which provides an extensive archive of the web's content.
+We initially gathered data from specified sources, primarily Common Crawl, and extracted Arabic content from WET files using Rust. Then, we applied our preprocessing pipeline, which included text cleaning and
+deduplication.
 
 ## Bias, Risks, and Limitations
 
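The revised paragraph says Arabic content was extracted from Common Crawl WET files using Rust, but the card does not include the pipeline itself. A minimal sketch of the filtering step might look like the following. All names (`arabic_ratio`, `arabic_records`) and the character-ratio heuristic are illustrative assumptions, not the actual pipeline; real WET files are gzip-compressed WARC archives, assumed decompressed upstream here.

```rust
/// Fraction of alphabetic characters falling in the main Arabic Unicode blocks.
/// (Hypothetical helper; a real pipeline would likely use a language-ID model.)
fn arabic_ratio(text: &str) -> f64 {
    let mut letters = 0usize;
    let mut arabic = 0usize;
    for c in text.chars() {
        if c.is_alphabetic() {
            letters += 1;
            if ('\u{0600}'..='\u{06FF}').contains(&c) || ('\u{0750}'..='\u{077F}').contains(&c) {
                arabic += 1;
            }
        }
    }
    if letters == 0 { 0.0 } else { arabic as f64 / letters as f64 }
}

/// Split decompressed WET text into WARC records and keep bodies that look
/// predominantly Arabic. The record body starts after the blank line that
/// terminates the WARC headers.
fn arabic_records(wet_text: &str, threshold: f64) -> Vec<String> {
    wet_text
        .split("WARC/1.0")
        .filter_map(|rec| {
            let body = rec
                .split_once("\r\n\r\n")
                .or_else(|| rec.split_once("\n\n"))?
                .1;
            (arabic_ratio(body) >= threshold).then(|| body.trim().to_string())
        })
        .collect()
}

fn main() {
    // Two toy records: one Arabic, one English; only the first should survive.
    let sample = "WARC/1.0\r\nWARC-Type: conversion\r\n\r\nمرحبا بالعالم\nWARC/1.0\r\nWARC-Type: conversion\r\n\r\nhello world\n";
    let kept = arabic_records(sample, 0.5);
    println!("kept {} record(s)", kept.len());
}
```

The cleaning and deduplication stages mentioned in the diff would run downstream of a filter like this, on the surviving record bodies.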