Update README.md
README.md
@@ -13,7 +13,8 @@ To address the scarcity of high-quality safety datasets in Chinese, we open-
 - Model-based filtering: filtering of low-quality content by training a classification model
 - Deduplication: within and between datasets dedup
 
-The CCI 3.0 corpus released is 1010GB in size. We added rich meta information. Users can conveniently utilize the meta information of each data entry to further filter and customize the dataset.
+Besides, we added rich meta information including quality score and educational level tagged by small models. Users can conveniently utilize the meta information of each data entry to further filter and customize the dataset.
+The CCI 3.0 corpus released is about 1000GB in size.
 
 ## Update
 
@@ -27,7 +28,7 @@ The CCI 3.0 corpus released is 1010GB in size. We added rich meta information. U
 | :-------: | :----: | :--------------------------: |
 | id | String | Document ID, globally unique |
 | content | String | Content of the document |
-| meta_info | String | Meta Info of the document
+| meta_info | String | Meta Info of the document |
 
 
 ## Sample
@@ -50,7 +51,8 @@ The CCI 3.0 corpus released is 1010GB in size. We added rich meta information. U
         "2gram_repetition_ratio": 0.1016839378238342,
         "3gram_repetition_ratio": 0.0304601425793908,
         "entity_count": 424,
-        "entity_dependency_count": 184
+        "entity_dependency_count": 184,
+        "educational_level": 0
     }
 }
 ```