GaoangLau committed
Commit • ab4564e • 0 Parent(s)
reinit add train.csv
- .gitattributes +3 -0
- README.md +48 -0
- cn_ner.py +18 -0
- dataset_infos.json +18 -0
- dev.csv +3 -0
- dev.py +74 -0
- jsons/bank.json +0 -0
- jsons/boson.json +0 -0
- jsons/ccfbdci.json +0 -0
- jsons/ccks2017_task2.json +0 -0
- jsons/ccks2018_task1.json +0 -0
- jsons/ccks2019_task1.json +0 -0
- jsons/cluener2020.json +0 -0
- jsons/cmeee.json +0 -0
- jsons/dlner.json +0 -0
- jsons/ecommerce.json +0 -0
- jsons/finance_sina.json +0 -0
- jsons/fned.json +0 -0
- jsons/gaiic2022_task2.json +0 -0
- jsons/imcs21_task1.json +0 -0
- jsons/mmc.json +0 -0
- jsons/msra.json +0 -0
- jsons/nlpcc2018_task4.json +0 -0
- jsons/people_dairy_1998.json +0 -0
- jsons/people_dairy_2014.json +0 -0
- jsons/resume.json +0 -0
- jsons/wanchuang.json +0 -0
- jsons/weibo.json +0 -0
- train.csv +3 -0
.gitattributes
ADDED
@@ -0,0 +1,3 @@
+dev.csv filter=lfs diff=lfs merge=lfs -text
+test.csv filter=lfs diff=lfs merge=lfs -text
+train.csv filter=lfs diff=lfs merge=lfs -text
README.md
ADDED
@@ -0,0 +1,48 @@
+---
+language:
+- zh
+pretty_name: "Chinese NER dataset"
+tags:
+- ner
+license: "bsd"
+task_categories:
+- token-classification
+---
+
+Source: https://github.com/liucongg/NLPDataSet
+
+
+* The following 22 datasets collected from the web were cleaned and consolidated into a fairly comprehensive Chinese NER dataset: CMeEE, IMCS21_task1, CCKS2017_task2, CCKS2018_task1, CCKS2019_task1, CLUENER2020, MSRA, NLPCC2018_task4, CCFBDCI, MMC, WanChuang, PeopleDairy1998, PeopleDairy2004, GAIIC2022_task2, WeiBo, ECommerce, FinanceSina, BoSon, Resume, Bank, FNED, and DLNER.
+* Cleaning was limited to simple rule-based filtering; all datasets were converted to a unified format with "BIO" labels.
+* For details on the processed datasets, see the [dataset description](https://zhuanlan.zhihu.com/p/529541521).
+* The dataset was compiled together with [NJUST-TB](https://github.com/Swag-tb).
+* Since some of the source data contains nested entities, longer entities override shorter ones when converting to BIO labels (see the sketch below).
+
+| Dataset | Original data / project | Description |
+| ------ | ------ | ------ |
+| CMeEE | [link](http://www.cips-chip.org.cn/2021/CBLUE) | Medical entity recognition dataset from the CBLUE Chinese medical information processing benchmark |
+| IMCS21_task1 | [link](http://www.fudan-disc.com/sharedtask/imcs21/index.html?spm=5176.12282016.0.0.140e6d92ypyW1r) | Named entity recognition dataset from the CCL2021 First Intelligent Dialogue Diagnosis evaluation |
+| CCKS2017_task2 | [link](https://www.biendata.xyz/competition/CCKS2017_2/) | CCKS2017 named entity recognition dataset for electronic medical records |
+| CCKS2018_task1 | [link](https://www.biendata.xyz/competition/CCKS2018_1/) | CCKS2018 named entity recognition dataset for Chinese electronic medical records |
+| CCKS2019_task1 | [link](http://openkg.cn/dataset/yidu-s4k) | CCKS2019 named entity recognition dataset for Chinese electronic medical records |
+| CLUENER | [link](https://github.com/CLUEbenchmark/CLUENER2020) | CLUENER2020 dataset |
+| MSRA | [link](https://www.msra.cn/) | Open-source named entity recognition dataset from Microsoft Research Asia (MSRA) |
+| NLPCC2018_task4 | [link](http://tcci.ccf.org.cn/conference/2018/taskdata.php) | Task-oriented dialogue system dataset |
+| CCFBDCI | [link](https://www.datafountain.cn/competitions/510) | Robustness evaluation dataset for Chinese named entity recognition algorithms |
+| MMC | [link](https://tianchi.aliyun.com/competition/entrance/231687/information) | Ruijin Hospital MMC AI-assisted knowledge graph construction competition dataset |
+| WanChuang | [link](https://tianchi.aliyun.com/competition/entrance/531827/introduction) | "WanChuang Cup" Traditional Chinese Medicine Tianchi Big Data Competition, Smart TCM Application Innovation Challenge dataset |
+| PeopleDairy1998 | [link]() | People's Daily 1998 dataset |
+| PeopleDairy2004 | [link]() | People's Daily 2004 dataset |
+| GAIIC2022_task2 | [link](https://www.heywhale.com/home/competition/620b34ed28270b0017b823ad/content/2) | Product title entity recognition dataset from the 2022 Global Artificial Intelligence Technology Innovation Competition |
+| WeiBo | [link](https://github.com/hltcoe/golden-horse) | Chinese named entity recognition dataset for social media |
+| ECommerce | [link](https://github.com/allanj/ner_incomplete_annotation) | Named entity recognition dataset for e-commerce |
+| FinanceSina | [link](https://github.com/jiesutd/LatticeLSTM) | Chinese named entity recognition dataset crawled from Sina Finance |
+| BoSon | [link](https://github.com/bosondata) | BosonNLP Chinese named entity recognition dataset |
+| Resume | [link](https://github.com/jiesutd/LatticeLSTM/tree/master/ResumeNER) | Resumes of senior executives of companies listed on Chinese stock markets |
+| Bank | [link](https://www.heywhale.com/mw/dataset/617969ec768f3b0017862990/file) | Bank lending dataset |
+| FNED | [link](https://www.datafountain.cn/competitions/561/datasets) | Domain event detection dataset under high-robustness requirements |
+| DLNER | [link](https://github.com/lancopku/Chinese-Literature-NER-RE-Dataset) | Discourse-level named entity recognition dataset |
+
+The cleaned and reformatted data can be downloaded here: [Baidu Cloud](https://pan.baidu.com/s/1VvbvWPv3eM4MXsv_nlDSSA)
+<br>Extraction code: 4sea
+
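To make the character-level "BIO" scheme and the nested-entity rule described in the README concrete, here is a minimal illustrative sketch; the `to_bio` helper, the example string, and the entity spans are hypothetical and not taken from the dataset files.

```python
# Minimal sketch of the character-level BIO scheme described in the README.
# The helper, example text, and spans below are hypothetical illustrations.
def to_bio(text, spans):
    """spans: (start, end, label) tuples with end exclusive.
    Applying longer spans last means they override nested shorter ones,
    matching the "long entities cover short entities" rule above."""
    tags = ["O"] * len(text)
    for start, end, label in sorted(spans, key=lambda s: s[1] - s[0]):
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags

text = "北京大学第一医院"                  # "Peking University First Hospital"
spans = [(0, 4, "ORG"), (0, 8, "ORG")]    # nested entities; the longer one wins
print(" ".join(to_bio(text, spans)))      # one space-separated BIO tag per character
```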
cn_ner.py
ADDED
@@ -0,0 +1,18 @@
+# --------------------------------------------
+
+import random
+import json
+import re
+import sys
+import time
+from collections import defaultdict
+from functools import reduce
+
+import joblib
+import numpy as np
+import pandas as pd
+from typing import List, Union, Callable, Set, Dict, Tuple, Optional, Any
+
+# --------------------------------------------
+
+from datasets import load_dataset
dataset_infos.json
ADDED
@@ -0,0 +1,18 @@
+{
+  "default": {
+    "description": "Chinese NER dataset",
+    "citation": "",
+    "homepage": "",
+    "license": "bsd",
+    "supervised_keys": null,
+    "config_name": "default",
+    "version": {
+      "version_str": "0.1.0",
+      "description": null,
+      "datasets_version_to_prepare": null,
+      "major": 0,
+      "minor": 1,
+      "patch": 0
+    }
+  }
+}
dev.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2ad6e9d176ef8b20442ec2af6d50057b104f6532dd7f9a67a1cb241b7b1b15d2
+size 1261
dev.py
ADDED
@@ -0,0 +1,74 @@
+# --------------------------------------------
+import keyring as kr
+import os
+import random
+import json
+import re
+import sys
+import time
+from collections import defaultdict
+from functools import reduce
+
+import codefast as cf
+import joblib
+import numpy as np
+import pandas as pd
+from rich import print
+from typing import List, Union, Callable, Set, Dict, Tuple, Optional, Any
+
+from pydantic import BaseModel
+
+import asyncio
+import aiohttp
+import aioredis
+from codefast.patterns.pipeline import Pipeline, BeeMaxin
+# --------------------------------------------
+from datasets import load_dataset
+
+
+class DataLoader(BeeMaxin):
+    def __init__(self) -> None:
+        super().__init__()
+
+    def process(self):
+        files = []
+        for f in cf.io.walk('jsons/'):
+            files.append(f)
+        return files
+
+
+class ToCsv(BeeMaxin):
+    def to_csv(self, json_file: str):
+        texts, labels = [], []
+        with open(json_file, 'r') as f:
+            for line in f:
+                line = json.loads(line)
+                texts.append(line['text'])
+                _label = ' '.join(line['labels'])
+                labels.append(_label)
+        task_name = cf.io.basename(json_file).replace('.json', '')
+        return pd.DataFrame({'text': texts, 'labels': labels,
+                             'task_name': task_name})
+
+    def process(self, files: List[str]):
+        """ Merge all ner data into a train.csv
+        """
+        df = pd.DataFrame()
+        for f in files:
+            cf.info({
+                'message': f'processing {f}'
+            })
+            newdf = self.to_csv(f)
+            df = pd.concat([df, newdf], axis=0)
+        df.to_csv('train.csv', index=False)
+        df.sample(10).to_csv('dev.csv', index=False)
+
+
+if __name__ == '__main__':
+    pl = Pipeline(
+        [
+            ('dloader', DataLoader()),
+            ('csv converter', ToCsv())
+        ]
+    )
+    pl.gather()
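For reference, below is a minimal sketch of reading the CSVs that dev.py writes back in with the Hugging Face `datasets` library. The column names ('text', 'labels', 'task_name') follow the DataFrame built in `ToCsv.to_csv` above; the character-per-tag alignment is an assumption based on the README's BIO description.

```python
# Minimal sketch: load the train.csv / dev.csv produced by dev.py with the
# `datasets` library. Column names follow ToCsv.to_csv above.
from datasets import load_dataset

ds = load_dataset("csv", data_files={"train": "train.csv", "dev": "dev.csv"})

example = ds["train"][0]
chars = list(example["text"])         # BIO tags are character-level (see README)
tags = example["labels"].split(" ")   # labels were joined with spaces in to_csv
print(example["task_name"], list(zip(chars, tags))[:5])
```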
jsons/bank.json
ADDED
The diff for this file is too large to render.

jsons/boson.json
ADDED
The diff for this file is too large to render.

jsons/ccfbdci.json
ADDED
The diff for this file is too large to render.

jsons/ccks2017_task2.json
ADDED
The diff for this file is too large to render.

jsons/ccks2018_task1.json
ADDED
The diff for this file is too large to render.

jsons/ccks2019_task1.json
ADDED
The diff for this file is too large to render.

jsons/cluener2020.json
ADDED
The diff for this file is too large to render.

jsons/cmeee.json
ADDED
The diff for this file is too large to render.

jsons/dlner.json
ADDED
The diff for this file is too large to render.

jsons/ecommerce.json
ADDED
The diff for this file is too large to render.

jsons/finance_sina.json
ADDED
The diff for this file is too large to render.

jsons/fned.json
ADDED
The diff for this file is too large to render.

jsons/gaiic2022_task2.json
ADDED
The diff for this file is too large to render.

jsons/imcs21_task1.json
ADDED
The diff for this file is too large to render.

jsons/mmc.json
ADDED
The diff for this file is too large to render.

jsons/msra.json
ADDED
The diff for this file is too large to render.

jsons/nlpcc2018_task4.json
ADDED
The diff for this file is too large to render.

jsons/people_dairy_1998.json
ADDED
The diff for this file is too large to render.

jsons/people_dairy_2014.json
ADDED
Binary file (199 MB).

jsons/resume.json
ADDED
The diff for this file is too large to render.

jsons/wanchuang.json
ADDED
The diff for this file is too large to render.

jsons/weibo.json
ADDED
The diff for this file is too large to render.
train.csv
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d5e44c58a01a1460998da382b190c2433db00502fd941533f83ed2c8a51bc71
+size 227506134