Merge branch 'main' of https://huggingface.co/wangfan/jdt-fin-roberta-wwm into main

README.md
---
language: zh
tags:
- roberta-wwm
license: apache-2.0
datasets:
- finance
---

Pre-trained Language Models are being used more and more frequently across our business scenarios. To achieve better results on tasks in the financial domain, we release the jdt-fin-roberta-wwm model.
#### Model & Download
* `base` model: 12-layer, 768-hidden, 12-heads, 110M parameters

| Model | Download |
| :----: | :----: |
| fin-roberta-wwm | [Tensorflow](https://3.cn/103c-hwSS)/[Pytorch](https://3.cn/103c-izpe) |
| fin-roberta-wwm-large | todo |
#### Quick Load
With [Huggingface-Transformers](https://github.com/huggingface/transformers), the models above can be loaded easily.
```
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("MODEL_NAME")
model = BertModel.from_pretrained("MODEL_NAME")
```
| Model name | MODEL_NAME |
| - | - |
| fin-roberta-wwm | wangfan/jdt-fin-roberta-wwm |
| fin-roberta-wwm-large | todo |
#### Task Performance
| Task | NER | Relation Extraction | Event Extraction | Indicator Extraction | Entity Linking |
|:----:|:----:|:----:|:----:|:----:|:----:|
| Ours | 93.88 | 79.02 | 91.99 | 94.28 | 86.72 |
| Roberta-wwm | 93.47 | 76.99 | 91.58 | 93.98 | 85.20 |