---
language:
- ko
---

# KR-FinBert & KR-FinBert-SC

Much progress has been made in the NLP (Natural Language Processing) field, with numerous studies showing that domain adaptation using a small-scale corpus and fine-tuning with labeled data are effective for overall performance improvement.
We propose KR-FinBert for the financial domain, built by further pre-training KR-BERT-MEDIUM on a financial corpus, and KR-FinBert-SC, a sentiment classification model fine-tuned from it. As many prior studies have shown, the performance gains from domain adaptation and downstream fine-tuning were also clear in our experiments.

![KR-FinBert](https://huggingface.co/snunlp/KR-FinBert/resolve/main/images/KR-FinBert.png)

## Data

The training data for this model is expanded from that of **KR-BERT-MEDIUM**: texts from Korean Wikipedia, general news articles, legal texts crawled from the National Law Information Center, and the [Korean Comments dataset](https://www.kaggle.com/junbumlee/kcbert-pretraining-corpus-korean-news-comments). For transfer learning, **corporate-related economic news articles from 72 media sources**, such as the Financial Times and The Korean Economy Daily, and **analyst reports from 16 securities companies**, such as Kiwoom Securities and Samsung Securities, were added. The added data comprises 440,067 news titles with their content and 11,237 analyst reports. **The total data size is about 13.22 GB.** For MLM (masked language modeling) training, we split the data line by line, for **a total of 6,379,315 lines.**
KR-FinBert was trained for 5.5M steps with a maximum sequence length of 512, a training batch size of 32, and a learning rate of 5e-5; training took 67.48 hours on an NVIDIA TITAN Xp GPU.
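
The snippet below is a minimal sketch of this further pre-training (MLM) step using the Hugging Face `transformers` Trainer. The base checkpoint id (`snunlp/KR-Medium` for KR-BERT-MEDIUM), the corpus file name, and the 15% masking rate are illustrative assumptions; the step count, sequence length, batch size, and learning rate follow the figures above.

```python
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    LineByLineTextDataset,
    Trainer,
    TrainingArguments,
)

# Assumed Hugging Face id for the KR-BERT-MEDIUM base checkpoint.
base = "snunlp/KR-Medium"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForMaskedLM.from_pretrained(base)

# One example per line, matching the line-split corpus described above.
# "financial_corpus.txt" is a hypothetical path to the 13.22 GB corpus.
dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path="financial_corpus.txt",
    block_size=512,  # maximum sequence length used for KR-FinBert
)

# Standard BERT-style masked language modeling (15% masking is an assumption).
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

args = TrainingArguments(
    output_dir="kr-finbert-mlm",
    max_steps=5_500_000,             # 5.5M steps
    per_device_train_batch_size=32,  # batch size 32
    learning_rate=5e-5,
)

Trainer(
    model=model,
    args=args,
    data_collator=collator,
    train_dataset=dataset,
).train()
```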

## Downstream tasks

### Sentiment Classification model

Downstream task performance with 50,000 labeled examples.

|Model|Accuracy|
|-|-|
|KR-FinBert|0.963|
|KR-BERT-MEDIUM|0.958|
|KcBert-large|0.955|
|KcBert-base|0.953|
|KoBert|0.817|
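
The sketch below shows one way to reproduce this kind of fine-tuning with the `transformers` Trainer. The CSV file name, its "text"/"label" columns, the binary label set, and the epoch count are assumptions for illustration; only the KR-FinBert checkpoint comes from this card.

```python
import numpy as np
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

checkpoint = "snunlp/KR-FinBert"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Hypothetical CSV of labeled headlines with "text" and "label" (0/1) columns.
splits = load_dataset("csv", data_files="finance_sentiment.csv")["train"].train_test_split(test_size=0.1)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

splits = splits.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

args = TrainingArguments(
    output_dir="kr-finbert-sc",
    num_train_epochs=3,  # assumed; not specified in the card
    per_device_train_batch_size=32,
    learning_rate=5e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=splits["train"],
    eval_dataset=splits["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())  # reports accuracy, comparable to the table above
```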

### Inference sample

Example classifications of Korean financial news headlines (translated into English here):

|Positive|Negative|
|-|-|
|Hyundai Bioscience surges 19% on hopes that 'Polytaxel' can treat COVID-19|Cinema stocks: when will the 'COVID ice age' end? "CJ CGV could lose KRW 400 billion this year"|
|Isu Chemical posts Q3 operating profit of KRW 17.6 billion, up 80% YoY|Flights grounded by the COVID shock: Korean Air posts a KRW 56.6 billion operating loss in Q1|
|"GKL expected to post double-digit revenue growth for the first time in seven years"|Chairman Choi Shin-won arrested over KRW 100 billion embezzlement and breach of trust; SK Networks vows to "prevent a management vacuum"|
|WYSIWYG Studios tops KRW 100 billion in revenue for the first time on strong content sales|Kia Motors' Gwangju plant fully halts operations on parts supply disruption|
|Samsung Electronics reclaims No. 1 share of the Indian smartphone market after two years|Hyundai Steel's operating profit fell 67.7% YoY to KRW 331.3 billion last year|
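
Classifications like those above can be reproduced with the `transformers` pipeline API. This is a minimal sketch, assuming the fine-tuned checkpoint is published as `snunlp/KR-FinBert-SC`; the label strings returned depend on that checkpoint's config.

```python
from transformers import pipeline

# Assumed id of the fine-tuned sentiment classification checkpoint.
classifier = pipeline("text-classification", model="snunlp/KR-FinBert-SC")

# Korean headline: "Samsung Electronics reclaims No. 1 share of the Indian
# smartphone market after two years" (a positive example from the table above).
print(classifier("삼성전자, 2년 만에 인도 스마트폰 시장 점유율 1위 '왕좌 탈환'"))
```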