# BERT small Japanese finance

This is a BERT model pretrained on texts in the Japanese language.

The code used for pretraining is available at [retarfi/language-pretraining](https://github.com/retarfi/language-pretraining).

## Model architecture

The model architecture is the same as that of BERT small in the original ELECTRA paper: 12 layers, 256-dimensional hidden states, and 4 attention heads.
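
As a quick sanity check, these dimensions can be read from the published model configuration. The snippet below is a minimal sketch, assuming the `transformers` library and access to the Hugging Face Hub; the printed values are the ones expected from the description above.

```python
from transformers import AutoConfig

# Load the published configuration and inspect the architecture parameters.
config = AutoConfig.from_pretrained("izumi-lab/bert-small-japanese-fin")

print(config.num_hidden_layers)    # expected: 12
print(config.hidden_size)          # expected: 256
print(config.num_attention_heads)  # expected: 4
```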

## Training Data

The models are trained on a Wikipedia corpus and a financial corpus.

The Wikipedia corpus is generated from the Japanese Wikipedia dump file as of June 1, 2021.

The corpus file is 2.9GB, consisting of approximately 20M sentences.

The financial corpus consists of two corpora:

- Summaries of financial results from October 9, 2012, to December 31, 2020
- Securities reports from February 8, 2018, to December 31, 2020

The financial corpus file is 5.2GB, consisting of approximately 27M sentences.

## Tokenization

The texts are first tokenized by MeCab with the IPA dictionary and then split into subwords by the WordPiece algorithm.

The vocabulary size is 32768.
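
The snippet below is a minimal sketch of loading this tokenizer with `transformers`. It assumes the `fugashi` and `ipadic` packages are installed (required for MeCab-based Japanese tokenization in `transformers`); the example sentence is illustrative only.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("izumi-lab/bert-small-japanese-fin")

# Example sentence (illustrative): "The summary of financial results was published."
text = "決算短信を公表した。"

# MeCab word segmentation followed by WordPiece subword splitting.
print(tokenizer.tokenize(text))
print(tokenizer.vocab_size)  # expected: 32768
```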

## Training

The models are trained with the same configuration as BERT small in the original ELECTRA paper: 128 tokens per instance, 128 instances per batch, and 1.45M training steps.
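
As a rough back-of-the-envelope check of the scale these numbers imply, assuming every instance is filled to the full 128 tokens:

```python
# Approximate number of tokens processed during pretraining,
# assuming each instance is filled to the maximum sequence length.
tokens_per_instance = 128
instances_per_batch = 128
training_steps = 1_450_000

total_tokens = tokens_per_instance * instances_per_batch * training_steps
print(f"{total_tokens:,}")  # 23,756,800,000 -> roughly 2.4e10 tokens
```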

## Citation

A dedicated paper for this pretrained model is forthcoming. Please check back here before citing.

```bibtex
@inproceedings{suzuki2021fin-bert-electra,
  title={金融文書を用いた事前学習言語モデルの構築と検証},
  % title={Construction and Validation of a Pre-Trained Language Model Using Financial Documents},
  author={鈴木 雅弘 and 坂地 泰紀 and 平野 正徳 and 和泉 潔},
  % author={Masahiro Suzuki and Hiroki Sakaji and Masanori Hirano and Kiyoshi Izumi},
  booktitle={人工知能学会第27回金融情報学研究会(SIG-FIN)},
  % booktitle={Proceedings of JSAI Special Interest Group on Financial Informatics (SIG-FIN) 27},
  pages={5-10},
  year={2021}
}
```


## Acknowledgments

This work was supported by JSPS KAKENHI Grant Number JP21K12010.

The mask token is `[MASK]`.
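
Below is a minimal fill-mask sketch using this mask token, assuming `transformers` with `fugashi` and `ipadic` installed; the example sentence is illustrative only, not taken from the training corpus.

```python
from transformers import pipeline

# Fill-mask demo; example sentence (illustrative): "Today's stock price [MASK]."
fill_mask = pipeline("fill-mask", model="izumi-lab/bert-small-japanese-fin")

for prediction in fill_mask("今日の株価は[MASK]した。"):
    print(prediction["token_str"], prediction["score"])
```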