
🔥 Classifiers of FinTOC 2022 Shared task winners (ISPRAS team) 🔥

Classifiers of textual lines of English, French, and Spanish financial prospectuses in PDF format for the FinTOC 2022 Shared task.

🤗 Source code 🤗

Training scripts are available in the repository https://github.com/ispras/dedoc/ (see the scripts/fintoc2022 directory).

🤗 Task description 🤗

Lines are classified in two stages:

  1. Binary classification title/not title (title detection task).
  2. Classification of title lines into title depth classes (TOC generation task).

There are two types of classifiers according to the stage:

  1. For the first stage, binary classifiers are trained. They return boolean values: True for title lines and False for non-title lines.
  2. For the second stage, target classifiers are trained. They return integer title depth classes from 1 to 6. More important (higher-level) titles have a smaller depth.
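The two-stage pipeline above can be sketched as follows. This is a minimal illustration, not the trained models: the `Line` record, the font-size rules, and all thresholds are hypothetical stand-ins for the real ML classifiers, which operate on line features extracted by dedoc.

```python
from dataclasses import dataclass

@dataclass
class Line:
    """Hypothetical document line; real features come from dedoc's feature extractors."""
    text: str
    font_size: float

def is_title(line: Line) -> bool:
    # Stage 1 (binary classifier), mocked with a font-size rule.
    return line.font_size > 12

def title_depth(line: Line) -> int:
    # Stage 2 (target classifier), mocked: larger font -> smaller depth, clamped to 1..6.
    return max(1, min(6, int(20 - line.font_size)))

def build_toc(lines: list[Line]) -> list[tuple[int, str]]:
    toc = []
    for line in lines:
        if is_title(line):             # stage 1: title detection
            depth = title_depth(line)  # stage 2: depth assignment
            toc.append((depth, line.text))
    return toc
```

For example, `build_toc([Line("1. Overview", 16.0), Line("body text", 10.0)])` keeps only the title line and assigns it a depth class.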

🤗 Results evaluation 🤗

The training dataset contains English, French, and Spanish documents, so three language categories are available ("en", "fr", "sp"). To obtain document lines, we use the dedoc library (dedoc.readers.PdfTabbyReader, dedoc.readers.PdfTxtlayerReader), so two reader categories are available ("tabby", "txt_layer").

To obtain the FinTOC structure, we use the method described in our article (winners of the FinTOC 2022 Shared task!). The results of our method (3-fold cross-validation on the FinTOC 2022 training dataset) for different languages and readers are given in the table below (they have changed slightly since the competition ended). As in the FinTOC 2022 Shared task, we use two metrics for evaluation (metrics from the article): TD, the F1 measure for the title detection task, and TOC, the harmonic mean of the Inex F1 score and the Inex level accuracy for the TOC generation task.

|              | TD 0     | TD 1     | TD 2     | TD mean  | TOC 0 | TOC 1 | TOC 2 | TOC mean  |
|--------------|----------|----------|----------|----------|-------|-------|-------|-----------|
| en_tabby     | 0.811522 | 0.833798 | 0.864239 | 0.836520 | 56.5  | 58.0  | 64.9  | 59.800000 |
| en_txt_layer | 0.821360 | 0.853258 | 0.833623 | 0.836081 | 57.8  | 62.1  | 57.8  | 59.233333 |
| fr_tabby     | 0.753409 | 0.744232 | 0.782169 | 0.759937 | 51.2  | 47.9  | 51.5  | 50.200000 |
| fr_txt_layer | 0.740530 | 0.794460 | 0.766059 | 0.767016 | 45.6  | 52.2  | 50.1  | 49.300000 |
| sp_tabby     | 0.606718 | 0.622839 | 0.599094 | 0.609550 | 37.1  | 43.6  | 43.4  | 41.366667 |
| sp_txt_layer | 0.629052 | 0.667976 | 0.446827 | 0.581285 | 46.4  | 48.8  | 30.7  | 41.966667 |
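The TOC metric, a harmonic mean of two scores, can be computed as in this small sketch (the function name is ours; both inputs are assumed to be percentages):

```python
def toc_score(inex_f1: float, level_acc: float) -> float:
    """Harmonic mean of the Inex F1 score and the Inex level accuracy (in percent)."""
    if inex_f1 + level_acc == 0:
        return 0.0
    return 2 * inex_f1 * level_acc / (inex_f1 + level_acc)
```

The harmonic mean rewards methods that are strong on both components: a low value in either one pulls the combined score down more than an arithmetic mean would.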

🤗 See also 🤗

Please see our article "ISPRAS@FinTOC-2022 Shared Task: Two-stage TOC Generation Model" for more information about the FinTOC 2022 Shared task and our method of solving it. We will be grateful if you cite our work (see the citation in BibTeX format below).

@inproceedings{bogatenkova-etal-2022-ispras,
    title = "{ISPRAS}@{F}in{TOC}-2022 Shared Task: Two-stage {TOC} Generation Model",
    author = "Bogatenkova, Anastasiia  and
      Belyaeva, Oksana Vladimirovna  and
      Perminov, Andrew Igorevich  and
      Kozlov, Ilya Sergeevich",
    editor = "El-Haj, Mahmoud  and
      Rayson, Paul  and
      Zmandar, Nadhem",
    booktitle = "Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.fnp-1.13",
    pages = "89--94"
}