---
language:
- de
tags:
- annif
- glam
- lam
pipeline_tag: text-classification
license: apache-2.0
dataset: "[published on Zenodo](https://zenodo.org/doi/10.5281/zenodo.13301020)"
library_name: annif
---

# Model Card for ark-omikuji-de-title-content

An [Annif](https://annif.org/) model, trained on historical titles and additional catalogue metadata for automatic subject indexing tasks. It classifies a given text into one or multiple subjects from the “Alter Realkatalog” ([ARK](https://staatsbibliothek-berlin.de/recherche/kataloge-der-staatsbibliothek/alter-realkatalog-und-historische-systematik)) classification system. The model was developed in the research project [Human.Machine.Culture](https://mmk.sbb.berlin/?lang=en) at Staatsbibliothek zu Berlin – Berlin State Library (SBB).

# Table of Contents

* [Model Card for ark-omikuji-de-title-content](#model-card-for-ark-omikuji-de-title-content)
* [Table of Contents](#table-of-contents)
* [Model Details](#model-details)
  * [Model Description](#model-description)
* [Uses](#uses)
  * [Direct Use](#direct-use)
  * [Downstream Use](#downstream-use)
  * [Out-of-Scope Use](#out-of-scope-use)
* [Bias, Risks, and Limitations](#bias-risks-and-limitations)
  * [Recommendations](#recommendations)
* [Training Details](#training-details)
  * [Training Data](#training-data)
  * [Training Procedure](#training-procedure)
    * [Preprocessing](#preprocessing)
    * [Speeds, Sizes, Times](#speeds-sizes-times)
    * [Training hyperparameters](#training-hyperparameters)
    * [Training results](#training-results)
* [Evaluation](#evaluation)
  * [Testing Data, Factors and Metrics](#testing-data-factors-and-metrics)
    * [Testing Data](#testing-data)
    * [Metrics](#metrics)
* [Environmental Impact](#environmental-impact)
* [Technical Specifications](#technical-specifications)
  * [Model Architecture and Objective](#model-architecture-and-objective)
  * [Software](#software)
* [Model Card Authors](#model-card-authors)
* [Model Card Contact](#model-card-contact)
* [How to Get Started with the Model](#how-to-get-started-with-the-model)

# Model Details

## Model Description

An [Annif](https://annif.org/) model, trained on historical titles and additional catalogue metadata for automatic subject indexing tasks. Subject indexing is a classical library task that aims at describing the content of a resource. The model is intended to automatically classify historical texts with a historical classification system developed in the 19th century, in order to enrich those texts that have not been classified manually so far. For each input text, the model outputs one or multiple subjects from the [ARK](https://staatsbibliothek-berlin.de/recherche/kataloge-der-staatsbibliothek/alter-realkatalog-und-historische-systematik) classification system. It is part of a collection of five models created with the Annif toolkit to address this task of automated subject indexing.

* **Developed by:** [Sophie Schneider](mailto:sophie.schneider@sbb.spk-berlin.de)
* **Shared by:** [Staatsbibliothek zu Berlin – Berlin State Library](https://huggingface.co/SBB)
* **Model type:** tree-based
* **Language(s) (NLP):** de (German)
* **License:** apache-2.0

# Uses

## Direct Use

This model can be used directly to automatically classify historical texts with the ARK classification scheme. It is intended to be used together with the Annif automated subject indexing toolkit, versions 0.60.0 to 1.1.0.

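Once Annif and the model are set up (see the section on how to get started below), subjects can be suggested from the command line. A minimal sketch; the title string is only an illustrative example, not from the training data:

```shell
# Suggest up to 5 ARK classes for a historical title (example input),
# reading the text from standard input:
echo "Beschreibung der Stadt Berlin und ihrer Merkwürdigkeiten" \
  | annif suggest --limit 5 ark-omikuji-de-title-content
```

The output lists suggested subject URIs with their labels and confidence scores, one per line.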
## Downstream Use

Other or downstream uses outside of the Annif setting described above are not intended, but also not excluded.

## Out-of-Scope Use

The model is not intended for use on contemporary texts: language and concept drift will probably degrade the results, and some terms from the vocabulary are not appropriate for more recent publications.

# Bias, Risks, and Limitations

Since we are dealing with historical texts and especially with a historical classification system such as the ARK, the classes suggested for an input text might not be suitable for today’s understanding, or might even be of an unethical nature (for more information, see also the [datasheet accompanying the Metadata of the “Alter Realkatalog” (ARK) of Berlin State Library](https://zenodo.org/doi/10.5281/zenodo.12783813) and the [Datasheet for Machine-Readable Vocabulary Files of the ARK (Alter Realkatalog)](https://zenodo.org/doi/10.5281/zenodo.13301020)).

Another limitation of using the ARK as a vocabulary arises from its hierarchical structure: the system contains multiple classes that do not describe the same content (i.e. they have different IDs) but are labeled identically (same name). This is because manually inspecting the whole path to a class, including all the upper-level classes leading to it, delivers additional information that allows for contextualization. As duplicate label names turned out to be, as expected, a challenge for lexical methods, we decided to focus on statistical rather than lexical algorithms.

## Recommendations

Considering that the ARK classification scheme consists of 225,691 classes in total, that only limited training material is at hand, and that the distribution of classes is overall unbalanced, this task can be described as an Extreme Multi-Label Classification (XMC) problem. We recommend being aware of this limitation and, if additional training data are available, using them to improve the current model’s performance (e.g. by running `annif learn`, see the [CLI commands documentation](https://annif.readthedocs.io/en/v1.1.0/source/commands.html#annif-learn)).

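Where such additional gold-standard data exist, the update described above could be attempted as follows. A sketch only: the corpus path is a placeholder, and note that `annif learn` requires a backend with online-learning support, so check whether your Annif version supports it for this project before relying on it:

```shell
# Incrementally update the model with new labelled documents,
# assumed here to be in an Annif-compatible corpus format
# (the path is hypothetical):
annif learn ark-omikuji-de-title-content /path/to/new-training-data.tsv
```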
# Training Details

## Training Data

The training data include a selection of metadata fields that were obtained via a CBS export:

* Lehmann, J., & Schneider, S. (2024). Metadata of the "Alter Realkatalog" (ARK) of Berlin State Library (SBB) (Version 1) [Data set]. Zenodo. [https://doi.org/10.5281/zenodo.12783813](https://doi.org/10.5281/zenodo.12783813)

The following title and content data fields have been extracted and combined from this dataset:

* "Abweichender Titel" ([4212](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=4212&katalog=Standard))
* "Abweichender Titel (Sucheinstieg)" ([3260](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=3260&katalog=Standard))
* "Ansetzungssachtitel" ([3220](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=3220&katalog=Standard))
* "Einheitssachtitel des beigefügten oder kommentierten Werkes" ([4210](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=4210&katalog=Standard))
* "Frühere/frühester Haupttitel (nur für fortlaufende und integrierende Ressourcen)" ([4213](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=4213&katalog=Standard))
* "Gesamttitel der Reproduktion" ([4110](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=4110&katalog=Standard))
* "Gesamttitel der fortlaufenden Ressource" ([4170](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=4170&katalog=Standard))
* "Gesamttitel der mehrteiligen Monografie" ([4150](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=4150&katalog=Standard))
* "Haupttitel, Titelzusatz, Verantwortlichkeitsangabe" ([4000](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=4000&katalog=Standard))
* "Normierter Zeitschriftenkurztitel" ([3232](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=3232&katalog=Standard))
* "Paralleltitel, paralleler Titelzusatz, parallele Verantwortlichkeitsangabe" ([4002](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=4002&katalog=Standard))
* "Titelkonkordanzen" ([4245](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=4245&katalog=Standard))
* "Titelzusätze und Verantwortlichkeitsangabe zur gesamten Vorlage" ([4011](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=4011&katalog=Standard))
* "Weitere Titel etc. bei Zusammenstellungen" ([4010](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=4010&katalog=Standard))
* "Weiterer Werktitel und sonstige unterscheidende Merkmale" ([3211](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=3211&katalog=Standard))
* "Werktitel und sonstige unterscheidende Merkmale des Werks" ([3210](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=3210&katalog=Standard))
* "Zusätzliche Sucheinstiege" ([4200](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=4200&katalog=Standard))
* "Veröffentlichungsart und Inhalt" ([1140](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=1140&katalog=Standard))
* "Sonstige Anmerkungen" ([4201](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=4201&katalog=Standard))
* "Zusammenfassende Register" ([4203](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=4203&katalog=Standard))
* "Inhaltliche Zusammenfassung" ([4207](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=4207&katalog=Standard) or [9000](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=9000&katalog=Standard))
* "Einleitender Text" ([7124](https://swbtools.bsz-bw.de/cgi-bin/k10plushelp.pl?cmd=kat&val=7124&katalog=Standard))

The vocabulary files themselves can be found here:

* Schneider, S., & Lehmann, J. (2024). Machine-Readable Vocabulary Files of the "Alter Realkatalog" (ARK) of Berlin State Library (SBB) [Data set]. Zenodo. [https://doi.org/10.5281/zenodo.13301020](https://doi.org/10.5281/zenodo.13301020)

## Training Procedure

The training procedure consists of loading the ARK vocabulary (see the [Datasheet for Machine-Readable Vocabulary Files of the ARK (Alter Realkatalog)](https://zenodo.org/doi/10.5281/zenodo.13301020)) into Annif and training the [Omikuji backend](https://github.com/NatLibFi/Annif/wiki/Backend%3A-Omikuji) on our training data. Further technical details can be found in the section [Training hyperparameters](#training-hyperparameters).

### Preprocessing

Besides merging and transforming the data described under [Training Data](#training-data) to fit the [corpus formats](https://github.com/NatLibFi/Annif/wiki/Corpus-formats) accepted by Annif, no further preprocessing of the natural language input has been performed.

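The merging step can be sketched roughly as follows. This is an illustrative snippet, not the actual conversion script: field contents and subject URIs are invented examples, and the output follows Annif's short-text TSV corpus convention (text, a tab, then space-separated angle-bracketed subject URIs):

```python
# Combine extracted catalogue fields for one record into a single text
# and emit an Annif short-text TSV corpus line.
# Field values and URIs below are hypothetical examples.

def record_to_tsv_line(fields, subject_uris):
    """Join non-empty field values and append the subject URIs, tab-separated."""
    text = " ".join(f.strip() for f in fields if f and f.strip())
    uris = " ".join(f"<{uri}>" for uri in subject_uris)
    return f"{text}\t{uris}"

# One example record: two title fields and two (invented) ARK subject URIs.
fields = ["Beschreibung der Stadt Berlin", "Ein historischer Abriss"]
uris = ["https://example.org/ark/B123", "https://example.org/ark/B456"]
print(record_to_tsv_line(fields, uris))
```

Writing one such line per record yields a corpus file that `annif train` accepts directly.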
### Speeds, Sizes, Times

Training takes from several minutes to a few hours on a V100 GPU, depending on the choice of dataset and algorithm as well as on the hyperparameter settings.

### Training hyperparameters

For some of the ARK Annif models, a light hyperparameter optimization was conducted to identify the final hyperparameter settings stated below.

Hyperparameter configuration (as it needs to be stated in the Annif `projects.cfg` file):

```
[ark-omikuji-de-title-content]
name=ARK-DE-18 Omikuji
language=de
backend=omikuji
analyzer=snowball(german)
vocab=arktsv
cluster_k=2
collapse_every_n_layers=5
```

149
+
150
+ ### Training results
151
+
152
+ * Precision (`--limit` 1, `--threshold` 0): 0.4861
153
+ * Recall (`--limit` 1, `--threshold` 0): 0.4587
154
+ * F1 (`--limit` 1, `--threshold` 0): 0.4675
155
+ * NDCG (`--limit` 1, `--threshold` 0): 0.4683
156
+ * F1@5: 0.2258
157
+ * NDCG@5: 0.5703
158
+
# Evaluation

## Testing Data, Factors and Metrics

### Testing Data

The dataset is described under [Training Data](#training-data). It was split into smaller subsets used for training, testing and validation (80%/10%/10% split).

### Metrics

Model performance has been evaluated with the following metrics: Precision, Recall, F1 and NDCG. These are standard metrics for machine learning and, more specifically, for automatic subject indexing tasks, and they are provided directly in Annif by running the `annif eval` command. The evaluation parameters (`--limit` = maximum number of results to return; `--threshold` = minimum confidence for a suggestion to be considered) were optimized beforehand on the validation dataset and affect the results accordingly. We also state F1@5 and NDCG@5 scores reached without any evaluation parameters.

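A typical invocation of such an evaluation looks like the following sketch; the test-corpus path is a placeholder:

```shell
# Evaluate the model on a held-out test corpus (path is hypothetical),
# keeping at most one suggestion per document and no confidence cutoff:
annif eval ark-omikuji-de-title-content --limit 1 --threshold 0.0 \
  /path/to/test-corpus.tsv
```

Annif then prints the per-metric scores (Precision, Recall, F1, NDCG and others) for the whole corpus.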
# Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

* **Hardware Type:** V100
* **Hours used:** 0.5 to 5 hours
* **Cloud Provider:** None
* **Compute Region:** Germany

# Technical Specifications

## Model Architecture and Objective

See the [Annif](https://github.com/NatLibFi/Annif) and [Omikuji](https://github.com/tomtung/omikuji) repositories on GitHub. Omikuji is an implementation of Partitioned Label Trees (Prabhu et al., 2018):

* Y. Prabhu, A. Kag, S. Harsola, R. Agrawal, and M. Varma, “Parabel: Partitioned Label Trees for Extreme Classification with Application to Dynamic Search Advertising,” in Proceedings of the 2018 World Wide Web Conference, 2018, pp. 993–1002.

## Software

To run this model, Annif must be installed (version 0.60.0 up to and including 1.1.0).

# Model Card Authors

[Sophie Schneider](mailto:sophie.schneider@sbb.spk-berlin.de) and [Jörg Lehmann](mailto:joerg.lehmann@sbb.spk-berlin.de)

# Model Card Contact

Questions and comments about the model can be directed to Sophie Schneider at sophie.schneider@sbb.spk-berlin.de; questions and comments about the model card can be directed to Jörg Lehmann at joerg.lehmann@sbb.spk-berlin.de.

# How to Get Started with the Model

Follow the Annif [Getting Started](https://github.com/NatLibFi/Annif/wiki/Getting-started) page to set up and run Annif. Create a `projects.cfg` file (see the section [Training hyperparameters](#training-hyperparameters) for the specific project configuration), load the ARK vocabulary (see the [Datasheet for Machine-Readable Vocabulary Files of the ARK (Alter Realkatalog)](https://zenodo.org/doi/10.5281/zenodo.13301020)) via the `annif load-vocab` command, and copy the model folder over to `data/projects`.
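A sketch of these steps as shell commands; the vocabulary file name and the local model path are placeholders, while the vocabulary id `arktsv` matches the project configuration stated in the Training hyperparameters section:

```shell
# Load the ARK vocabulary into Annif
# (vocabulary file name is a placeholder):
annif load-vocab arktsv ark-vocabulary.ttl

# Copy the downloaded model folder into Annif's data directory
# (source path is a placeholder):
cp -r ./ark-omikuji-de-title-content data/projects/

# Verify that the project is now visible to Annif:
annif list-projects
```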