---
language:
- lat
- fra
- spa
- multilingual
license: cc-by-nc-4.0
tags:
- text
- named entity recognition
- roberta
- historical languages

metrics:
- precision
- recall

model-index:
- name: roberta-multilingual-medieval-ner
  results:
  - task:
      type: token-classification
    metrics:
      - type: precision
        value: 98.01
      - type: recall
        value: 97.08

inference:
  parameters:
    aggregation_strategy: 'simple'

widget:
- text: "In nomine sanctæ et individuæ Trinitatis. Ego Guido, Dei gratia Cathalaunensis episcopus, propter inevitabilem temporum mutationem et casum decedentium quotidie personarum, necesse habemus litteris annotare quod dampnosa delere non possit oblivio. Eapropter notum fieri volumus tam futuris quam presentibus quod, pro remedio animæ meæ et predecessorum nostrorum, abbati et fratribus de Insula altare de Hattunmaisnil dedimus et perpetuo habendum concessimus, salvis custumiis nostris et archidiaconi loci illius. Ne hoc ergo malignorum hominum perversitate aut temporis alteratur incommodo presentem paginam sigilli nostri impressione firmavimus, testibus subnotatis : S. Raynardy capellani, Roberti Armensis, Mathei de Waisseio, Michaeli decani, Hugonis de Monasterio, Hervaudi de Panceio. Data per manum Gerardi cancellarii, anno ab incarnatione Domini millesimo centesimo septuagesimo octavo. "

 
---

## Model Details

This is a fine-tuned version of the multilingual RoBERTa (XLM-RoBERTa) model on medieval charters. The model recognizes locations and persons in medieval texts, in both a flat and a nested manner. The training dataset comprises 8k annotated charters in medieval Latin, French, and Spanish, covering the 11th to the 15th century.


### How to Get Started with the Model
The model can be used directly through the `transformers` token-classification pipeline:

```python 
from transformers import pipeline

pipe = pipeline("token-classification", model="magistermilitum/roberta-multilingual-medieval-ner")

list_of_sentences = ["Ego Radulfus de Francorvilla miles ..."]  # your input texts

results = list(map(pipe, list_of_sentences))
# Keep only the label, surface form and character offsets of each subword prediction.
results = [[[y["entity"], y["word"], y["start"], y["end"]] for y in x] for x in results]
print(results)
```
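
Since the card's `inference` settings use `aggregation_strategy: 'simple'`, you can alternatively let the pipeline merge subword pieces into whole entity spans itself; a minimal sketch (the example sentence is illustrative):

```python 
from transformers import pipeline

pipe = pipeline(
    "token-classification",
    model="magistermilitum/roberta-multilingual-medieval-ner",
    aggregation_strategy="simple",  # merge contiguous subwords into entity spans
)

for ent in pipe("Ego Radulfus de Francorvilla miles"):
    # each prediction dict carries: entity_group, score, word, start, end
    print(ent["entity_group"], ent["word"], ent["start"], ent["end"])
```

Note that the `TextProcessor` snippet below performs this merging manually, which is what preserves the two-column (person/location) nested format.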


### Model Description

The following snippet transforms the model's inferences into CoNLL-style output using BIO tags.

```python 
import nltk  # requires the "punkt" tokenizer models: nltk.download("punkt")

class TextProcessor:
    def __init__(self, filename):
        self.filename = filename
        self.sent_detector = nltk.data.load("tokenizers/punkt/english.pickle") #sentence tokenizer
        self.sentences = []
        self.new_sentences = []
        self.results = []
        self.new_sentences_token_info = []
        self.new_sentences_bio = []
        self.BIO_TAGS = []
        self.stripped_BIO_TAGS = []

    def read_file(self):
        #Reading a txt file with one document per line.
        with open(self.filename, 'r', encoding='utf-8') as f:
            text = f.read()
        self.sentences = self.sent_detector.tokenize(text.strip())

    def process_sentences(self): # Split long texts: the encoder has a 256-token max length; sentences of fewer than 40 words are merged into the previous one.
        for sentence in self.sentences:
            if len(sentence.split()) < 40 and self.new_sentences:
                self.new_sentences[-1] += " " + sentence
            else:
                self.new_sentences.append(sentence)

    def apply_model(self, pipe):
        self.results = list(map(pipe, self.new_sentences))
        self.results=[[[y["entity"],y["word"], y["start"], y["end"]] for y in x] for x in self.results]

    def tokenize_sentences(self):
        for n_s in self.new_sentences:
            tokens=n_s.split() # Basic tokenization
            token_info = []

            # Initialize a variable to keep track of character index
            char_index = 0
            # Iterate through the tokens and record start and end info
            for token in tokens:
                start = char_index
                end = char_index + len(token)  # exclusive end offset of the token
                token_info.append((token, start, end))

                char_index += len(token) + 1  # Add 1 for the whitespace
            self.new_sentences_token_info.append(token_info)

    def process_results(self): # Merge subword pieces back into whole words and their BIO tags
        for result in self.results:
            merged_bio_result = []
            current_word = ""
            current_label = None
            current_start = None
            current_end = None
            for entity, subword, start, end in result:
                if subword.startswith("▁"): # "▁" marks the start of a new word in the SentencePiece tokenizer
                    subword = subword[1:]
                    merged_bio_result.append([current_word, current_label, current_start, current_end])
                    current_word = "" ; current_label = None ; current_start = None ; current_end = None
                if current_start is None:
                    current_word = subword ; current_label = entity ; current_start = start+1 ; current_end= end
                else:
                    current_word += subword ; current_end = end
            if current_word:
                merged_bio_result.append([current_word, current_label, current_start, current_end])
            self.new_sentences_bio.append(merged_bio_result[1:])

    def match_tokens_with_entities(self): #match BIO tags with tokens
        for i,ss in enumerate(self.new_sentences_token_info):
            for word in ss:
                for ent in self.new_sentences_bio[i]:
                    if word[1]==ent[2]:
                        if ent[1]=="L-PERS":
                            self.BIO_TAGS.append([word[0], "I-PERS", "B-LOC"])
                            break
                        else:
                            if "LOC" in ent[1]:
                                self.BIO_TAGS.append([word[0], "O", ent[1]])
                            else:
                                self.BIO_TAGS.append([word[0], ent[1], "O"])
                            break
                else:
                    self.BIO_TAGS.append([word[0], "O", "O"])

    def separate_dots_and_comma(self): # Optional: split trailing punctuation into a separate token
        signs=[",", ";", ":", "."]
        for bio in self.BIO_TAGS:
            if any(bio[0][-1]==sign for sign in signs) and len(bio[0])>1:
                self.stripped_BIO_TAGS.append([bio[0][:-1], bio[1], bio[2]]); 
                self.stripped_BIO_TAGS.append([bio[0][-1], "O", "O"])
            else:
                self.stripped_BIO_TAGS.append(bio)

    def save_BIO(self):
        with open('output_BIO_a.txt', 'w', encoding='utf-8') as output_file:
            output_file.write("TOKEN\tPERS\tLOCS\n"+"\n".join(["\t".join(x) for x in self.stripped_BIO_TAGS]))

# Usage (re-using the `pipe` created in the pipeline snippet above):
processor = TextProcessor('my_docs_file.txt')
processor.read_file()
processor.process_sentences()
processor.apply_model(pipe)
processor.tokenize_sentences()
processor.process_results()
processor.match_tokens_with_entities()
processor.separate_dots_and_comma()
processor.save_BIO()
```
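
After running the snippet, `output_BIO_a.txt` holds one token per line in three tab-separated columns: TOKEN, PERS, LOCS.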

- **Developed by:** Sergio Torres Aguilar
- **Model type:** XLM-RoBERTa (token classification)
- **Language(s) (NLP):** Medieval Latin, Spanish, French
- **Finetuned from model:** xlm-roberta-large

### Direct Use

A sentence such as: "Ego Radulfus de Francorvilla miles, notum facio tam presentibus quam futuris quod, cum Guillelmo Bateste militi de Miliaco"

will be annotated in BIO format as follows (one token per line, with separate person and location columns so that nested entities can be encoded):

```python 
('Ego', 'O', 'O')
('Radulfus', 'B-PERS', 'O')
('de', 'I-PERS', 'O')
('Francorvilla', 'I-PERS', 'B-LOC')
('miles', 'O', 'O')
(',', 'O', 'O')
('notum', 'O', 'O')
('facio', 'O', 'O')
('tam', 'O', 'O')
('presentibus', 'O', 'O')
('quam', 'O', 'O')
('futuris', 'O', 'O')
('quod', 'O', 'O')
(',', 'O', 'O')
('cum', 'O', 'O')
('Guillelmo', 'B-PERS', 'O')
('Bateste', 'I-PERS', 'O')
('militi', 'O', 'O')
('de', 'O', 'O')
('Miliaco', 'O', 'B-LOC')
```

### Training Procedure 

The model was fine-tuned for 5 epochs from the XLM-RoBERTa-Large checkpoint, with a learning rate of 5e-5 and a batch size of 16.
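
For reference, a minimal fine-tuning sketch with these hyperparameters (the label set and the dummy training example below are placeholders; the annotated charter corpus is not bundled with this card):

```python 
import torch
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

labels = ["O", "B-PERS", "I-PERS", "L-PERS", "B-LOC", "I-LOC"]  # placeholder label set

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large")
model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-large", num_labels=len(labels)
)

# Tiny dummy dataset so the sketch runs end-to-end; replace with the real
# tokenized charter corpus (one label id per subword token).
enc = tokenizer(["Ego Radulfus de Francorvilla miles"], truncation=True, max_length=256)
enc["labels"] = [[0] * len(enc["input_ids"][0])]

class CharterDataset(torch.utils.data.Dataset):  # placeholder dataset wrapper
    def __len__(self):
        return 1
    def __getitem__(self, i):
        return {k: torch.tensor(v[i]) for k, v in enc.items()}

args = TrainingArguments(
    output_dir="roberta-multilingual-medieval-ner",
    num_train_epochs=5,               # reported: 5 epochs
    learning_rate=5e-5,               # reported: 5e-5
    per_device_train_batch_size=16,   # reported: batch size 16
)

Trainer(model=model, args=args, train_dataset=CharterDataset()).train()
```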


**BibTeX:**
```bibtex
@inproceedings{aguilar2022multilingual,
  title={Multilingual Named Entity Recognition for Medieval Charters Using Stacked Embeddings and Bert-based Models.},
  author={Aguilar, Sergio Torres},
  booktitle={Proceedings of the second workshop on language technologies for historical and ancient languages},
  pages={119--128},
  year={2022}
}
```

## Model Card Contact

sergio.torres@uni.lu