---
license: cc-by-nc-4.0
language:
- fr
metrics:
- accuracy
- recall
- precision
- f1
library_name: spacy
pipeline_tag: token-classification
tags:
- spacy
- token-classification
model-index:
- name: fr_spacy_custom_spancat_edda
results:
- task:
name: spancat
type: span-classification
metrics:
- name: Span Precision
type: precision
value: 0.942
- name: Span Recall
type: recall
value: 0.798
- name: Span F1 Score
type: f_score
value: 0.864
widget:
- text: "* ALBI, (Géog.) ville de France, capitale de l'Albigeois, dans le haut Languedoc : elle est sur le Tarn. Long. 19. 49. lat. 43. 55. 44."
---
# spaCy Custom Spancat trained on Diderot & d’Alembert’s Encyclopédie entries
<!-- Provide a quick summary of what the model is/does. -->
This model identifies and classifies text spans in French encyclopedic entries: named entities (Spatial, Person, and MISC), nested named entities of the same categories, spatial relations, and other related information.
The spans detected by this model are listed below (an illustrative grouping of these labels into families follows the list):
- **NC-Spatial**: a common noun that identifies a spatial entity (nominal spatial entity) including natural features, e.g. `ville`, `la rivière`, `royaume`.
- **NP-Spatial**: a proper noun identifying the name of a place (spatial named entities), e.g. `France`, `Paris`, `la Chine`.
- **ENE-Spatial**: nested spatial entity, e.g. `ville de France`, `royaume de Naples`, `la mer Baltique`.
- **Relation**: spatial relation, e.g. `dans`, `sur`, `à 10 lieues de`.
- **Latlong**: geographic coordinates, e.g. `Long. 19. 49. lat. 43. 55. 44.`
- **NC-Person**: a common noun that identifies a person (nominal person entity), e.g. `roi`, `l'empereur`, `les auteurs`.
- **NP-Person**: a proper noun identifying the name of a person (person named entities), e.g. `Louis XIV`, `Pline`, `les Romains`.
- **ENE-Person**: nested person entity, e.g. `le czar Pierre`, `roi de Macédoine`.
- **NP-Misc**: a proper noun identifying entities not classified as spatial or person, e.g. `l'Eglise`, `1702`, `Pélasgique`.
- **ENE-Misc**: nested named entity not classified as spatial or person, e.g. `l'ordre de S. Jacques`, `la déclaration du 21 Mars 1671`.
- **Head**: the entry headword, e.g. `ALBI`.
- **Domain-Mark**: words indicating the knowledge domain (usually after the head and between parentheses), e.g. `Géographie`, `Geog.`, `en Anatomie`.
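For convenience, the sketch below groups these labels into broader families, e.g. for filtering the spans returned by the model. The grouping itself is illustrative and not part of the model package; label strings follow the model output shown further down (e.g. `Domain-mark`).
```python
# Illustrative grouping of the span labels documented above (not shipped with the model).
SPAN_FAMILIES = {
    "Spatial": {"NC-Spatial", "NP-Spatial", "ENE-Spatial", "Relation", "Latlong"},
    "Person": {"NC-Person", "NP-Person", "ENE-Person"},
    "Misc": {"NP-Misc", "ENE-Misc"},
    "Entry structure": {"Head", "Domain-mark"},
}

def is_spatial(label: str) -> bool:
    """Return True if a span label belongs to the Spatial family."""
    return label in SPAN_FAMILIES["Spatial"]
```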
## Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [Ludovic Moncla](https://ludovicmoncla.github.io), [Katherine McDonough](https://www.lancaster.ac.uk/dsi/about-us/members/katherine-mcdonough#projects) and [Denis Vigier](http://www.icar.cnrs.fr/membre/dvigier/) in the framework of the [GEODE](https://geode-project.github.io) project.
- **Model type:** spaCy Span Categorization
- **spaCy**: `>=3.7.2,<3.8.0`
- **Components**: `tok2vec`, `spancat`
- **Repository:** [https://github.com/GEODE-project/ner-spancat-edda](https://github.com/GEODE-project/ner-spancat-edda)
- **Language(s) (NLP):** French
- **License:** cc-by-nc-4.0
- **Dataset:** https://zenodo.org/records/10530177
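Once the package is installed (see "How to Get Started" below), the pipeline components and span labels can be inspected directly; a minimal sketch:
```python
import spacy

# Load the packaged pipeline by its registered name.
nlp = spacy.load("fr_spacy_custom_spancat_edda")

# The pipeline exposes the two components listed above.
print(nlp.pipe_names)  # expected: ['tok2vec', 'spancat']

# The span categorizer's label inventory corresponds to the span types documented above.
print(nlp.get_pipe("spancat").labels)
```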
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This model was trained entirely on French encyclopedic entries and will likely not perform well on texts in other languages or from other corpora.
## How to Get Started with the Model
Use the code below to get started with the model.
```bash
pip install https://huggingface.co/GEODE/fr_spacy_custom_spancat_edda/resolve/main/fr_spacy_custom_spancat_edda-any-py3-none-any.whl
```
```python
import spacy

# Option 1: load the installed package by name.
nlp = spacy.load("fr_spacy_custom_spancat_edda")

# Option 2: import the package as a module.
# import fr_spacy_custom_spancat_edda
# nlp = fr_spacy_custom_spancat_edda.load()

doc = nlp("* ALBI, (Géog.) ville de France, capitale de l'Albigeois, dans le haut Languedoc : elle est sur le Tarn. Long. 19. 49. lat. 43. 55. 44.")

# Collect the predicted spans stored under the "sc" key.
spans = []
for span in doc.spans["sc"]:
    spans.append({
        "start": span.start_char,
        "end": span.end_char,
        "labels": [span.label_],
        "text": span.text,
    })
print(spans)
```
Output:
```
[{'start': 2, 'end': 6, 'labels': ['Head'], 'text': 'ALBI'},
 {'start': 16, 'end': 21, 'labels': ['NC-Spatial'], 'text': 'ville'},
 {'start': 25, 'end': 31, 'labels': ['NP-Spatial'], 'text': 'France'},
 {'start': 33, 'end': 41, 'labels': ['NC-Spatial'], 'text': 'capitale'},
 {'start': 58, 'end': 62, 'labels': ['Relation'], 'text': 'dans'},
 {'start': 92, 'end': 95, 'labels': ['Relation'], 'text': 'sur'},
 {'start': 9, 'end': 14, 'labels': ['Domain-mark'], 'text': 'Géog.'},
 {'start': 45, 'end': 56, 'labels': ['NP-Spatial'], 'text': "l'Albigeois"},
 {'start': 96, 'end': 103, 'labels': ['NP-Spatial'], 'text': 'le Tarn'},
 {'start': 16, 'end': 31, 'labels': ['ENE-Spatial'], 'text': 'ville de France'},
 {'start': 63, 'end': 80, 'labels': ['NP-Spatial'], 'text': 'le haut Languedoc'},
 {'start': 33, 'end': 56, 'labels': ['ENE-Spatial'], 'text': "capitale de l'Albigeois"},
 {'start': 33, 'end': 80, 'labels': ['ENE-Spatial'], 'text': "capitale de l'Albigeois, dans le haut Languedoc"},
 {'start': 16, 'end': 80, 'labels': ['ENE-Spatial'], 'text': "ville de France, capitale de l'Albigeois, dans le haut Languedoc"}]
```
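The predicted spans in `doc.spans['sc']` can also be visualized with spaCy's displaCy span renderer; a minimal sketch, continuing from the `doc` created above (the output file name is arbitrary):
```python
from spacy import displacy

# Render the spans stored under the "sc" key as a standalone HTML page.
html = displacy.render(doc, style="span", options={"spans_key": "sc"}, page=True)

with open("albi_spans.html", "w", encoding="utf-8") as f:
    f.write(html)
```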
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The model was trained and evaluated on a set of 2,200 paragraphs randomly selected from 2,001 entries of the Encyclopédie.
All paragraphs are written in French and are distributed as follows among the Encyclopédie's knowledge domains:
| Knowledge domain | Paragraphs |
|---|:---:|
| Géographie | 1096 |
| Histoire | 259 |
| Droit Jurisprudence | 113 |
| Physique | 92 |
| Métiers | 92 |
| Médecine | 88 |
| Philosophie | 69 |
| Histoire naturelle | 65 |
| Belles-lettres | 65 |
| Militaire | 62 |
| Commerce | 48 |
| Beaux-arts | 44 |
| Agriculture | 36 |
| Chasse | 31 |
| Religion | 23 |
| Musique | 17 |
The spans/entities were labelled by the project team; pre-labelling with early versions of the model was used to speed up the annotation process.
The data were split into training, validation, and test sets.
The validation and test sets each contain 200 paragraphs: 100 classified as 'Géographie' and 100 from other knowledge domains.
The datasets have the following breakdown of tokens and spans/entities.
| | Train | Validation | Test|
|---|:---:|:---:|:---:|
|Paragraphs| 1,800 | 200 | 200|
| Tokens | 132,398 | 14,959 | 13,881 |
| NC-Spatial | 3,252 | 358 | 355 |
| NP-Spatial | 4,707 | 464 | 519 |
| ENE-Spatial | 3,043 | 326 | 334 |
| Relation | 2,093 | 219 | 226 |
| Latlong | 553 | 66 | 72 |
| NC-Person | 1,378 | 132 | 133 |
| NP-Person | 1,599 | 170 | 150 |
| ENE-Person | 492 | 49 | 57 |
| NP-Misc | 948 | 108 | 96 |
| ENE-Misc | 255 | 31 | 22 |
| Head | 1,261 | 142 | 153 |
| Domain-Mark | 1,069 | 122 | 133 |
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
For full training details and results please see the GitHub repository: [github.com/GEODE-project/ner-spancat-edda](https://github.com/GEODE-project/ner-spancat-edda)
### Evaluation
Evaluation is performed using spaCy's [`evaluate`](https://spacy.io/api/cli#evaluate) command-line interface.
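A typical invocation might look like the following (a minimal sketch; the test-corpus path is a placeholder for a `.spacy` DocBin file annotated with the span types described above):
```bash
python -m spacy evaluate fr_spacy_custom_spancat_edda ./corpus/test.spacy --output metrics.json
```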
* Overall model performances (Test set)
| | Precision | Recall | F-score |
|---|:---:|:---:|:---:|
| | 94.09 | 79.91 | 86.42 |
* Model performances by entity (Test set)
| | Precision | Recall | F-score |
|---|:---:|:---:|:---:|
| NC-Spatial | 96.50 | 93.24 | 94.84 |
| NP-Spatial | 92.74 | 95.95 | 94.32 |
| ENE-Spatial | 91.67 | 95.51 | 93.55 |
| Relation | 97.33 | 64.60 | 77.66 |
| Latlong | 0.00 | 0.00 | 0.00 |
| NC-Person | 93.07 | 70.68 | 80.34 |
| NP-Person | 92.47 | 90.00 | 91.22 |
| ENE-Person | 92.16 | 82.46 | 87.04 |
| NP-Misc | 93.24 | 71.88 | 81.18 |
| ENE-Misc | 0.00 | 0.00 | 0.00 |
| Head | 97.37 | 24.18 | 38.74 |
| Domain-mark | 99.19 | 91.73 | 95.31 |
## Cite this work
> Moncla, L., Vigier, D., & McDonough, K. (2024). GeoEDdA: A Gold Standard Dataset for Geo-semantic Annotation of Diderot & d’Alembert’s Encyclopédie. In Proceedings of the 2nd International Workshop on Geographic Information Extraction from Texts (GeoExT'24), ECIR Conference, Glasgow, UK.
## Acknowledgement
The authors are grateful to the [ASLAN project](https://aslan.universite-lyon.fr) (ANR-10-LABX-0081) of the Université de Lyon for its financial support within the French program "Investments for the Future" operated by the National Research Agency (ANR).
Data courtesy of the [ARTFL Encyclopédie Project](https://artfl-project.uchicago.edu), University of Chicago.