---
license: cc-by-nc-4.0
language:
- fr
metrics:
- accuracy
- recall
- precision
- f1
library_name: spacy
pipeline_tag: token-classification
tags:
- spacy
- token-classification
model-index:
- name: fr_spacy_custom_spancat_edda
  results:
  - task:
      name: spancat
      type: span-classification
    metrics:
    - name: Span Precision
      type: precision
      value: 0.948
    - name: Span Recall
      type: recall
      value: 0.849
    - name: Span F1 Score
      type: f_score
      value: 0.896
---
# spaCy custom spancat for Diderot & d’Alembert’s Encyclopédie entries
This model identifies and classifies spans of text in French encyclopaedic entries: named entities (Spatial, Person and Misc), nested named entities, spatial relations, geographic coordinates, entry headwords and knowledge-domain marks.
The spans detected by this model are:

- NC-Spatial: common nouns denoting spatial entities, e.g. ville, capitale
- NP-Spatial: proper nouns naming places, e.g. France, le Tarn
- ENE-Spatial: nested (extended) spatial entities, e.g. ville de France
- Relation: spatial relations, e.g. dans, sur
- Latlong: geographic coordinates, e.g. Long. 19. 49. lat. 43. 55. 44.
- NC-Person: common nouns denoting persons
- NP-Person: proper nouns naming persons
- ENE-Person: nested (extended) person entities
- NP-Misc: proper nouns naming entities that are neither spatial nor person
- ENE-Misc: nested (extended) miscellaneous entities
- Head: the entry headword, e.g. ALBI
- Domain-Mark: knowledge-domain mark, e.g. Géographie, Histoire
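The taxonomy above splits into three label families (Spatial, Person, Misc) plus standalone labels. As an illustrative sketch of post-processing, the helper below (hypothetical names, not part of the model) groups span dicts in the shape produced by this card's usage example by family:

```python
from collections import defaultdict

FAMILIES = ("Spatial", "Person", "Misc")

def family(label: str) -> str:
    """Map a span label such as 'NC-Spatial' or 'ENE-Person' to its family."""
    for fam in FAMILIES:
        if label.endswith(fam):
            return fam
    return "Other"  # Relation, Latlong, Head, Domain-Mark

def group_spans(spans):
    """Group span dicts ({'start', 'end', 'labels', 'text'}) by label family."""
    grouped = defaultdict(list)
    for s in spans:
        grouped[family(s["labels"][0])].append(s["text"])
    return dict(grouped)

spans = [
    {"start": 2, "end": 6, "labels": ["Head"], "text": "ALBI"},
    {"start": 16, "end": 21, "labels": ["NC-Spatial"], "text": "ville"},
    {"start": 25, "end": 31, "labels": ["NP-Spatial"], "text": "France"},
    {"start": 59, "end": 63, "labels": ["Relation"], "text": "dans"},
]
print(group_spans(spans))
# {'Other': ['ALBI', 'dans'], 'Spatial': ['ville', 'France']}
```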
## Model Details

### Model Description
- Developed by: Ludovic Moncla, Katherine McDonough and Denis Vigier
- Model type: spaCy Span Categorization
- spaCy version: `>=3.7.2,<3.8.0`
- Components: `tok2vec`, `spancat`
- Language(s) (NLP): French
- License: cc-by-nc-4.0
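The spaCy version pin above is enforced by pip when the wheel is installed; purely to make the constraint explicit, here is a minimal, hand-rolled sketch of the same check (not part of the model's API):

```python
def spacy_version_ok(version: str) -> bool:
    """Check an x.y.z version string against the model's pin >=3.7.2,<3.8.0."""
    major, minor, patch = (int(p) for p in version.split(".")[:3])
    return (3, 7, 2) <= (major, minor, patch) < (3, 8, 0)

print(spacy_version_ok("3.7.4"))  # True
print(spacy_version_ok("3.8.0"))  # False
```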
### Model Sources

- Repository: github.com/GEODE-project/ner-spancat-edda
## Uses
This model can be used to extract named entities, nested entities and spatial relations from French encyclopaedic text, in particular entries from Diderot & d’Alembert’s Encyclopédie. Potential uses include geoparsing and analysing the geographic content of historical French texts.
## Bias, Risks, and Limitations
This model was trained entirely on 18th-century French encyclopaedic entries and will likely not perform well on text in other languages or on modern French. In addition, roughly half of the training paragraphs belong to the Géographie knowledge domain, so the model may perform less well on entries from under-represented domains.
## How to Get Started with the Model
Use the code below to get started with the model.
```shell
pip install https://huggingface.co/GEODE/fr_spacy_custom_spancat_edda/resolve/main/fr_spacy_custom_spancat_edda-any-py3-none-any.whl
```
```python
# Using spacy.load().
import spacy
nlp = spacy.load("fr_spacy_custom_spancat_edda")

# Importing as module.
import fr_spacy_custom_spancat_edda
nlp = fr_spacy_custom_spancat_edda.load()

doc = nlp("* ALBI, (Géog.) ville de France, capitale de l'Albigeois, dans le haut Languedoc : elle est sur le Tarn. Long. 19. 49. lat. 43. 55. 44.")

spans = []
for span in doc.spans['sc']:
    print(span)
    spans.append({
        "start": span.start_char,
        "end": span.end_char,
        "labels": [span.label_],
        "text": span.text
    })

print(spans)
```
```python
# Output
[{'start': 2, 'end': 6, 'labels': ['Head'], 'text': 'ALBI'},
 {'start': 16, 'end': 21, 'labels': ['NC-Spatial'], 'text': 'ville'},
 {'start': 25, 'end': 31, 'labels': ['NP-Spatial'], 'text': 'France'},
 {'start': 33, 'end': 41, 'labels': ['NC-Spatial'], 'text': 'capitale'},
 {'start': 59, 'end': 63, 'labels': ['Relation'], 'text': 'dans'},
 {'start': 93, 'end': 96, 'labels': ['Relation'], 'text': 'sur'},
 {'start': 9, 'end': 14, 'labels': ['Domain-mark'], 'text': 'Géog.'},
 {'start': 46, 'end': 57, 'labels': ['NP-Spatial'], 'text': "l'Albigeois"},
 {'start': 97, 'end': 104, 'labels': ['NP-Spatial'], 'text': 'le Tarn'},
 {'start': 16, 'end': 31, 'labels': ['ENE-Spatial'],
  'text': 'ville de France'},
 {'start': 64, 'end': 81, 'labels': ['NP-Spatial'],
  'text': 'le haut Languedoc'},
 {'start': 33, 'end': 57, 'labels': ['ENE-Spatial'],
  'text': "capitale de l'Albigeois"},
 {'start': 33, 'end': 81, 'labels': ['ENE-Spatial'],
  'text': "capitale de l'Albigeois, dans le haut Languedoc"},
 {'start': 16, 'end': 81, 'labels': ['ENE-Spatial'],
  'text': "ville de France, capitale de l'Albigeois, dans le haut Languedoc"}]
```
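Because spancat allows overlapping spans, the output mixes atomic entities with the nested ENE spans that contain them. A small, illustrative sketch (offsets copied from the example output; the helper is not part of the model) that keeps only maximal spans, i.e. drops any span strictly contained in a longer one:

```python
def maximal_spans(spans):
    """Keep only spans that are not strictly contained in another span."""
    def contained(a, b):
        # a lies inside b and is not identical to b
        return (b["start"] <= a["start"] and a["end"] <= b["end"]
                and (a["start"], a["end"]) != (b["start"], b["end"]))
    return [a for a in spans if not any(contained(a, b) for b in spans)]

spans = [
    {"start": 16, "end": 21, "labels": ["NC-Spatial"], "text": "ville"},
    {"start": 25, "end": 31, "labels": ["NP-Spatial"], "text": "France"},
    {"start": 16, "end": 81, "labels": ["ENE-Spatial"],
     "text": "ville de France, capitale de l'Albigeois, dans le haut Languedoc"},
]
for s in maximal_spans(spans):
    print(s["labels"][0], "->", s["text"])
# ENE-Spatial -> ville de France, capitale de l'Albigeois, dans le haut Languedoc
```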
## Training Details

### Training Data
The model was trained on 2200 paragraphs randomly selected from xx of the Encyclopédie's entries. All paragraphs are in French and are distributed as follows across the Encyclopédie's knowledge domains:
| Knowledge domain | Paragraphs |
|---|---|
| Géographie | 1096 |
| Histoire | 259 |
| Droit Jurisprudence | 113 |
| Physique | 92 |
| Métiers | 92 |
| Médecine | 88 |
| Philosophie | 69 |
| Histoire naturelle | 65 |
| Belles-lettres | 65 |
| Militaire | 62 |
| Commerce | 48 |
| Beaux-arts | 44 |
| Agriculture | 36 |
| Chasse | 31 |
| Religion | 23 |
| Musique | 17 |
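The counts above can be used to quantify the domain skew noted under limitations; a quick, self-contained check (numbers copied from the table):

```python
# Paragraph counts per knowledge domain, from the training-data table.
domains = {
    "Géographie": 1096, "Histoire": 259, "Droit Jurisprudence": 113,
    "Physique": 92, "Métiers": 92, "Médecine": 88, "Philosophie": 69,
    "Histoire naturelle": 65, "Belles-lettres": 65, "Militaire": 62,
    "Commerce": 48, "Beaux-arts": 44, "Agriculture": 36, "Chasse": 31,
    "Religion": 23, "Musique": 17,
}
total = sum(domains.values())
print(total)                                    # 2200
print(round(domains["Géographie"] / total, 3))  # 0.498
```

Géographie alone accounts for about half of the training paragraphs.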
The spans/entities were labelled by the project team, with pre-labelling by early model versions used to speed up annotation. A train/validation/test split was used: the validation and test sets each contain 200 paragraphs, 100 classified as 'Géographie' and 100 from another knowledge domain. The datasets break down as follows in tokens and spans/entities:
| | Train | Validation | Test |
|---|---|---|---|
| Paragraphs | 1800 | 200 | 200 |
| Tokens | | | |
| NC-Spatial | | | |
| NP-Spatial | | | |
| ENE-Spatial | | | |
| Relation | | | |
| Latlong | | | |
| NC-Person | | | |
| NP-Person | | | |
| ENE-Person | | | |
| NP-Misc | | | |
| ENE-Misc | | | |
| Head | | | |
| Domain-Mark | | | |
### Training Procedure
For full training details and results, please see the GitHub repository: github.com/GEODE-project/ner-spancat-edda
## Acknowledgement
Data courtesy of the ARTFL Encyclopédie Project, University of Chicago.
The authors are grateful to the ASLAN project (ANR-10-LABX-0081) of the Université de Lyon for its financial support within the French program "Investments for the Future" operated by the National Research Agency (ANR).