---
license: mit
---

# 📚 Placing the Holocaust Weasel (spaCy) Project

This is the official spaCy project for the Placing the Holocaust Project. It houses our data and the Python scripts for converting that data, serializing it, training four different spaCy models on it, and evaluating those models. It also contains all metrics from v0.0.1. The project uses spaCy v3.7.4.

## 📋 project.yml

The [`project.yml`](project.yml) defines the data assets required by the project, as well as the available commands and workflows. For details, see the [Weasel documentation](https://github.com/explosion/weasel).

### ⏯ Commands

The following commands are defined by the project. They can be executed using [`weasel run [name]`](https://github.com/explosion/weasel/tree/main/docs/cli.md#rocket-run). Commands are only re-run if their inputs have changed.

| Command | Description |
| --- | --- |
| `download-lg` | Download a large spaCy model with pretrained vectors |
| `download-md` | Download a medium spaCy model with pretrained vectors |
| `convert` | Convert the data to spaCy's binary format |
| `convert-sents` | Convert the data to sentences before converting to spaCy's binary format |
| `split` | Split data into train/dev/test sets |
| `create-config-sm` | Create a new config with a spancat pipeline component for small models |
| `train-sm` | Train the spancat model with a small configuration |
| `train-md` | Train the spancat model with a medium configuration |
| `train-lg` | Train the spancat model with a large configuration |
| `train-trf` | Train the spancat model with a transformer configuration |
| `evaluate-sm` | Evaluate the small model and export metrics |
| `evaluate-md` | Evaluate the medium model and export metrics |
| `evaluate-lg` | Evaluate the large model and export metrics |
| `evaluate-trf` | Evaluate the transformer model and export metrics |
| `build-table` | Build a table from the metrics for README.md |
| `readme` | Build a table from the metrics for README.md |
| `package` | Package the trained model as a pip package |

### ⏭ Workflows

The following workflows are defined by the project. They can be executed using [`weasel run [name]`](https://github.com/explosion/weasel/tree/main/docs/cli.md#rocket-run) and will run the specified commands in order. Commands are only re-run if their inputs have changed.

| Workflow | Steps |
| --- | --- |
| `all-sm-sents` | `convert-sents` → `split` → `create-config-sm` → `train-sm` → `evaluate-sm` |
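The same commands and workflows can also be scripted rather than typed into a shell. The sketch below is a minimal example, not part of the project itself: it assumes `weasel` is installed (e.g. `pip install weasel`) and that the script runs from the project root, where `project.yml` lives.

```python
# Minimal sketch: fetch the assets and run the `all-sm-sents` workflow
# programmatically. Assumes the `weasel` CLI is installed and this script
# is executed from the project directory containing project.yml.
import subprocess

for args in (
    ["weasel", "assets"],               # fetch the assets listed below
    ["weasel", "run", "all-sm-sents"],  # convert-sents → split → create-config-sm → train-sm → evaluate-sm
):
    subprocess.run(args, check=True)    # raise if any step fails
```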
### 🗂 Assets

The following assets are defined by the project. They can be fetched by running [`weasel assets`](https://github.com/explosion/weasel/tree/main/docs/cli.md#open_file_folder-assets) in the project directory.

| File | Source | Description |
| --- | --- | --- |
| [`assets/train.jsonl`](assets/train.jsonl) | Local | Training data. Chunked into sentences. |
| [`assets/dev.jsonl`](assets/dev.jsonl) | Local | Validation data. Chunked into sentences. |
| [`assets/test.jsonl`](assets/test.jsonl) | Local | Testing data. Chunked into sentences. |
| [`assets/annotated_data.json/`](assets/annotated_data.json/) | Local | All data, including negative examples. |
| [`assets/annotated_data_spans.jsonl`](assets/annotated_data_spans.jsonl) | Local | Data with examples of span annotations. |
| [`corpus/train.spacy`](corpus/train.spacy) | Local | Training data in serialized format. |
| [`corpus/dev.spacy`](corpus/dev.spacy) | Local | Validation data in serialized format. |
| [`corpus/test.spacy`](corpus/test.spacy) | Local | Testing data in serialized format. |
| [`gold-training-data/*`](gold-training-data/*) | Local | Original outputs from Prodigy. |
| [`notebooks/*`](notebooks/*) | Local | Notebooks for testing project features. |
| [`configs/*`](configs/*) | Local | Config files for training spaCy models. |

# Overall Model Performance

| Model | Precision | Recall | F-Score |
|:------------|------------:|---------:|----------:|
| Small | 94.1 | 89.2 | 91.6 |
| Medium | 94 | 90.5 | 92.2 |
| Large | 94.1 | 91.7 | 92.9 |
| Transformer | 93.6 | 91.6 | 92.6 |

# Performance per Label

| Model | Label | Precision | Recall | F-Score |
|:------------|:----------------|------------:|---------:|----------:|
| Small | BUILDING | 94.7 | 90.2 | 92.4 |
| Medium | BUILDING | 95.2 | 92.8 | 94 |
| Large | BUILDING | 94.8 | 93.2 | 94 |
| Transformer | BUILDING | 94.3 | 94.2 | 94.3 |
| Small | COUNTRY | 97.6 | 94.6 | 96.1 |
| Medium | COUNTRY | 96.5 | 96.3 | 96.4 |
| Large | COUNTRY | 97.7 | 96.8 | 97.2 |
| Transformer | COUNTRY | 96.6 | 96.8 | 96.7 |
| Small | DLF | 92.4 | 86.4 | 89.3 |
| Medium | DLF | 95 | 84.1 | 89.2 |
| Large | DLF | 93.5 | 88.4 | 90.9 |
| Transformer | DLF | 94.1 | 90.4 | 92.2 |
| Small | ENV_FEATURES | 86.6 | 81.2 | 83.8 |
| Medium | ENV_FEATURES | 86.3 | 79.1 | 82.5 |
| Large | ENV_FEATURES | 77.5 | 90.1 | 83.3 |
| Transformer | ENV_FEATURES | 85.1 | 86.9 | 86 |
| Small | INT_SPACE | 93.8 | 85.9 | 89.6 |
| Medium | INT_SPACE | 93.9 | 91.3 | 92.6 |
| Large | INT_SPACE | 92.4 | 93.8 | 93.1 |
| Transformer | INT_SPACE | 94.6 | 91.8 | 93.2 |
| Small | NPIP | 92.7 | 86.4 | 89.4 |
| Medium | NPIP | 94.5 | 82.4 | 88 |
| Large | NPIP | 92.7 | 86.6 | 89.6 |
| Transformer | NPIP | 94.8 | 83 | 88.5 |
| Small | POPULATED_PLACE | 94 | 90.6 | 92.3 |
| Medium | POPULATED_PLACE | 93 | 91.2 | 92.1 |
| Large | POPULATED_PLACE | 95.2 | 90.4 | 92.7 |
| Transformer | POPULATED_PLACE | 92.1 | 91.3 | 91.7 |
| Small | REGION | 84.4 | 68.4 | 75.6 |
| Medium | REGION | 81.4 | 75.8 | 78.5 |
| Large | REGION | 83 | 76.8 | 79.8 |
| Transformer | REGION | 81.2 | 68.4 | 74.3 |
| Small | SPATIAL_OBJ | 96 | 90 | 92.9 |
| Medium | SPATIAL_OBJ | 95.2 | 93.8 | 94.5 |
| Large | SPATIAL_OBJ | 95.3 | 95.5 | 95.4 |
| Transformer | SPATIAL_OBJ | 96.3 | 92.8 | 94.5 |
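Once one of the pipelines has been trained, its span predictions can be inspected directly. The sketch below is illustrative only: the pipeline path (`training/model-best`) and the span group key (`sc`, spancat's default) are assumptions and may differ from the configs in this project.

```python
# Minimal sketch: load a trained spancat pipeline and print its spans.
# Assumptions: the pipeline was saved to training/model-best and the spancat
# component writes to the default span group key "sc"; adjust both as needed.
import spacy

nlp = spacy.load("training/model-best")  # hypothetical path to a trained pipeline
doc = nlp("We were taken by train to a camp outside the city.")

for span in doc.spans.get("sc", []):
    # Each span carries one of the labels reported above,
    # e.g. BUILDING, COUNTRY, POPULATED_PLACE, REGION.
    print(span.text, span.label_)
```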