---
license: mit
---

# 📚 Placing the Holocaust Weasel (spaCy) Project

This is the official spaCy project for the Placing the Holocaust Project. It houses our data and our Python scripts for converting the data, serializing it, training four different spaCy models with it, and evaluating those models. It also contains all the metrics from v. 0.0.1.

For this project, we are using spaCy v. 3.7.4.
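Once a model has been trained and packaged, its predicted place spans can be inspected directly from Python. The sketch below is illustrative only: the package name is a placeholder (substitute the output of the `package` command), and it assumes the spancat component writes to spaCy's default spans key, `"sc"`.

```python
import spacy

# Load a packaged pipeline. The name below is a placeholder; substitute the
# actual package or path produced by the `package` command.
nlp = spacy.load("en_placing_the_holocaust_sm")

doc = nlp("We hid in the forest outside the village until the train left.")

# Spancat predictions live in doc.spans under the configured spans key
# (spaCy's default key for the spancat component is "sc").
for span in doc.spans.get("sc", []):
    print(span.text, span.label_)
```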

## Project Overview

Studying experiences of the Holocaust should not be limited to what happened in identifiable, or familiar, named places such as camps or ghettos, cities or villages. Many of the most important events of the Holocaust occurred in unnamed places. For most, physical and temporal disorientation were a real part of what it meant to be a victim of Nazi violence. Our approach to analyzing testimony transcripts recognizes the importance of the unnamed street corner, fence, farm, or hill in both survivor testimonies and conceptualizations of Holocaust landscapes generally.

As a part of the University of Maine's Placing the Holocaust project, we created a taxonomy of nine place categories to capture this wide array of both unnamed and named places. We were able to train a model to annotate 977 post-war testimony transcripts from the United States Holocaust Memorial Museum (USHMM). The final outcome of the project is an open-access site with both a search engine for the transcripts and a mapping tool (forthcoming summer 2024). In releasing our data, we hope that others can build on our methodology to implement their own place-based approach to analyzing their corpus, Holocaust-related or not, and develop their own methods for analyzing testimony transcripts. Please share your work with us!

## Labels

| Category | Definition | Examples |
| --- | --- | --- |
| BUILDING | Includes references to physical structures and places of labor or employment like factories. Institutions such as the "Judenrat" or "Red Cross" are also included. | school, home, house, hospital, factory, station, office, store, synagogue, barracks |
| COUNTRY, CONTINENT, OR LARGER | Mostly country names, also includes "earth," "country," and "world." Distinguished from Region and Environmental Feature based on context. | germany, poland, states, israel, united, country, america, england, france, russia |
| ENVIRONMENTAL FEATURE | Any named or unnamed environmental feature, including bodies of water and landforms. General references like "nature" and "water" are included. | woods, forest, river, mountains, ground, trees, water, tree, mountain, sea |
| IMAGINARY OR OTHER | Difficult terms that are context-dependent like "inside," "outside," or "side." Also includes unspecified locations like "community," and conceptual places like "hell" or "heaven." | place, outside, places, side, inside, hiding, hell, heaven, part, spot |
| INTERIOR SPACE | References to distinct rooms within a building, or large place features of a building like a "factory floor." | room, apartment, floor, kitchen, rooms, gas, basement, bathroom, chambers, bunker |
| LANDSCAPE FEATURE | Places not large enough to be a geographic or populated region but too large to be an Object, includes parts of buildings like "roof" or "chimney." | street, door, border, line, farm, window, streets, road, wall, field |
| OBJECTS | Objects of conveyance and movable objects like furniture. In specific contexts, refers to transportation vehicles or items like "ovens," where the common use case of the term prevails. | train, car, ship, boat, bed, truck, trains, cars, trucks |
| POPULATED PLACE | Includes cities, towns, villages, and hamlets or crossroads settlements. Names of places can be the same as a ghetto, camp, city, or district. | camp, ghetto, town, city, auschwitz, camps, new, york, concentration, village |
| REGION | Sub-national regions, states, provinces, or islands. Includes references to sides of a geopolitical border or military zone. | area, side, land, siberia, new, zone, jersey, california, russian, eastern |
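To see how these categories appear in the annotation data, the short sketch below tallies span labels in `assets/annotated_data_spans.jsonl`. It assumes Prodigy-style JSONL records in which each line carries a `spans` list whose entries have a `label` key; adjust the keys if the export structure differs.

```python
import json
from collections import Counter

# Count how often each place category is annotated in the spans data.
label_counts = Counter()
with open("assets/annotated_data_spans.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        for span in record.get("spans", []):
            label_counts[span["label"]] += 1

for label, count in label_counts.most_common():
    print(f"{label}\t{count}")
```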

## 📋 project.yml

The `project.yml` defines the data assets required by the project, as well as the available commands and workflows. For details, see the Weasel documentation.
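As a quick orientation, the snippet below reads `project.yml` and lists what it defines. It is a minimal sketch that assumes PyYAML is installed and that the file follows the standard Weasel schema (`commands`, `workflows`, and `assets` keys).

```python
import yaml  # PyYAML, assumed to be available in the environment

# Summarize the project definition: commands, workflows, and assets.
with open("project.yml", encoding="utf-8") as f:
    project = yaml.safe_load(f)

print("Commands: ", [cmd["name"] for cmd in project.get("commands", [])])
print("Workflows:", list(project.get("workflows", {})))
print("Assets:   ", [asset["dest"] for asset in project.get("assets", [])])
```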

⏯ Commands

The following commands are defined by the project. They can be executed using `weasel run [name]`. Commands are only re-run if their inputs have changed.

| Command | Description |
| --- | --- |
| `download-lg` | Download a large spaCy model with pretrained vectors |
| `download-md` | Download a medium spaCy model with pretrained vectors |
| `convert` | Convert the data to spaCy's binary format |
| `convert-sents` | Convert the data to sentences before converting to spaCy's binary format |
| `split` | Split data into train/dev/test sets |
| `create-config-sm` | Create a new config with a spancat pipeline component for small models |
| `train-sm` | Train the spancat model with a small configuration |
| `train-md` | Train the spancat model with a medium configuration |
| `train-lg` | Train the spancat model with a large configuration |
| `train-trf` | Train the spancat model with a transformer configuration |
| `evaluate-sm` | Evaluate the small model and export metrics |
| `evaluate-md` | Evaluate the medium model and export metrics |
| `evaluate-lg` | Evaluate the large model and export metrics |
| `evaluate-trf` | Evaluate the transformer model and export metrics |
| `build-table` | Build a table from the metrics for README.md |
| `readme` | Build a table from the metrics for README.md |
| `package` | Package the trained model as a pip package |

⏭ Workflows

The following workflows are defined by the project. They can be executed using `weasel run [name]` and will run the specified commands in order. Commands are only re-run if their inputs have changed.

| Workflow | Steps |
| --- | --- |
| `all-sm-sents` | `convert-sents` → `split` → `create-config-sm` → `train-sm` → `evaluate-sm` |

## 🗂 Assets

The following assets are defined by the project. They can be fetched by running `weasel assets` in the project directory.

| File | Source | Description |
| --- | --- | --- |
| `assets/train.jsonl` | Local | Training data. Chunked into sentences. |
| `assets/dev.jsonl` | Local | Validation data. Chunked into sentences. |
| `assets/test.jsonl` | Local | Testing data. Chunked into sentences. |
| `assets/annotated_data.json/` | Local | All data, including negative examples. |
| `assets/annotated_data_spans.jsonl` | Local | Data with examples of span annotations. |
| `corpus/train.spacy` | Local | Training data in serialized format. |
| `corpus/dev.spacy` | Local | Validation data in serialized format. |
| `corpus/test.spacy` | Local | Testing data in serialized format. |
| `gold-training-data/*` | Local | Original outputs from Prodigy. |
| `notebooks/*` | Local | Notebooks for testing project features. |
| `configs/*` | Local | Config files for training spaCy models. |
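The `corpus/*.spacy` files are in spaCy's binary format, which the convert steps produce from the JSONL assets. The sketch below shows one way to peek inside them; it assumes the files were serialized with spaCy's standard `DocBin` and that a blank English pipeline is enough to rehydrate the Docs for inspection.

```python
import spacy
from spacy.tokens import DocBin

# Load the serialized training corpus and report its size.
nlp = spacy.blank("en")
doc_bin = DocBin().from_disk("corpus/train.spacy")
docs = list(doc_bin.get_docs(nlp.vocab))

print(f"{len(docs)} documents in corpus/train.spacy")
```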

## Model Metrics

### Overall Model Performance

| Model | Precision | Recall | F-Score |
| --- | --- | --- | --- |
| Small | 94.1 | 89.2 | 91.6 |
| Medium | 94.0 | 90.5 | 92.2 |
| Large | 94.1 | 91.7 | 92.9 |
| Transformer | 93.6 | 91.6 | 92.6 |
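F-Score here is the harmonic mean of precision and recall, so each row can be reproduced from its first two columns. For example, the Large row:

```python
# F-score = harmonic mean of precision and recall (values from the Large row).
precision, recall = 94.1, 91.7
f_score = 2 * precision * recall / (precision + recall)
print(round(f_score, 1))  # 92.9
```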

### Performance per Label

| Model | Label | Precision | Recall | F-Score |
| --- | --- | --- | --- | --- |
| Small | BUILDING | 94.7 | 90.2 | 92.4 |
| Medium | BUILDING | 95.2 | 92.8 | 94.0 |
| Large | BUILDING | 94.8 | 93.2 | 94.0 |
| Transformer | BUILDING | 94.3 | 94.2 | 94.3 |
| Small | COUNTRY | 97.6 | 94.6 | 96.1 |
| Medium | COUNTRY | 96.5 | 96.3 | 96.4 |
| Large | COUNTRY | 97.7 | 96.8 | 97.2 |
| Transformer | COUNTRY | 96.6 | 96.8 | 96.7 |
| Small | DLF | 92.4 | 86.4 | 89.3 |
| Medium | DLF | 95.0 | 84.1 | 89.2 |
| Large | DLF | 93.5 | 88.4 | 90.9 |
| Transformer | DLF | 94.1 | 90.4 | 92.2 |
| Small | ENV_FEATURES | 86.6 | 81.2 | 83.8 |
| Medium | ENV_FEATURES | 86.3 | 79.1 | 82.5 |
| Large | ENV_FEATURES | 77.5 | 90.1 | 83.3 |
| Transformer | ENV_FEATURES | 85.1 | 86.9 | 86.0 |
| Small | INT_SPACE | 93.8 | 85.9 | 89.6 |
| Medium | INT_SPACE | 93.9 | 91.3 | 92.6 |
| Large | INT_SPACE | 92.4 | 93.8 | 93.1 |
| Transformer | INT_SPACE | 94.6 | 91.8 | 93.2 |
| Small | NPIP | 92.7 | 86.4 | 89.4 |
| Medium | NPIP | 94.5 | 82.4 | 88.0 |
| Large | NPIP | 92.7 | 86.6 | 89.6 |
| Transformer | NPIP | 94.8 | 83.0 | 88.5 |
| Small | POPULATED_PLACE | 94.0 | 90.6 | 92.3 |
| Medium | POPULATED_PLACE | 93.0 | 91.2 | 92.1 |
| Large | POPULATED_PLACE | 95.2 | 90.4 | 92.7 |
| Transformer | POPULATED_PLACE | 92.1 | 91.3 | 91.7 |
| Small | REGION | 84.4 | 68.4 | 75.6 |
| Medium | REGION | 81.4 | 75.8 | 78.5 |
| Large | REGION | 83.0 | 76.8 | 79.8 |
| Transformer | REGION | 81.2 | 68.4 | 74.3 |
| Small | SPATIAL_OBJ | 96.0 | 90.0 | 92.9 |
| Medium | SPATIAL_OBJ | 95.2 | 93.8 | 94.5 |
| Large | SPATIAL_OBJ | 95.3 | 95.5 | 95.4 |
| Transformer | SPATIAL_OBJ | 96.3 | 92.8 | 94.5 |

## Acknowledgements

This project, based in the University of Maine's History Department (Anne Kelly Knowles, PI), has been funded by a National Endowment for the Humanities Digital Humanities Advancement grant (no. HAA-287827-22); a Collaborative Research Seed Grant, Center for the Humanities, Washington University in St. Louis; the Clement and Linda McGillicuddy Humanities Center, University of Maine; and the Dale Benson Gift Fund, University of Maine.

## Dataset Team

Christine Liu, William Mattingly, Gregory Gaines