# Model Card for mr_resnest101e_timm_pretrain_railspace_and_building
A ResNeSt (ResNet-based architecture with Split-Attention) image classification model. Trained on ImageNet-1k and fine-tuned on gold-standard annotations and on outputs from early experiments using MapReader (found here).
## Model Details

### Model Description

- Model type: Image classification / feature backbone
- Finetuned from model: https://huggingface.co/timm/resnest101e.in1k
### Classes and labels
- 0: no
- 1: railspace
- 2: building
- 3: railspace & building
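The index-to-label mapping above can be expressed in code. A minimal sketch (the function and variable names are illustrative, not part of MapReader's API):

```python
# Index-to-label mapping for this model's four output classes
# (class order follows the list above).
LABELS = {
    0: "no",
    1: "railspace",
    2: "building",
    3: "railspace & building",
}

def predict_label(logits):
    """Return the label whose logit is highest for one patch."""
    idx = max(range(len(logits)), key=lambda i: logits[i])
    return LABELS[idx]

# Example: a patch whose strongest logit is at index 2 -> "building"
print(predict_label([0.1, 0.3, 2.4, 0.2]))
```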
## Uses
This fine-tuned version of the model is an output of the MapReader pipeline. It was used to classify 'patch' images (cells/regions) of scanned nineteenth-century series maps of Britain provided by the National Library of Scotland (learn more here). We classified patches to indicate the presence of buildings and railway infrastructure. See our ACM SIGSPATIAL Geohumanities Workshop 2022 paper for more details about labels.
Of all the models fine-tuned for the experiments reported in our ACM SIGSPATIAL 2022 paper, we used this resnest101e_timm_pretrain model to infer these railway infrastructure and building labels across 30 million patches on nearly 16k scanned map sheets. You can visualize the results of this work here.
## How to Get Started with the Model in MapReader
Please go to the MapReader documentation for instructions on how to use this model in MapReader.
## Training, Evaluation and Testing Details

### Training, Evaluation and Testing Data
This model was fine-tuned on manually-annotated data.
### Training, Evaluation and Testing Procedure
Details can be found here.
Open access version of the article available here.
### Results
Data outputs can be found here.
Further details can be found here.
## More Information
This model was fine-tuned using MapReader.
The code for MapReader can be found here and the documentation can be found here.
## Model Card Authors
- Katie McDonough (k.mcdonough@lancaster.ac.uk)
- Rosie Wood (rwood@turing.ac.uk)
## Model Card Contact
Katie McDonough (k.mcdonough@lancaster.ac.uk)
## Funding Statement
This work was supported by Living with Machines (AHRC grant AH/S01179X/1) and The Alan Turing Institute (EPSRC grant EP/N510129/1). Living with Machines, funded by the UK Research and Innovation (UKRI) Strategic Priority Fund, is a multidisciplinary collaboration delivered by the Arts and Humanities Research Council (AHRC), with The Alan Turing Institute, the British Library, and the Universities of Cambridge, East Anglia, Exeter, King's College London, and Queen Mary University of London.