---
language: 
- english
thumbnail: 
tags:
- token classification
- 
license: 
datasets:
- EMBO/sd-nlp `PANELIZATION`
metrics:
-
---

# sd-ner

## Model description

This model is a [RoBERTa base model](https://huggingface.co/roberta-base) that was further trained with a masked language modeling task on a compendium of English scientific text from the life sciences, the [BioLang dataset](https://huggingface.co/datasets/EMBO/biolang). It was then fine-tuned for token classification on the SourceData [sd-nlp](https://huggingface.co/datasets/EMBO/sd-nlp) dataset with the `PANELIZATION` task to 'parse' or 'segment' figure legends into fragments corresponding to sub-panels.

Figures are usually composite representations of results obtained with heterogeneous experimental approaches and systems. Breaking figures into panels makes it possible to identify more coherent descriptions of individual scientific experiments.

## Intended uses & limitations

#### How to use

The intended use of this model is for 'parsing' figure legends into sub-fragments corresponding to individual panels as used in SourceData annotations (https://sourcedata.embo.org). 

For a quick check of the model:

```python
from transformers import pipeline, RobertaTokenizerFast, RobertaForTokenClassification

example = """Fig 4. a, Volume density of early (Avi) and late (Avd) autophagic vacuoles.a, Volume density of early (Avi) and late (Avd) autophagic vacuoles from four independent cultures. Examples of Avi and Avd are shown in b and c, respectively. Bars represent 0.4 µm. d, Labelling density of cathepsin-D as estimated in two independent experiments. e, Labelling density of LAMP-1."""

# The model must be paired with the roberta-base tokenizer (see Limitations below).
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_len=512)
model = RobertaForTokenClassification.from_pretrained('EMBO/sd-panels')

ner = pipeline('ner', model=model, tokenizer=tokenizer)
res = ner(example)
for r in res:
    print(r['word'], r['entity'])
```
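
The pipeline returns one prediction per token. As a rough illustration of the intended 'panelization', the `B-PANEL_START` predictions can be used to split the legend into panel-level fragments. The grouping below is a minimal sketch, not part of the SourceData tooling, and it assumes a `transformers` version in which the pipeline output includes character offsets (`start`/`end`):

```python
# Minimal sketch (assumption: each pipeline entry carries a 'start' character offset).
starts = [r['start'] for r in res if r['entity'] == 'B-PANEL_START']
boundaries = [0] + starts + [len(example)]

# Cut the legend at each predicted panel start and drop empty fragments.
panels = [example[b:e].strip() for b, e in zip(boundaries, boundaries[1:])]
for i, panel in enumerate((p for p in panels if p), start=1):
    print(f"Panel {i}: {panel}")
```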

#### Limitations and bias

The model must be used with the `roberta-base` tokenizer.

## Training data

The model was trained for token classification using the [EMBO/sd-nlp `PANELIZATION`](https://huggingface.co/datasets/EMBO/sd-nlp) dataset, which includes manually annotated examples.
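
As a minimal sketch, the corresponding dataset configuration could be loaded with the `datasets` library as shown below; the exact column names and loading requirements are assumptions and should be checked against the dataset card:

```python
from datasets import load_dataset

# Load the PANELIZATION configuration of the SourceData sd-nlp dataset.
ds = load_dataset("EMBO/sd-nlp", "PANELIZATION")

print(ds)                       # available splits and their sizes
print(ds["train"][0].keys())    # inspect the token / label columns
```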

## Training procedure

The training was run on an NVIDIA DGX Station with 4× Tesla V100 GPUs.

Training code is available at https://github.com/source-data/soda-roberta

- Command: `python -m tokcl.train PANELIZATION --num_train_epochs=10` (a hedged `TrainingArguments` sketch using the settings below follows this list)
- Tokenizer vocab size: 50265
- Training data: EMBO/sd-nlp `PANELIZATION`
- Training with 2175 examples.
- Evaluating on 622 examples.
- Training on 2 features: `O`, `B-PANEL_START`
- Epochs: 10.0
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 32
- `learning_rate`: 0.0001
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
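
For readers who prefer the Hugging Face `Trainer` API, the settings above roughly correspond to the following `TrainingArguments`. This is only a sketch; the actual run uses the `tokcl.train` command, and `output_dir` is a placeholder:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="sd-panelization",      # placeholder, not the actual output path
    num_train_epochs=10.0,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=1e-4,
    weight_decay=0.0,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    max_grad_norm=1.0,
)
```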

## Eval results

Testing on 337 examples from the test set with `sklearn.metrics`:

```
              precision    recall  f1-score   support

 PANEL_START       0.88      0.97      0.92       785

   micro avg       0.88      0.97      0.92       785
   macro avg       0.88      0.97      0.92       785
weighted avg       0.88      0.97      0.92       785
```
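
Such a report can be produced with `sklearn.metrics.classification_report` once token-level predictions are aligned with the reference labels. The snippet below is only a sketch with made-up toy labels to show the call, not the actual evaluation script:

```python
from sklearn.metrics import classification_report

# Toy, made-up token-level labels purely to illustrate the call; the real
# evaluation uses the 337 test-set examples mentioned above.
y_true = ["O", "B-PANEL_START", "O", "O", "B-PANEL_START", "O"]
y_pred = ["O", "B-PANEL_START", "O", "B-PANEL_START", "B-PANEL_START", "O"]

print(classification_report(y_true, y_pred, labels=["B-PANEL_START"]))
```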