---
license: cc-by-4.0
datasets:
- oeg/software_benchmark_v2
language:
- es
metrics:
- accuracy
library_name: transformers
tags:
- software_mentions
- scibert
---

# Software Benchmark SciBERT model
This model is a fine-tuned version of the [SciBERT](https://huggingface.co/allenai/scibert_scivocab_uncased) model on a dataset built from the SoMeSci and Softcite corpora.

The objective of this model is to extract software mentions from scientific texts in the biomedical domain.

The training code can be found on [Github](https://github.com/oeg-upm/software_mentions_benchmark).

## Corpus

The corpus has been built from three sources of software mentions:
* SoMeSci [1]. We used the sentence-level corpus uploaded to [Github](https://github.com/dave-s477/SoMeSci/tree/9f17a43f342be026f97f03749457d4abb1b01dbf/PLoS_sentences).
* Softcite [2]. This project published another corpus of software mentions, also available on [Github](https://github.com/howisonlab/softcite-dataset/tree/master/data/corpus). We used the annotations from the biomedical and economics domains.
* Papers with Code. We downloaded a list of publications from the [Papers with Code](https://paperswithcode.com/) site, which collects publications and software from the machine learning domain. For this part of the corpus, we selected texts that mention the software associated with each publication.

To build the final corpus, we removed the annotations of other entity types, such as versions, URLs, and those describing the relation of the entity to the text. We only keep the label Application_Mention.

To reconcile the corpora, we mapped their label sets onto a common one. We also made some annotation decisions; for example, in the case of "Microsoft Excel", we decided to annotate only "Excel" as the software mention, not the whole phrase.
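
Token-classification models of this kind typically emit IOB tags over tokens (e.g. `B-Application_Mention`, `I-Application_Mention`, `O`). As a minimal sketch, assuming that tag set (the exact label names in this checkpoint may differ), decoding tagged tokens into mention spans could look like:

```python
def decode_mentions(tokens, tags, label="Application_Mention"):
    """Group IOB-tagged tokens into software-mention spans.

    Assumes tags like 'B-Application_Mention' / 'I-Application_Mention' / 'O';
    the concrete label names are an assumption, not taken from the model card.
    """
    mentions, current = [], []
    for token, tag in zip(tokens, tags):
        if tag == f"B-{label}":
            if current:                      # close any open mention
                mentions.append(" ".join(current))
            current = [token]                # start a new mention
        elif tag == f"I-{label}" and current:
            current.append(token)            # continue the open mention
        else:
            if current:
                mentions.append(" ".join(current))
            current = []
    if current:                              # flush a mention ending the sentence
        mentions.append(" ".join(current))
    return mentions


tokens = ["We", "analysed", "the", "data", "with", "Microsoft", "Excel", "."]
tags   = ["O", "O", "O", "O", "O", "O", "B-Application_Mention", "O"]
print(decode_mentions(tokens, tags))  # ['Excel']
```

Note how, following the annotation decision above, only "Excel" is tagged as the mention, not "Microsoft Excel".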

## Training

The corpus was split in a 70/30 proportion for training and testing.

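
The 70/30 split can be sketched as follows. This is a minimal illustration with a fixed seed, not the exact splitting code from the benchmark repository:

```python
import random

def split_corpus(sentences, train_ratio=0.7, seed=42):
    """Shuffle annotated sentences and split them into train/test partitions."""
    items = list(sentences)
    random.Random(seed).shuffle(items)   # deterministic shuffle
    cut = int(len(items) * train_ratio)  # 70% boundary
    return items[:cut], items[cut:]


corpus = [f"sentence-{i}" for i in range(100)]
train, test = split_corpus(corpus)
print(len(train), len(test))  # 70 30
```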

These are the hyperparameters used to train the model:
* evaluation_strategy = "epoch"
* save_strategy = "no"
* per_device_train_batch_size = 16
* per_device_eval_batch_size = 16
* num_train_epochs = 3
* weight_decay = 1e-5
* learning_rate = 1e-4

## Evaluation Results

The evaluation results on the test split are:

* Precision: 0.8928176795580111
* Recall: 0.8568398727465536
* F1-score: 0.8744588744588745
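
As a quick sanity check, the reported F1-score is the harmonic mean of the precision and recall above:

```python
precision = 0.8928176795580111
recall = 0.8568398727465536

# F1 = 2PR / (P + R)
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.8745
```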

This model has been compared with generative models such as Llama 2 and Hermes on the test split of the benchmark. Below we report the results for partial matches, i.e., a prediction counts as correct if it is contained within the corpus annotation.

### Llama 2 (7B)
* Precision: 0.6342857142857142
* Recall: 0.7161290322580646
* F1-score: 0.67

### Hermes (13B)

* Precision: 0.4666666666666667
* Recall: 0.509090909090909
* F1-score: 0.4869565217391304

## Acknowledgements

This work was made possible thanks to the efforts of other projects:
* Softcite
* SoMESCi
* [SCIBERT](https://huggingface.co/allenai/scibert_scivocab_uncased) 

## Authors

* Esteban González Guardia
* Daniel Garijo Verdejo

## Contributors

<kbd><img src="https://raw.githubusercontent.com/oeg-upm/TINTO/main/assets/logo-oeg.png" alt="Ontology Engineering Group" width="100"></kbd> 
<kbd><img src="https://raw.githubusercontent.com/oeg-upm/TINTO/main/assets/logo-upm.png" alt="Universidad Politécnica de Madrid" width="100"></kbd>

## References
1. Schindler, D., Bensmann, F., Dietze, S., & Krüger, F. (2021, October). SoMeSci - A 5 Star Open Data Gold Standard Knowledge Graph of Software Mentions in Scientific Articles. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management (pp. 4574-4583).
2. Du, C., Cohoon, J., Lopez, P., & Howison, J. (2021). Softcite dataset: A dataset of software mentions in biomedical and economic research publications. Journal of the Association for Information Science and Technology, 72(7), 870-884.