```python
model = AutoAdapterModel.from_pretrained("allenai/specter_plus_plus")
adapter_name = model.load_adapter("allenai/spp_adhoc_query", source="hf", set_active=True)
```

## SPECTER 2.0

<!-- Provide a quick summary of what the model is/does. -->

SPECTER 2.0 is the successor to [SPECTER](https://huggingface.co/allenai/specter) and is capable of generating task-specific embeddings for scientific tasks when paired with [adapters](https://huggingface.co/models?search=allenai/spp).
Given the combination of the title and abstract of a scientific paper, or a short textual query, the model can be used to generate effective embeddings for downstream applications.

# Model Details

## Model Description

SPECTER 2.0 has been trained on over 6M triplets of scientific paper citations, which are available [here](https://huggingface.co/datasets/allenai/scirepeval/viewer/cite_prediction_new/evaluation).
It is then trained on all the [SciRepEval](https://huggingface.co/datasets/allenai/scirepeval) training tasks, with task-format-specific adapters.

Task formats trained on:
- Classification
- Regression
- Proximity
- Adhoc Search

This is the adhoc search query-specific adapter. For tasks where papers have to be retrieved for a short textual query, use this adapter to encode the query and [allenai/spp_proximity](https://huggingface.co/allenai/spp_proximity) to encode the candidate papers, as sketched below.

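As an illustration of this query/paper pairing, here is a minimal retrieval sketch. It is ours, not from the original card: it assumes the adapter-transformers library (whose `load_adapter`/`set_active_adapters` calls appear below), and the query and paper texts are made up; cosine similarity is used purely for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoAdapterModel

tokenizer = AutoTokenizer.from_pretrained("allenai/specter_plus_plus")
model = AutoAdapterModel.from_pretrained("allenai/specter_plus_plus")

# load both adapters once, then switch between them per input type
model.load_adapter("allenai/spp_adhoc_query", source="hf", load_as="adhoc_query")
model.load_adapter("allenai/spp_proximity", source="hf", load_as="proximity")

def encode(texts, adapter_name):
    """Embed texts with the given adapter active; returns the [CLS] embeddings."""
    model.set_active_adapters(adapter_name)
    inputs = tokenizer(texts, padding=True, truncation=True, max_length=512,
                       return_tensors="pt", return_token_type_ids=False)
    with torch.no_grad():
        out = model(**inputs)
    return out.last_hidden_state[:, 0, :]

# the short textual query goes through the adhoc query adapter ...
query_emb = encode(["document-level representation learning"], "adhoc_query")
# ... while candidate papers (title [SEP] abstract) go through the proximity adapter
paper_embs = encode(["BERT" + tokenizer.sep_token + "We introduce a new language representation model",
                     "ResNet" + tokenizer.sep_token + "Deeper neural networks are more difficult to train"],
                    "proximity")

# rank candidate papers by their similarity to the query embedding
scores = torch.nn.functional.cosine_similarity(query_emb, paper_embs)
ranking = scores.argsort(descending=True)
```
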
It builds on the work done in [SciRepEval: A Multi-Format Benchmark for Scientific Document Representations](https://api.semanticscholar.org/CorpusID:254018137), and we evaluate the trained model on this benchmark as well.

- **Developed by:** Amanpreet Singh, Mike D'Arcy, Arman Cohan, Doug Downey, Sergey Feldman
- **Shared by:** Allen AI
- **Model type:** bert-base-uncased + adapters
- **License:** Apache 2.0
- **Finetuned from model:** [allenai/scibert](https://huggingface.co/allenai/scibert_scivocab_uncased)

## Model Sources

<!-- Provide the basic links for the model. -->

- **Repository:** [https://github.com/allenai/SPECTER2_0](https://github.com/allenai/SPECTER2_0)
- **Paper:** [https://api.semanticscholar.org/CorpusID:254018137](https://api.semanticscholar.org/CorpusID:254018137)
- **Demo:** [Usage](https://github.com/allenai/SPECTER2_0/blob/main/README.md)

# Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

## Direct Use

|Model|Type|Name and HF link|
|--|--|--|
|Base|Transformer|[allenai/specter_plus_plus](https://huggingface.co/allenai/specter_plus_plus)|
|Classification|Adapter|[allenai/spp_classification](https://huggingface.co/allenai/spp_classification)|
|Regression|Adapter|[allenai/spp_regression](https://huggingface.co/allenai/spp_regression)|
|Retrieval|Adapter|[allenai/spp_proximity](https://huggingface.co/allenai/spp_proximity)|
|Adhoc Query|Adapter|[allenai/spp_adhoc_query](https://huggingface.co/allenai/spp_adhoc_query)|

```python
from transformers import AutoTokenizer, AutoAdapterModel

# load tokenizer and base model
tokenizer = AutoTokenizer.from_pretrained('allenai/specter_plus_plus')
model = AutoAdapterModel.from_pretrained('allenai/specter_plus_plus')

# load the adapter(s) as per the required task, provide an identifier for the adapter in the load_as argument and activate it
model.load_adapter("allenai/spp_adhoc_query", source="hf", load_as="spp_adhoc_query", set_active=True)

papers = [{'title': 'BERT', 'abstract': 'We introduce a new language representation model called BERT'},
          {'title': 'Attention is all you need', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks'}]

# concatenate title and abstract, separated by the tokenizer's [SEP] token
text_batch = [d['title'] + tokenizer.sep_token + (d.get('abstract') or '') for d in papers]
# preprocess the input
inputs = tokenizer(text_batch, padding=True, truncation=True,
                   return_tensors="pt", return_token_type_ids=False, max_length=512)
output = model(**inputs)
# take the first token ([CLS]) of each sequence as its embedding
embeddings = output.last_hidden_state[:, 0, :]
```

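For corpora larger than a handful of papers, the same pattern can be wrapped in a simple batching loop. The sketch below is ours, not from the original card (the batch size and the `torch.no_grad()` guard are our additions), and it reuses `tokenizer`, `model`, and `text_batch` from the block above:

```python
import torch

def embed_batched(texts, batch_size=16):
    """Embed a list of 'title [SEP] abstract' strings in fixed-size batches."""
    chunks = []
    model.eval()
    with torch.no_grad():  # inference only, no gradients needed
        for i in range(0, len(texts), batch_size):
            batch = texts[i:i + batch_size]
            inputs = tokenizer(batch, padding=True, truncation=True,
                               return_tensors="pt", return_token_type_ids=False,
                               max_length=512)
            out = model(**inputs)
            chunks.append(out.last_hidden_state[:, 0, :])
    return torch.cat(chunks, dim=0)

corpus_embeddings = embed_batched(text_batch)
```
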
## Downstream Use

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

For evaluation and downstream usage, please refer to [https://github.com/allenai/scirepeval/blob/main/evaluation/INFERENCE.md](https://github.com/allenai/scirepeval/blob/main/evaluation/INFERENCE.md).

# Training Details

## Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The base model is trained on citation links between papers, and the adapters are trained on eight large-scale tasks across the four formats.
All the data is part of the SciRepEval benchmark and is available [here](https://huggingface.co/datasets/allenai/scirepeval).

The citation links are triplets of the form

```json
{"query": {"title": ..., "abstract": ...}, "pos": {"title": ..., "abstract": ...}, "neg": {"title": ..., "abstract": ...}}
```

consisting of a query paper, a positive citation, and a negative, which can be from the same or a different field of study as the query, or a citation of a citation.

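To make the objective concrete, here is a sketch of ours (not from the original card) of how such a triplet is consumed: the three papers are embedded, and the embeddings are pushed through a triplet margin loss, which in the original SPECTER setup uses Euclidean distance with a margin of 1. The dataset config and split names are taken from the viewer link above; the stand-in embeddings are random tensors.

```python
import torch
from datasets import load_dataset

# the citation triplets are part of the SciRepEval dataset (config/split from the link above)
triplets = load_dataset("allenai/scirepeval", "cite_prediction_new", split="evaluation")

# stand-in [CLS] embeddings for a batch of query/positive/negative papers
query_emb, pos_emb, neg_emb = (torch.randn(8, 768) for _ in range(3))

# triplet margin loss: max(d(query, pos) - d(query, neg) + margin, 0),
# pulling cited (positive) papers closer to the query than non-cited (negative) ones
loss_fn = torch.nn.TripletMarginLoss(margin=1.0, p=2)
loss = loss_fn(query_emb, pos_emb, neg_emb)
```
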
## Training Procedure

Please refer to the [SPECTER paper](https://api.semanticscholar.org/CorpusID:215768677).

### Training Hyperparameters

The model is trained in two stages using [SciRepEval](https://github.com/allenai/scirepeval/blob/main/training/TRAINING.md):
- Base Model: First, a base model is trained on the above citation triplets.
```batch size = 1024, max input length = 512, learning rate = 2e-5, epochs = 2, warmup steps = 10%, fp16```
- Adapters: Thereafter, task-format-specific adapters are trained on the SciRepEval training tasks, where 600K triplets are sampled from above and added to the training data as well.
```batch size = 256, max input length = 512, learning rate = 1e-4, epochs = 6, warmup = 1000 steps, fp16```

# Evaluation

We evaluate the model on [SciRepEval](https://github.com/allenai/scirepeval), a large-scale evaluation benchmark for scientific embedding tasks, which has SciDocs as a subset.
We also evaluate and establish a new SoTA on [MDCR](https://github.com/zoranmedic/mdcr), a large-scale citation recommendation benchmark.

|Model|SciRepEval In-Train|SciRepEval Out-of-Train|SciRepEval Avg|MDCR (MAP, Recall@5)|
|--|--|--|--|--|
|[BM-25](https://api.semanticscholar.org/CorpusID:252199740)|n/a|n/a|n/a|(33.7, 28.5)|
|[SPECTER](https://huggingface.co/allenai/specter)|54.7|57.4|68.0|(30.6, 25.5)|
|[SciNCL](https://huggingface.co/malteos/scincl)|55.6|57.8|69.0|(32.6, 27.3)|
|[SciRepEval-Adapters](https://huggingface.co/models?search=scirepeval)|61.9|59.0|70.9|(35.3, 29.6)|
|[SPECTER 2.0-base](https://huggingface.co/allenai/specter_plus_plus)|56.3|58.0|69.2|(38.0, 32.4)|
|[SPECTER 2.0-Adapters](https://huggingface.co/models?search=allenai/spp)|**62.3**|**59.2**|**71.2**|**(38.4, 33.0)**|

# Citation

Please cite the following works if you end up using SPECTER 2.0:

[SPECTER paper](https://api.semanticscholar.org/CorpusID:215768677):

```bibtex
@inproceedings{specter2020cohan,
  title={{SPECTER: Document-level Representation Learning using Citation-informed Transformers}},
  author={Arman Cohan and Sergey Feldman and Iz Beltagy and Doug Downey and Daniel S. Weld},
  booktitle={ACL},
  year={2020}
}
```

[SciRepEval paper](https://api.semanticscholar.org/CorpusID:254018137):

```bibtex
@article{Singh2022SciRepEvalAM,
  title={SciRepEval: A Multi-Format Benchmark for Scientific Document Representations},
  author={Amanpreet Singh and Mike D'Arcy and Arman Cohan and Doug Downey and Sergey Feldman},
  journal={ArXiv},
  year={2022},
  volume={abs/2211.13308}
}
```