Bogdan Kostić committed
Commit acb3364
1 Parent(s): c37f6a2

Add context encoder model

README.md ADDED

---
language: de
license: mit
thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg
tags:
- exbert
---

![bert_image](https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg)

## Overview
**Language model:** gbert-base-germandpr
**Language:** German
**Training data:** GermanDPR train set (~56 MB)
**Eval data:** GermanDPR test set (~6 MB)
**Infrastructure:** 4x V100 GPU
**Published:** Apr 26th, 2021

## Details
- We trained a dense passage retrieval model with two gbert-base models as encoders for questions and passages.
- The dataset is GermanDPR, a new German-language dataset that we hand-annotated and published [online](https://deepset.ai/germanquad).
- It comprises 9275 question/answer pairs in the training set and 1025 pairs in the test set.
Each pair comes with one positive context and three hard negative contexts.
- As the basis of the training data, we used our hand-annotated GermanQuAD dataset as positive samples and generated hard negative samples from the latest German Wikipedia dump (6GB of raw txt files).
- The data dump was cleaned with tailored scripts, leading to 2.8 million indexed passages from German Wikipedia.

See https://deepset.ai/germanquad for more details and the dataset download; a short sketch of the data layout follows below.
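
The dataset follows a DPR-style JSON layout: each entry holds a question together with its positive and hard negative contexts. Below is a minimal loading sketch; the field names (`question`, `positive_ctxs`, `hard_negative_ctxs`) follow the original DPR convention and the file name is hypothetical, so check the downloaded files for the exact keys.

```python
import json

# Inspect a GermanDPR-style training file (hypothetical file name;
# field names assume the original DPR JSON convention).
with open("GermanDPR_train.json", encoding="utf-8") as f:
    examples = json.load(f)

first = examples[0]
print(first["question"])
print(len(first["positive_ctxs"]), "positive context(s)")            # expected: 1
print(len(first["hard_negative_ctxs"]), "hard negative context(s)")  # expected: 3
for ctx in first["hard_negative_ctxs"]:
    print(ctx["title"], "->", ctx["text"][:80])
```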

## Hyperparameters
```
batch_size = 40
n_epochs = 20
num_training_steps = 4640
num_warmup_steps = 460
max_seq_len = 32 tokens for question encoder and 300 tokens for passage encoder
learning_rate = 1e-6
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
num_hard_negatives = 2
```
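
For orientation, `num_training_steps = 4640` is consistent with roughly 232 steps per epoch (9275 training pairs at `batch_size = 40`) over 20 epochs, and `num_warmup_steps = 460` is about 10% of that. The sketch below only illustrates how a `LinearWarmup` schedule ties these numbers together using `get_linear_schedule_with_warmup` from `transformers`; it is not the training script used for this model.

```python
import torch
from transformers import AutoModel, get_linear_schedule_with_warmup

# Illustration only: wiring the step counts above into a linear-warmup schedule.
model = AutoModel.from_pretrained("deepset/gbert-base")  # base encoder referenced in this repo's tokenizer config
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6)

scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=460,     # ~10% of all steps
    num_training_steps=4640,  # ~232 steps/epoch * 20 epochs
)

# In the training loop, each optimizer.step() is followed by scheduler.step().
```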

## Performance
During training, we monitored the in-batch average rank and the loss, and evaluated different batch sizes, numbers of epochs, and numbers of hard negatives on a dev set split from the train set.
The dev split contained 1030 question/answer pairs.
Even without thorough hyperparameter tuning, we observed stable learning: multiple restarts with different seeds produced very similar results.
Note that the in-batch average rank is influenced by the batch size and the number of hard negatives; a smaller number of hard negatives makes the task easier.
After fixing the hyperparameters, we trained the model on the full GermanDPR train set.

We further evaluated the retrieval performance of the trained model on the full German Wikipedia, using the GermanDPR test set as labels. To this end, we converted the GermanDPR test set to SQuAD format. The DPR model drastically outperforms the BM25 baseline with regard to recall@k.
![performancetable](https://lh3.google.com/u/0/d/1lX6G0cp4NTx1yUWs74LI0Gcs41sYy_Fb=w2880-h1578-iv1)

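Here, recall@k is the fraction of test questions for which at least one gold positive passage appears among the top-k retrieved passages. A minimal sketch of that metric with hypothetical passage ids (not the evaluation code behind the table above):

```python
def recall_at_k(retrieved, relevant, k):
    """retrieved: ranked passage-id lists, one per query; relevant: sets of gold ids."""
    hits = sum(
        1
        for ranked, gold in zip(retrieved, relevant)
        if any(pid in gold for pid in ranked[:k])
    )
    return hits / len(retrieved)

# Toy example: the first query is answered within the top 3, the second is not.
print(recall_at_k([["p1", "p7", "p3"], ["p9", "p2"]], [{"p3"}, {"p4"}], k=3))  # 0.5
```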

## Usage
### In Haystack
You can load the model in [Haystack](https://github.com/deepset-ai/haystack/) as a retriever for doing QA at scale:
```python
# Import path for Haystack 0.x; newer releases expose it as
# `from haystack.nodes import DensePassageRetriever`.
from haystack.retriever.dense import DensePassageRetriever

retriever = DensePassageRetriever(
    document_store=document_store,  # an already initialized DocumentStore holding the passages
    query_embedding_model="deepset/gbert-base-germandpr-question_encoder",
    passage_embedding_model="deepset/gbert-base-germandpr-ctx_encoder",
)
```

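Continuing the snippet above: before querying, the passage embeddings have to be computed with the context encoder and written to the document store (method names as in Haystack; `document_store` is assumed to already contain the raw passages):

```python
# Embed all indexed passages with the context encoder and store the vectors.
document_store.update_embeddings(retriever)

# Retrieve the top passages for a query with the question encoder.
results = retriever.retrieve(query="Wie heißt die Hauptstadt von Deutschland?", top_k=10)
for doc in results:
    print(doc)
```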

## Authors
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Risch: `julian.risch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`

## About us
![deepset logo](https://raw.githubusercontent.com/deepset-ai/FARM/master/docs/img/deepset_logo.png)
We bring NLP to the industry via open source!
Our focus: industry-specific language models & large-scale QA systems.

Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)

Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Website](https://deepset.ai)

By the way: [we're hiring!](https://apply.workable.com/deepset/)

config.json ADDED

{
  "_name_or_path": "../../gbert-base-germandpr/lm2/language_model.bin",
  "architectures": [
    "DPRContextEncoder"
  ],
  "attention_probs_dropout_prob": 0.1,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "language": "english",
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "dpr",
  "name": "DPRContextEncoder",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "projection_dim": 0,
  "revision": null,
  "transformers_version": "4.5.0",
  "type_vocab_size": 2,
  "vocab_size": 31102
}
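
Since the config above declares a `DPRContextEncoder` architecture, the weights in this repository can also be loaded directly with `transformers` to embed passages outside of Haystack. A minimal sketch, using the standard DPR classes from `transformers` and the repository id named in the README:

```python
import torch
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast

model_id = "deepset/gbert-base-germandpr-ctx_encoder"
tokenizer = DPRContextEncoderTokenizerFast.from_pretrained(model_id)
model = DPRContextEncoder.from_pretrained(model_id)

inputs = tokenizer(
    "Berlin ist die Hauptstadt der Bundesrepublik Deutschland.",
    return_tensors="pt",
)
with torch.no_grad():
    embedding = model(**inputs).pooler_output  # shape (1, 768), matching hidden_size in the config
print(embedding.shape)
```
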
pytorch_model.bin ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:81027779cf0b4e3fea7843b6513ba80b1220346f43ea6ce783292734d5f4496c
size 439804479

special_tokens_map.json ADDED

{"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}

tokenizer_config.json ADDED

{"do_lower_case": false, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": false, "max_len": 512, "special_tokens_map_file": null, "name_or_path": "deepset/gbert-base", "do_basic_tokenize": true, "never_split": null}

vocab.txt ADDED
The diff for this file is too large to render. See raw diff