julianrisch committed on
Commit
9a9bd26
1 Parent(s): 03aa555

Update README.md

Browse files
Files changed (1): README.md +73 -13
README.md CHANGED
@@ -8,13 +8,14 @@ tags:
  - exbert
  ---
 
- ![bert_image](https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg)
 
  ## Overview
  **Language model:** gelectra-base-germanquad-distilled
  **Language:** German
  **Training data:** GermanQuAD train set (~ 12MB)
  **Eval data:** GermanQuAD test set (~ 5MB)
  **Infrastructure**: 1x V100 GPU
  **Published**: Apr 21st, 2021

@@ -37,6 +38,52 @@ embeds_dropout_prob = 0.1
  temperature = 2
  distillation_loss_weight = 0.75
  ```
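The two hyperparameters above control the knowledge-distillation objective: `temperature` softens the teacher's output distribution, and `distillation_loss_weight` balances the teacher-matching term against ordinary cross-entropy on the gold labels. A minimal pure-Python sketch of this standard combination, assuming the usual KL-plus-cross-entropy formulation; function and variable names are illustrative, not the model's actual training code:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, gold_index,
                      temperature=2.0, distillation_loss_weight=0.75):
    """Weighted sum of a soft (teacher-matching) term and a hard
    (gold-label cross-entropy) term, as in standard knowledge distillation."""
    student_soft = softmax(student_logits, temperature)
    teacher_soft = softmax(teacher_logits, temperature)
    # KL(teacher || student) on the softened distributions, scaled by T^2
    # to keep gradient magnitudes comparable across temperatures
    soft_loss = sum(t * (math.log(t) - math.log(s))
                    for t, s in zip(teacher_soft, student_soft)) * temperature ** 2
    # Ordinary cross-entropy against the gold (e.g. answer start/end) position
    hard_loss = -math.log(softmax(student_logits)[gold_index])
    return (distillation_loss_weight * soft_loss
            + (1 - distillation_loss_weight) * hard_loss)

loss = distillation_loss([2.0, 0.5, -1.0], [1.8, 0.7, -0.9], gold_index=0)
```

With `distillation_loss_weight = 0.75`, three quarters of the signal comes from matching the larger teacher model and one quarter from the GermanQuAD labels themselves.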
  ## Performance
  We evaluated the extractive question answering performance on our GermanQuAD test set.
  Model types and training data are included in the model name.

@@ -54,18 +101,31 @@ The human baseline was computed for the 3-way test set by taking one answer as p
  - Julian Risch: `julian.risch [at] deepset.ai`
  - Malte Pietsch: `malte.pietsch [at] deepset.ai`
  - Michel Bartels: `michel.bartels [at] deepset.ai`

  ## About us
- ![deepset logo](https://workablehr.s3.amazonaws.com/uploads/account/logo/476306/logo)
- We bring NLP to the industry via open source!
- Our focus: Industry specific language models & large scale QA systems.
-
- Some of our work:
- - [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- - [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- - [FARM](https://github.com/deepset-ai/FARM)
- - [Haystack](https://github.com/deepset-ai/haystack/)
-
- Get in touch:
- [Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Slack](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)

  By the way: [we're hiring!](http://www.deepset.ai/jobs)
 
  - exbert
  ---
 
+ # gelectra-base distilled for Extractive QA
 
  ## Overview
  **Language model:** gelectra-base-germanquad-distilled
  **Language:** German
  **Training data:** GermanQuAD train set (~ 12MB)
  **Eval data:** GermanQuAD test set (~ 5MB)
+ **Code:** See [an example extractive QA pipeline built with Haystack](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline)
  **Infrastructure**: 1x V100 GPU
  **Published**: Apr 21st, 2021

  temperature = 2
  distillation_loss_weight = 0.75
  ```
+
+
+ ## Usage
+
+ ### In Haystack
+ Haystack is an AI orchestration framework to build customizable, production-ready LLM applications. You can use this model in Haystack to do extractive question answering on documents.
+ To load and run the model with [Haystack](https://github.com/deepset-ai/haystack/):
+ ```python
+ # After running pip install haystack-ai "transformers[torch,sentencepiece]"
+
+ from haystack import Document
+ from haystack.components.readers import ExtractiveReader
+
+ docs = [
+     Document(content="Python is a popular programming language"),
+     Document(content="python ist eine beliebte Programmiersprache"),
+ ]
+
+ reader = ExtractiveReader(model="deepset/gelectra-base-germanquad-distilled")
+ reader.warm_up()
+
+ question = "What is a popular programming language?"
+ result = reader.run(query=question, documents=docs)
+ # {'answers': [ExtractedAnswer(query='What is a popular programming language?', score=0.5740374326705933, data='python', document=Document(id=..., content: '...'), context=None, document_offset=ExtractedAnswer.Span(start=0, end=6),...)]}
+ ```
+ For a complete example with an extractive question answering pipeline that scales over many documents, check out the [corresponding Haystack tutorial](https://haystack.deepset.ai/tutorials/34_extractive_qa_pipeline).
+
+ ### In Transformers
+ ```python
+ from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
+
+ model_name = "deepset/gelectra-base-germanquad-distilled"
+
+ # a) Get predictions
+ nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
+ QA_input = {
+     'question': 'Why is model conversion important?',
+     'context': 'The option to convert models between FARM and transformers gives freedom to the user and lets people easily switch between frameworks.'
+ }
+ res = nlp(QA_input)
+
+ # b) Load model & tokenizer
+ model = AutoModelForQuestionAnswering.from_pretrained(model_name)
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ ```
+
  ## Performance
  We evaluated the extractive question answering performance on our GermanQuAD test set.
  Model types and training data are included in the model name.
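Extractive QA on GermanQuAD is typically scored with SQuAD-style Exact Match and F1. A minimal sketch of those two metrics, assuming simple lowercasing and whitespace tokenization; the official evaluation script additionally normalizes punctuation and articles, so treat this as an illustration rather than the reference implementation:

```python
from collections import Counter

def exact_match(prediction: str, ground_truth: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(prediction.lower().strip() == ground_truth.lower().strip())

def f1_score(prediction: str, ground_truth: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer span."""
    pred_tokens = prediction.lower().split()
    gold_tokens = ground_truth.lower().split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both answers
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

em = exact_match("python", "Python")               # 1.0
f1 = f1_score("the language python", "python")     # 0.5
```

Corpus-level scores are the averages of these per-question values, with the GermanQuAD test set taking the maximum over its multiple gold answers per question.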
 
  - Julian Risch: `julian.risch [at] deepset.ai`
  - Malte Pietsch: `malte.pietsch [at] deepset.ai`
  - Michel Bartels: `michel.bartels [at] deepset.ai`
+
  ## About us
+
+ <div class="grid lg:grid-cols-2 gap-x-4 gap-y-3">
+ <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
+ <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/deepset-logo-colored.png" class="w-40"/>
+ </div>
+ <div class="w-full h-40 object-cover mb-2 rounded-lg flex items-center justify-center">
+ <img alt="" src="https://raw.githubusercontent.com/deepset-ai/.github/main/haystack-logo-colored.png" class="w-40"/>
+ </div>
+ </div>
+
+ [deepset](http://deepset.ai/) is the company behind the production-ready open-source AI framework [Haystack](https://haystack.deepset.ai/).
+
+ Some of our other work:
+ - [Distilled roberta-base-squad2 (aka "tinyroberta-squad2")](https://huggingface.co/deepset/tinyroberta-squad2)
+ - [German BERT](https://deepset.ai/german-bert), [GermanQuAD and GermanDPR](https://deepset.ai/germanquad), [German embedding model](https://huggingface.co/mixedbread-ai/deepset-mxbai-embed-de-large-v1)
+ - [deepset Cloud](https://www.deepset.ai/deepset-cloud-product), [deepset Studio](https://www.deepset.ai/deepset-studio)
+
+ ## Get in touch and join the Haystack community
+
+ <p>For more info on Haystack, visit our <strong><a href="https://github.com/deepset-ai/haystack">GitHub</a></strong> repo and <strong><a href="https://docs.haystack.deepset.ai">Documentation</a></strong>.
+
+ We also have a <strong><a class="h-7" href="https://haystack.deepset.ai/community">Discord community open to everyone!</a></strong></p>
+
+ [Twitter](https://twitter.com/Haystack_AI) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://haystack.deepset.ai/) | [YouTube](https://www.youtube.com/@deepset_ai)

  By the way: [we're hiring!](http://www.deepset.ai/jobs)