Ihor committed on
Commit 9bfc5e4
1 Parent(s): a46bb62

add more use cases

Files changed (1)
  1. README.md +99 -1
README.md CHANGED
@@ -26,7 +26,11 @@ This is a model based on [DeBERTaV3-base](https://huggingface.co/microsoft/deber
 
 It demonstrates better quality on the diverse set of text classification datasets in a zero-shot setting than [Bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) while being almost 3 times smaller.
 
- Moreover, the model can be used for multiple information extraction tasks in zero-shot setting, including:
 * Named-entity recognition;
 * Relation extraction;
 * Entity linking;
@@ -99,6 +103,100 @@ Below, you can see the F1 score on several text classification datasets. All tes
 | [Comprehendo (184M)](https://huggingface.co/knowledgator/comprehend_it-base) | 0.90 | 0.7982 | 0.5660 |
 | SetFit [BAAI/bge-small-en-v1.5 (33.4M)](https://huggingface.co/BAAI/bge-small-en-v1.5) | 0.86 | 0.5636 | 0.5754 |
 
 ### Future reading
 Check our blogpost - ["The new milestone in zero-shot capabilities (it’s not Generative AI)."](https://medium.com/p/9b5a081fbf27), where we highlighted possible use-cases of the model and why next-token prediction is not the only way to achieve amazing zero-shot capabilities.
 While most of the AI industry is focused on generative AI and decoder-based models, we are committed to developing encoder-based models.
 
 
 It demonstrates better quality on the diverse set of text classification datasets in a zero-shot setting than [Bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) while being almost 3 times smaller.
 
+ Moreover, the model can be used for multiple information extraction tasks in a zero-shot setting.
+ 
+ Possible use cases of the model:
+ * Text classification;
+ * Reranking of search results;
 * Named-entity recognition;
 * Relation extraction;
 * Entity linking;
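Of the use cases listed above, reranking of search results is not illustrated later in the file. A minimal sketch, assuming `classifier` follows the zero-shot classification pipeline interface used in this README; `rerank` is a hypothetical helper, not part of the model's API:

```python
def rerank(query, documents, classifier):
    """Hypothetical helper: order documents by the score the zero-shot
    classifier assigns to the query for each document."""
    scored = [(classifier(doc, [query])["scores"][0], doc) for doc in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for _, doc in scored]
```

Scoring each document with a separate pipeline call costs one model invocation per document; for large candidate sets, batching the query-document pairs would be the practical choice.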
 | [Comprehendo (184M)](https://huggingface.co/knowledgator/comprehend_it-base) | 0.90 | 0.7982 | 0.5660 |
 | SetFit [BAAI/bge-small-en-v1.5 (33.4M)](https://huggingface.co/BAAI/bge-small-en-v1.5) | 0.86 | 0.5636 | 0.5754 |
 
+ ### Alternative usage
+ Besides text classification, the model can be used for many other information extraction tasks.
+ 
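The snippets in this section call `classifier` without defining it. A minimal sketch of how such a pipeline could be instantiated, assuming this repository's model id; `build_classifier` is a hypothetical helper:

```python
def build_classifier(model_id="knowledgator/comprehend_it-base"):
    """Create the zero-shot classification pipeline used in the snippets.
    Assumption: the model id is this repository's id.
    Note: downloads the model weights on first call."""
    from transformers import pipeline  # requires `pip install transformers`
    return pipeline("zero-shot-classification", model=model_id)
```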
+ **Question-answering**
+ 
+ The model can be used for open question-answering as well as reading-comprehension tasks whenever the task can be transformed into a multi-choice Q&A.
+ ```python
+ # open question-answering
+ question = "What is the capital city of Ukraine?"
+ candidate_answers = ['Kyiv', 'London', 'Berlin', 'Warsaw']
+ classifier(question, candidate_answers)
+ 
+ # 'labels': ['Kyiv', 'Warsaw', 'London', 'Berlin'],
+ # 'scores': [0.8633171916007996,
+ #  0.11328165978193283,
+ #  0.012766502797603607,
+ #  0.010634596459567547]
+ ```
+ 
+ ```python
+ # reading comprehension
+ question = 'In what country is Normandy located?'
+ text = 'The Normans (Norman: Nourmands; French: Normands; Latin: Normanni) were the people who in the 10th and 11th centuries gave their name to Normandy, a region in France. They were descended from Norse ("Norman" comes from "Norseman") raiders and pirates from Denmark, Iceland and Norway who, under their leader Rollo, agreed to swear fealty to King Charles III of West Francia. Through generations of assimilation and mixing with the native Frankish and Roman-Gaulish populations, their descendants would gradually merge with the Carolingian-based cultures of West Francia. The distinct cultural and ethnic identity of the Normans emerged initially in the first half of the 10th century, and it continued to evolve over the succeeding centuries.'
+ input_ = f"{question}\n{text}"
+ 
+ candidate_answers = ['Denmark', 'Iceland', 'France', 'Norway']
+ 
+ classifier(input_, candidate_answers)
+ 
+ # 'labels': ['France', 'Iceland', 'Norway', 'Denmark'],
+ # 'scores': [0.9102861285209656,
+ #  0.03861876204609871,
+ #  0.028696594759821892,
+ #  0.02239849977195263]
+ ```
+ 
+ ```python
+ # binary question-answering
+ question = "Does drug development regulation become more aligned with modern technologies and trends, choose yes or no?"
+ text = "Drug development has become unbearably slow and expensive. A key underlying problem is the clinical prediction challenge: the inability to predict which drug candidates will be safe in the human body and for whom. Recently, a dramatic regulatory change has removed FDA's mandated reliance on antiquated, ineffective animal studies. A new frontier is an integration of several disruptive technologies [machine learning (ML), patient-on-chip, real-time sensing, and stem cells], which, when integrated, have the potential to address this challenge, drastically cutting the time and cost of developing drugs, and tailoring them to individual patients."
+ input_ = f"{question}\n{text}"
+ 
+ candidate_answers = ['yes', 'no']
+ 
+ classifier(input_, candidate_answers)
+ 
+ # 'labels': ['yes', 'no'],
+ # 'scores': [0.5876278281211853, 0.4123721718788147]
+ ```
+ 
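The pipeline returns labels sorted by descending score, so a downstream decision can be a simple threshold check over the top entry. `top_label` below is a hypothetical helper, not part of the model's API:

```python
def top_label(result, threshold=0.5):
    """Return the highest-scoring label if it clears the threshold, else None."""
    label, score = result["labels"][0], result["scores"][0]
    return label if score >= threshold else None

# Shape of the zero-shot pipeline output, as in the binary example above.
result = {"labels": ["yes", "no"], "scores": [0.5876, 0.4124]}
print(top_label(result))        # yes
print(top_label(result, 0.75))  # None
```

For close calls like the 0.59/0.41 split above, a stricter threshold trades coverage for precision.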
+ **Named-entity classification and disambiguation**
+ 
+ The model can be used to classify named entities or disambiguate similar ones. It can also serve as a reranking component in entity-linking systems.
+ ```python
+ text = """Knowledgator is an open-source ML research organization focused on advancing the information extraction field."""
+ 
+ candidate_labels = ['Knowledgator - company',
+                     'Knowledgator - product',
+                     'Knowledgator - city']
+ 
+ classifier(text, candidate_labels)
+ 
+ # 'labels': ['Knowledgator - company',
+ #  'Knowledgator - product',
+ #  'Knowledgator - city'],
+ # 'scores': [0.887371301651001, 0.097423255443573, 0.015205471776425838]
+ ```
+ 
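The `'<mention> - <type>'` labels above can be generated from a list of candidate types; `type_labels` is a hypothetical helper mirroring that format:

```python
def type_labels(mention, types):
    """Build '<mention> - <type>' candidate labels for entity disambiguation."""
    return [f"{mention} - {t}" for t in types]

print(type_labels("Knowledgator", ["company", "product", "city"]))
# ['Knowledgator - company', 'Knowledgator - product', 'Knowledgator - city']
```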
+ **Relation classification**
+ 
+ Following the same principle, the model can be used to classify relations expressed in a text.
+ ```python
+ text = """The FKBP5 gene codifies a co-chaperone protein associated with the modulation of glucocorticoid receptor interaction involved in the adaptive stress response. The FKBP5 intracellular concentration affects the binding affinity of the glucocorticoid receptor (GR) to glucocorticoids (GCs). This gene has glucocorticoid response elements (GRES) located in introns 2, 5 and 7, which affect its expression. Recent studies have examined GRE activity and the effects of genetic variants on transcript efficiency and their contribution to susceptibility to behavioral disorders. Epigenetic changes and environmental factors can influence the effects of these allele-specific variants, impacting the response to GCs of the FKBP5 gene. The main epigenetic mark investigated in FKBP5 intronic regions is DNA methylation, however, few studies have been performed for all GRES located in these regions. One of the major findings was the association of low DNA methylation levels in the intron 7 of FKBP5 in patients with psychiatric disorders. To date, there are no reports of DNA methylation in introns 2 and 5 of the gene associated with diagnoses of psychiatric disorders. This review highlights what has been discovered so far about the relationship between polymorphisms and epigenetic targets in intragenic regions, and reveals the gaps that need to be explored, mainly concerning the role of DNA methylation in these regions and how it acts in psychiatric disease susceptibility."""
+ 
+ candidate_labels = ['FKBP5 - associated with -> PTSD',
+                     'FKBP5 - has no effect on -> PTSD',
+                     'FKBP5 - is similar to -> PTSD',
+                     'FKBP5 - inhibitor of -> PTSD',
+                     'FKBP5 - ancestor of -> PTSD']
+ 
+ classifier(text, candidate_labels)
+ 
+ # 'labels': ['FKBP5 - associated with -> PTSD',
+ #  'FKBP5 - is similar to -> PTSD',
+ #  'FKBP5 - has no effect on -> PTSD',
+ #  'FKBP5 - ancestor of -> PTSD',
+ #  'FKBP5 - inhibitor of -> PTSD'],
+ # 'scores': [0.5880666971206665,
+ #  0.17369700968265533,
+ #  0.14067059755325317,
+ #  0.05044548586010933,
+ #  0.04712018370628357]
+ ```
+ 
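The relation labels above follow a `'<head> - <relation> -> <tail>'` template, so candidate triples can be verbalized programmatically; `relation_labels` is a hypothetical helper:

```python
def relation_labels(head, tail, relations):
    """Verbalize candidate relations between two entities into label strings."""
    return [f"{head} - {rel} -> {tail}" for rel in relations]

print(relation_labels("FKBP5", "PTSD", ["associated with", "has no effect on"]))
# ['FKBP5 - associated with -> PTSD', 'FKBP5 - has no effect on -> PTSD']
```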
 ### Future reading
 Check our blogpost - ["The new milestone in zero-shot capabilities (it’s not Generative AI)."](https://medium.com/p/9b5a081fbf27), where we highlighted possible use-cases of the model and why next-token prediction is not the only way to achieve amazing zero-shot capabilities.
 While most of the AI industry is focused on generative AI and decoder-based models, we are committed to developing encoder-based models.