MoritzLaurer committed
Commit ef7c556
Parent: 49c9969

Update README.md

Files changed (1): README.md (+4, -4)
README.md CHANGED
````diff
@@ -15,7 +15,8 @@ language:
 - tr
 - ur
 - vu
-- zh
+- zh
+license: mit
 tags:
 - zero-shot-classification
 - text-classification
@@ -42,6 +43,7 @@ As of December 2021, mDeBERTa-base is the best performing multilingual base-size
 ```python
 from transformers import AutoTokenizer, AutoModelForSequenceClassification
 import torch
+device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
 
 model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
@@ -53,10 +55,8 @@ hypothesis = "Emmanuel Macron is the President of France"
 input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
 output = model(input["input_ids"].to(device)) # device = "cuda:0" or "cpu"
 prediction = torch.softmax(output["logits"][0], -1).tolist()
-
 label_names = ["entailment", "neutral", "contradiction"]
 prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
-
 print(prediction)
 ```
 
@@ -87,7 +87,7 @@ average | ar | bg | de | el | en | es | fr | hi | ru | sw | th | tr | ur | vu |
 ## Limitations and bias
 Please consult the original DeBERTa-V3 paper and literature on different NLI datasets for potential biases.
 
-## BibTeX entry and citation info
+## Citation
 If you use this model, please cite: Laurer, Moritz, Wouter van Atteveldt, Andreu Salleras Casas, and Kasper Welbers. 2022. ‘Less Annotating, More Classifying – Addressing the Data Scarcity Issue of Supervised Machine Learning with Deep Transfer Learning and BERT - NLI’. Preprint, June. Open Science Framework. https://osf.io/74b8k.
 
 ## Ideas for cooperation or questions?
````
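For reference, a sketch of the README's usage snippet as it reads after this commit, with the hunks assembled in order. The lines between the hunks are not visible in the diff, so the `premise` value, the `AutoModelForSequenceClassification.from_pretrained(...)` call, and moving the model to `device` are filled in as assumptions, not taken from the commit:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Added in this commit: use the GPU when available, otherwise fall back to CPU.
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/mDeBERTa-v3-base-mnli-xnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Not shown in the diff; assumed standard loading for this checkpoint,
# moved to `device` so it matches the inputs below.
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

# The premise value is not visible in the diff; this one is a placeholder.
premise = "Angela Merkel is a politician in Germany and leader of the CDU"
hypothesis = "Emmanuel Macron is the President of France"

input = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(input["input_ids"].to(device))  # device = "cuda:0" or "cpu"
prediction = torch.softmax(output["logits"][0], -1).tolist()
label_names = ["entailment", "neutral", "contradiction"]
prediction = {name: round(float(pred) * 100, 1) for pred, name in zip(prediction, label_names)}
print(prediction)
```

The commit's effective change is small: the device is now selected once at the top instead of being left to the reader (the inline `# device = "cuda:0" or "cpu"` comment survives from the earlier version), and the metadata gains `license: mit` alongside the `zh` language tag.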