AI-Ahmed committed
Commit 50e08e7
1 Parent(s): 624eb31

Update README.md

Files changed (1)
  1. README.md +38 -1
README.md CHANGED
@@ -12,6 +12,26 @@ models:
 metrics:
 - accuracy
 - loss
+model-index:
+- name: deberta-v3-base-funetuned-cls-qqa
+  results:
+  - task:
+      type: text-classification
+      name: Text Classification
+    dataset:
+      name: qqp
+      type: qqp
+      config: sst2
+      split: validation
+    metrics:
+    - name: Accuracy
+      type: accuracy
+      value: 0.917969
+      verified: true
+    - name: loss
+      type: loss
+      value: 0.217410
+      verified: true
 pipeline_tag: text-classification
 widget:
 - text: How is the life of a math student? Could you describe your own experiences? Which level of preparation is enough for the exam jlpt5?
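The model-index block added above records accuracy and loss on the qqp validation split. As a rough sketch (not part of this commit), those figures could be spot-checked with the `datasets` library. Feeding the two questions as one concatenated string mirrors the widget example; the exact input format used during fine-tuning and the label-id convention are assumptions here.

```python
# Hypothetical sketch, not from the commit: spot-checking the reported QQP
# validation accuracy on a small slice of the GLUE qqp validation split.
# Assumes the model's output ids follow the GLUE qqp labels
# (0 = not_duplicate, 1 = duplicate) and that question pairs are fed as one
# concatenated string, as in the widget example.
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "AI-Ahmed/deberta-v3-base-funetuned-cls-qqa"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Small slice for illustration only; the reported numbers use the full split.
dataset = load_dataset("glue", "qqp", split="validation[:200]")

correct = 0
for example in dataset:
    text = example["question1"] + " " + example["question2"]
    inputs = tokenizer(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    correct += int(logits.argmax(dim=-1).item() == example["label"])

print(f"accuracy on slice: {correct / len(dataset):.4f}")
```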
@@ -54,6 +74,23 @@ gradient_accumulation_steps=8
 
 - [wandb - deberta_qqa_classification](https://wandb.ai/ai-ahmed/deberta_qqa_classification?workspace=user-ai-ahmed)
 
+## Model Testing
+
+```python
+import torch
+from transformers import AutoTokenizer, AutoModelForSequenceClassification
+model_name = "AI-Ahmed/deberta-v3-base-funetuned-cls-qqa"
+tokenizer = AutoTokenizer.from_pretrained(model_name)
+model = AutoModelForSequenceClassification.from_pretrained(model_name)
+tokenized_input = tokenizer("How is the life of a math student? Could you describe your own experiences? Which level of preparation is enough for the exam jlpt5?", return_tensors="pt")
+
+with torch.no_grad():
+    logits = model(**tokenized_input).logits
+
+predicted_class_id = logits.argmax().item()
+model.config.id2label[predicted_class_id]
+```
+
 ## Information Citation
 
 ```bibtex
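The Model Testing snippet added above runs tokenization and a forward pass by hand. As a minimal alternative sketch (not part of this commit), the same single-example check can go through the `pipeline` helper, which bundles tokenization, inference, and the `id2label` mapping:

```python
# Hypothetical alternative to the committed snippet: the same single-example
# check through the high-level pipeline API.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AI-Ahmed/deberta-v3-base-funetuned-cls-qqa",
)
print(classifier(
    "How is the life of a math student? Could you describe your own "
    "experiences? Which level of preparation is enough for the exam jlpt5?"
))  # -> a list with one dict containing 'label' and 'score'
```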
@@ -65,4 +102,4 @@ booktitle={International Conference on Learning Representations},
 year={2021},
 url={https://openreview.net/forum?id=XPZIaotutsD}
 }
-```
+```