Update README.md
README.md CHANGED
@@ -8,7 +8,7 @@ tags:
 - zero-shot-classification
 - debarta-v3
 model-index:
-- name:
+- name: Softechlb/Sent_analysis_CVs
 results: []
 datasets:
 - tyqiangz/multilingual-sentiments
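For a quick look at the distillation corpus named in the metadata above, the `datasets` library can pull it directly. A minimal sketch, with the caveat that the `all` config name and the column layout are assumptions, not taken from this diff:

```python
# Hypothetical peek at the dataset listed in the card metadata.
from datasets import load_dataset

# "all" is assumed to be the combined-languages config; per-language
# configs (e.g. "english", "japanese") may also exist on the Hub.
ds = load_dataset("tyqiangz/multilingual-sentiments", "all", split="train")
print(ds[0])  # assumed shape: {"text": ..., "label": ...}
```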
@@ -30,7 +30,7 @@ language:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-#
+# Softechlb/Sent_analysis_CVs
 
 This model is distilled from the zero-shot classification pipeline on the Multilingual Sentiment
 dataset using this [script](https://github.com/huggingface/transformers/tree/main/examples/research_projects/zero-shot-distillation).
@@ -50,7 +50,7 @@ but we'll pretend and ignore the annotations for the sake of example.
 from transformers import pipeline
 
 distilled_student_sentiment_classifier = pipeline(
-    model="
+    model="Softechlb/Sent_analysis_CVs",
     return_all_scores=True
 )
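Putting the changed line back into its surrounding context, the updated usage snippet assembles into something like the following; only the model id and `return_all_scores=True` come from this diff, and the sample sentence is illustrative:

```python
from transformers import pipeline

# Model id as updated in this diff; the task is inferred from the checkpoint.
distilled_student_sentiment_classifier = pipeline(
    model="Softechlb/Sent_analysis_CVs",
    return_all_scores=True,
)

# Illustrative input; the card's own examples include multilingual text.
print(distilled_student_sentiment_classifier(
    "I love this movie and would watch it again!"
))
```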
@@ -75,43 +75,6 @@ distilled_student_sentiment_classifier("私はこの映画が大好きで、何
 
 ```
 
-
-## Training procedure
-
-Notebook link: [here](https://github.com/LxYuan0420/nlp/blob/main/notebooks/Distilling_Zero_Shot_multilingual_distilbert_sentiments_student.ipynb)
-
-### Training hyperparameters
-
-Results can be reproduced using the following commands:
-
-```bash
-python transformers/examples/research_projects/zero-shot-distillation/distill_classifier.py \
---data_file ./multilingual-sentiments/train_unlabeled.txt \
---class_names_file ./multilingual-sentiments/class_names.txt \
---hypothesis_template "The sentiment of this text is {}." \
---teacher_name_or_path MoritzLaurer/mDeBERTa-v3-base-mnli-xnli \
---teacher_batch_size 32 \
---student_name_or_path distilbert-base-multilingual-cased \
---output_dir ./distilbert-base-multilingual-cased-sentiments-student \
---per_device_train_batch_size 16 \
---fp16
-```
-
-If you are training this model on Colab, make the following code changes to avoid an out-of-memory error:
-```bash
-###### modify L78 to disable fast tokenizer
-default=False,
-
-###### update dataset map part at L313
-dataset = dataset.map(tokenizer, input_columns="text", fn_kwargs={"padding": "max_length", "truncation": True, "max_length": 512})
-
-###### add following lines to L213
-del model
-print(f"Manually deleted Teacher model, free some memory for student model.")
-
-###### add following lines to L337
-trainer.push_to_hub()
-tokenizer.push_to_hub("distilbert-base-multilingual-cased-sentiments-student")
 
 ```
 
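The removed Colab notes amount to freeing the teacher model before the student is loaded. Here is a sketch of that workaround as it would sit inside `distill_classifier.py`; `gc.collect()` and `empty_cache()` are common additions here, not part of the original snippet:

```python
import gc

import torch

# Context: after teacher predictions have been collected in
# distill_classifier.py; `model` is the teacher (name per the removed snippet).
del model
gc.collect()  # drop lingering references before asking CUDA to release memory
if torch.cuda.is_available():
    torch.cuda.empty_cache()  # return freed blocks to the driver
print("Manually deleted teacher model, freeing memory for the student model.")
```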