buruzaemon committed on
Commit 971db40
1 Parent(s): aab8e71

update model card README.md

Files changed (1)
  1. README.md +2 -6
README.md CHANGED
@@ -34,7 +34,7 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-This is an initial example of knowledge-distillation where the student loss is all cross-entropy loss \\(L_{CE}\\) of the ground-truth labels and none of the distillation loss \\(L_{KD}\\).
+More information needed
 
 ## Intended uses & limitations
 
@@ -42,17 +42,13 @@ More information needed
 
 ## Training and evaluation data
 
-The training and evaluation data come straight from the `train` and `validation` splits in the clinc_oos dataset, respectively; and tokenized using the `distilbert-base-uncased` tokenization.
+More information needed
 
 ## Training procedure
 
-Please see page 224 in Chapter 8: Making Transformers Efficient in Production, Natural Language Processing with Transformers, May 2022.
-
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- alpha: 1.0
-- temperature: 2.0
 - learning_rate: 2e-05
 - train_batch_size: 48
 - eval_batch_size: 48
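For context on the description removed above: it refers to the standard weighted student objective \\(L = \alpha\,L_{CE} + (1 - \alpha)\,L_{KD}\\), so the removed hyperparameters (alpha: 1.0, temperature: 2.0) amount to training the student on cross-entropy alone. Below is a minimal sketch of that weighting; the `DistillationTrainer` class, its argument names, and the separate teacher model are illustrative assumptions, not something this commit documents.

```python
import torch
import torch.nn.functional as F
from transformers import Trainer

class DistillationTrainer(Trainer):
    """Sketch of a trainer whose loss is alpha * L_CE + (1 - alpha) * L_KD."""

    def __init__(self, *args, teacher_model=None, alpha=1.0, temperature=2.0, **kwargs):
        super().__init__(*args, **kwargs)
        self.teacher_model = teacher_model  # assumed to sit on the same device as the student
        self.alpha = alpha
        self.temperature = temperature

    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        # Cross-entropy on the ground-truth labels (requires "labels" in inputs)
        outputs_student = model(**inputs)
        loss_ce = outputs_student.loss

        # Distillation term: KL divergence between temperature-softened
        # teacher and student distributions, scaled by T^2
        with torch.no_grad():
            outputs_teacher = self.teacher_model(**inputs)
        T = self.temperature
        loss_kd = T**2 * F.kl_div(
            F.log_softmax(outputs_student.logits / T, dim=-1),
            F.softmax(outputs_teacher.logits / T, dim=-1),
            reduction="batchmean",
        )

        # With alpha = 1.0 (the removed hyperparameter) the KD term vanishes
        # and the student is trained on cross-entropy alone.
        loss = self.alpha * loss_ce + (1.0 - self.alpha) * loss_kd
        return (loss, outputs_student) if return_outputs else loss
```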
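The removed data description similarly corresponds to loading the `train` and `validation` splits of clinc_oos and tokenizing them with the `distilbert-base-uncased` tokenizer. A minimal sketch, assuming the `plus` configuration and the dataset's `text`/`intent` columns:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# The "plus" configuration is an assumption; clinc_oos also ships
# "small" and "imbalanced" configurations.
clinc = load_dataset("clinc_oos", "plus")

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # clinc_oos stores each utterance in the "text" column
    return tokenizer(batch["text"], truncation=True)

clinc_tokenized = clinc.map(tokenize, batched=True, remove_columns=["text"])
# The Trainer expects a "labels" column; clinc_oos calls it "intent"
clinc_tokenized = clinc_tokenized.rename_column("intent", "labels")

train_dataset = clinc_tokenized["train"]
eval_dataset = clinc_tokenized["validation"]
```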