RicardoRei committed on
Commit
ca2a08f
1 Parent(s): eb7385d

WMT20 model

Files changed (3)
  1. README.md +1 -3
  2. checkpoints/model.ckpt +2 -2
  3. hparams.yaml +7 -7
README.md CHANGED
@@ -103,9 +103,7 @@ tags:
 
 This is a [COMET](https://github.com/Unbabel/COMET) quality estimation model: It receives a source sentence and the respective translation and returns a score that reflects the quality of the translation.
 
-**NOTE:**
-- This model was recently replaced by an improved version [wmt22-cometkiwi-da](https://huggingface.co/Unbabel/wmt22-cometkiwi-da)
-- This model is equivalent as `wmt20-comet-qe-da-v2` from previous [COMET](https://github.com/Unbabel/COMET) versions (<2.0).
+**NOTE:** This model was recently replaced by an improved version [wmt22-cometkiwi-da](https://huggingface.co/Unbabel/wmt22-cometkiwi-da)
 
 # Paper
 
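For context, the README describes a reference-free (quality estimation) model that scores a source/translation pair. Below is a minimal usage sketch with the `unbabel-comet` Python package (>= 2.0); the repo id `Unbabel/wmt20-comet-qe-da`, the example sentences, and the exact attributes of the prediction object are assumptions based on the COMET documentation, not something this commit defines.

```python
# Minimal sketch, assuming unbabel-comet >= 2.0 is installed.
from comet import download_model, load_from_checkpoint

# Hypothetical repo id for this model on the Hugging Face Hub.
model_path = download_model("Unbabel/wmt20-comet-qe-da")
model = load_from_checkpoint(model_path)

# Quality estimation is reference-free: only source ("src") and translation ("mt") are required.
data = [
    {"src": "Dem Feuer konnte Einhalt geboten werden", "mt": "The fire could be stopped"},
]
output = model.predict(data, batch_size=8, gpus=0)
print(output.scores)        # one quality score per segment
print(output.system_score)  # corpus-level average
```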
checkpoints/model.ckpt CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:05d892bf4a3e34b9a4de239109387d43107b2a8c55ad34b73a929ca6c1ede24e
-size 2277497201
+oid sha256:0dc381dfa76e78607d95f3ff8245e1b7e7010252fda43e6163802f67eba95732
+size 2277430715
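The file above is a Git LFS pointer, so the commit only swaps the checkpoint's SHA-256 and size. A small sketch for verifying a downloaded checkpoint against the new pointer values is below; the local path is illustrative (e.g. after `git lfs pull`).

```python
# Sketch: verify the downloaded checkpoint against the LFS pointer in this commit.
import hashlib
import os

EXPECTED_OID = "0dc381dfa76e78607d95f3ff8245e1b7e7010252fda43e6163802f67eba95732"
EXPECTED_SIZE = 2277430715

path = "checkpoints/model.ckpt"  # hypothetical local path

digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"
assert digest.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("checkpoint matches the LFS pointer in this commit")
```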
hparams.yaml CHANGED
@@ -1,21 +1,21 @@
+# Training Seed 3
 activations: Tanh
-batch_size: 4
+batch_size: 2
 class_identifier: referenceless_regression_metric
 dropout: 0.1
 encoder_learning_rate: 1.0e-05
 encoder_model: XLM-RoBERTa
-final_activation: null
 hidden_sizes:
 - 2048
 - 1024
 keep_embeddings_frozen: true
 layer: mix
 layerwise_decay: 0.95
-learning_rate: 3.1e-05
+learning_rate: 3.0e-05
 load_weights_from_checkpoint: null
-nr_frozen_epochs: 0.3
-optimizer: AdamW
+optimizer: Adam
 pool: avg
 pretrained_model: xlm-roberta-large
-train_data: data/scores-1719.csv
-validation_data: data/2020-mqm.csv
+train_data: data/scores_1719.csv
+validation_data: data/scores_1719.csv
+final_activation: "Sigmoid"
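The keys changed in this commit (batch size, learning rate, optimizer, data paths, final activation) can be checked directly from the updated `hparams.yaml`. A rough sketch is below; the repo id is an assumption, only the filename and the listed keys come from this diff.

```python
# Sketch: download and inspect the hparams.yaml updated by this commit.
import yaml
from huggingface_hub import hf_hub_download

# Hypothetical repo id for this model.
path = hf_hub_download(repo_id="Unbabel/wmt20-comet-qe-da", filename="hparams.yaml")
with open(path) as f:
    hparams = yaml.safe_load(f)

# Keys touched by this commit.
for key in ("batch_size", "learning_rate", "optimizer",
            "final_activation", "train_data", "validation_data"):
    print(key, "=", hparams.get(key))
```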