borough-oblast committed
Commit 6f8c091
1 Parent(s): 8a30a56

Training in progress epoch 0

Files changed (3)
  1. README.md +5 -7
  2. config.json +1 -1
  3. tf_model.h5 +1 -1
README.md CHANGED
@@ -1,6 +1,4 @@
  ---
- license: apache-2.0
- base_model: borough-oblast/distilbert-base-multilingual-cased-finetuned-en-de-2
  tags:
  - generated_from_keras_callback
  model-index:
@@ -13,10 +11,10 @@ probably proofread and complete it, then remove this comment. -->
 
  # borough-oblast/distilbert-base-multilingual-cased-finetuned-en-de-v4
 
- This model is a fine-tuned version of [borough-oblast/distilbert-base-multilingual-cased-finetuned-en-de-2](https://huggingface.co/borough-oblast/distilbert-base-multilingual-cased-finetuned-en-de-2) on an unknown dataset.
+ This model was trained from scratch on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Train Loss: 2.5904
- - Validation Loss: 2.7529
+ - Train Loss: 1.9530
+ - Validation Loss: 2.7814
  - Epoch: 0
 
  ## Model description
@@ -36,14 +34,14 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -876, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
+ - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'transformers.optimization_tf', 'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 123673, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}, 'registered_name': 'WarmUp'}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
  - training_precision: float32
 
  ### Training results
 
  | Train Loss | Validation Loss | Epoch |
  |:----------:|:---------------:|:-----:|
- | 2.5904 | 2.7529 | 0 |
+ | 1.9530 | 2.7814 | 0 |
 
 
  ### Framework versions
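The "+" side of the hyperparameters hunk records a serialized AdamWeightDecay optimizer whose learning rate is a WarmUp wrapper around a linear PolynomialDecay schedule. As a rough sketch (not the repo's actual training script), an equivalent optimizer can be rebuilt with transformers' `create_optimizer` helper; the numbers below are read off the new config, and the total-step split is an assumption based on transformers setting `decay_steps = num_train_steps - num_warmup_steps`, which is also why the removed hunk shows a negative `decay_steps` of -876 when the declared run was shorter than the 1000 warmup steps.

```python
# Sketch only: recreating an optimizer matching the serialized config above.
# Values are taken from the "+" hunk; the step split is an assumption.
from transformers import create_optimizer

num_warmup_steps = 1000                       # warmup_steps in the WarmUp wrapper
num_train_steps = 123673 + num_warmup_steps   # decay_steps + warmup (assumed split)

optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,                    # initial_learning_rate
    num_train_steps=num_train_steps,
    num_warmup_steps=num_warmup_steps,
    weight_decay_rate=0.01,          # AdamWeightDecay weight_decay_rate
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    power=1.0,                       # linear decay (PolynomialDecay, power=1.0)
)
```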
config.json CHANGED
@@ -1,5 +1,5 @@
  {
- "_name_or_path": "borough-oblast/distilbert-base-multilingual-cased-finetuned-en-de-2",
+ "_name_or_path": "borough-oblast/distilbert-base-multilingual-cased-finetuned-en-de-v4",
  "activation": "gelu",
  "architectures": [
  "DistilBertForMaskedLM"
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5695f546b359cced1e4eaa0f53944185597af60ef1fe204f12b1708d0ee444be
+ oid sha256:ae3d07fec51e1b1442d0457fe9598a72e4d84cfdba001c5a2d44c828073b9854
  size 910749348
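tf_model.h5 is stored with Git LFS, so the commit only touches the pointer file: the sha256 oid changes while the size stays at 910749348 bytes. A minimal sketch for checking a locally downloaded weight file against this pointer; the local path is an assumption.

```python
# Sketch: verify a downloaded tf_model.h5 against the LFS pointer above.
import hashlib
import os

expected_oid = "ae3d07fec51e1b1442d0457fe9598a72e4d84cfdba001c5a2d44c828073b9854"
expected_size = 910749348
path = "tf_model.h5"  # assumed local path after `git lfs pull` or a hub download

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha.update(chunk)

assert os.path.getsize(path) == expected_size, "size mismatch"
assert sha.hexdigest() == expected_oid, "sha256 mismatch"
```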