zongxiao committed on
Commit 9110c55
1 Parent(s): f172a29

End of training

Files changed (3)
  1. README.md +63 -0
  2. generation_config.json +9 -0
  3. pytorch_model.bin +1 -1
README.md ADDED
@@ -0,0 +1,63 @@
+ ---
+ language:
+ - zh
+ license: mit
+ base_model: GCYY/speecht5_finetuned_fleurs_zh
+ tags:
+ - text-to-speech
+ - generated_from_trainer
+ datasets:
+ - google/fleurs
+ model-index:
+ - name: SpeechT5 TTS Chinese
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # SpeechT5 TTS Chinese
+
+ This model is a fine-tuned version of [GCYY/speecht5_finetuned_fleurs_zh](https://huggingface.co/GCYY/speecht5_finetuned_fleurs_zh) on the fleurs dataset.
+ It achieves the following results on the evaluation set:
+ - eval_loss: 0.3956
+ - eval_runtime: 36.5499
+ - eval_samples_per_second: 12.586
+ - eval_steps_per_second: 1.587
+ - epoch: 15.44
+ - step: 2000
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-06
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 32
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 500
+ - training_steps: 4000
+
+ ### Framework versions
+
+ - Transformers 4.33.3
+ - Pytorch 2.0.1
+ - Datasets 2.14.5
+ - Tokenizers 0.13.3
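The hyperparameter list in the card maps directly onto Transformers training arguments. Below is a minimal sketch, not taken from this commit: the `output_dir` is a placeholder, `Seq2SeqTrainingArguments` is one plausible way to express the setup, and the Adam betas and epsilon shown in the card are the library defaults, so they need no explicit arguments.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the training configuration described in the card above.
# output_dir is hypothetical; adam betas/epsilon match library defaults.
training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts_chinese",  # placeholder, not recorded in this commit
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,      # 8 x 4 = total train batch size of 32
    learning_rate=5e-6,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    seed=42,
)
```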
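Since the card's usage section is still a stub, here is a minimal inference sketch for a SpeechT5 TTS checkpoint like this one. It assumes a local checkout of this repo (`"."` stands in for the repo id, which the commit does not name), the standard `microsoft/speecht5_hifigan` vocoder, and a speaker x-vector from the CMU ARCTIC set as a stand-in (a Mandarin x-vector from the training data would be the better choice):

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

# "." is a placeholder for this repo's id on the Hub.
processor = SpeechT5Processor.from_pretrained(".")
model = SpeechT5ForTextToSpeech.from_pretrained(".")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# Stand-in 512-dim speaker embedding; index 7306 is arbitrary.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="你好,世界", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)
```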
generation_config.json ADDED
@@ -0,0 +1,9 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 0,
+   "decoder_start_token_id": 2,
+   "eos_token_id": 2,
+   "max_length": 1876,
+   "pad_token_id": 1,
+   "transformers_version": "4.33.3"
+ }
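These values are picked up automatically when the checkpoint is loaded, and the file can also be inspected directly through `GenerationConfig`. A minimal sketch, assuming a local checkout of the repo (`"."` is a placeholder path):

```python
from transformers import GenerationConfig

# Reads the generation_config.json added in this commit from a local checkout.
cfg = GenerationConfig.from_pretrained(".")
print(cfg.max_length)               # 1876
print(cfg.decoder_start_token_id)   # 2
```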
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:44ce313005f571031beddcea44b9f38c8c5123edbe88bfdc010c728dd2488790
+ oid sha256:39b17d052653d5b0d4a83e25ceff21f672431c29f622bbb44e3ac34dc8cd428a
  size 585011517
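The weight file itself lives in Git LFS; only the pointer's oid changed in this commit (the size is identical). To check which revision of pytorch_model.bin a local checkout actually contains, one can hash it and compare against the new oid, as in this sketch:

```python
import hashlib

# Stream the file so the ~585 MB checkpoint is never fully in memory.
h = hashlib.sha256()
with open("pytorch_model.bin", "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

expected = "39b17d052653d5b0d4a83e25ceff21f672431c29f622bbb44e3ac34dc8cd428a"
print("up to date" if h.hexdigest() == expected else "stale or modified")
```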