anantoj committed on
Commit
07e4690
1 Parent(s): 6d49f66

Update README.md

Files changed (1)
  1. README.md +50 -43
README.md CHANGED
@@ -1,70 +1,77 @@
  ---
  license: apache-2.0
  tags:
- - generated_from_trainer
  metrics:
- - accuracy
- - f1
  model-index:
- - name: distil-wav2vec2-adult-child-id-cls-v3
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # distil-wav2vec2-adult-child-id-cls-v3

- This model is a fine-tuned version of [anantoj/wav2vec2-adult-child-id-cls-v2](https://huggingface.co/anantoj/wav2vec2-adult-child-id-cls-v2) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.1560
- - Accuracy: 0.9489
- - F1: 0.9480

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data
-
- More information needed

  ## Training procedure

  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 3e-05
- - train_batch_size: 32
- - eval_batch_size: 32
- - seed: 42
- - gradient_accumulation_steps: 4
- - total_train_batch_size: 128
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 7

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
- |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
- | 0.2494        | 1.0   | 76   | 0.1706          | 0.9454   | 0.9421 |
- | 0.2015        | 2.0   | 152  | 0.1519          | 0.9483   | 0.9464 |
- | 0.1674        | 3.0   | 228  | 0.1560          | 0.9489   | 0.9480 |
- | 0.1596        | 4.0   | 304  | 0.1760          | 0.9449   | 0.9414 |
- | 0.0873        | 5.0   | 380  | 0.1825          | 0.9478   | 0.9452 |
- | 0.0996        | 6.0   | 456  | 0.1733          | 0.9478   | 0.9460 |
- | 0.1055        | 7.0   | 532  | 0.1749          | 0.9454   | 0.9433 |

  ### Framework versions

- - Transformers 4.19.0.dev0
- - Pytorch 1.11.0+cu102
- - Datasets 2.2.1
- - Tokenizers 0.12.1
  ---
+ language: id
  license: apache-2.0
  tags:
+ - audio-classification
+ - generated_from_trainer
  metrics:
+ - accuracy
+ - f1
  model-index:
+ - name: distil-wav2vec2-adult-child-id-cls-52m
+   results: []
  ---

+ # DistilWav2Vec2 Adult/Child Indonesian Speech Classifier 52M

+ DistilWav2Vec2 Adult/Child Indonesian Speech Classifier is an audio classification model based on the [wav2vec 2.0](https://arxiv.org/abs/2006.11477) architecture. It is a distilled version of [wav2vec2-adult-child-id-cls](https://huggingface.co/bookbot/wav2vec2-adult-child-id-cls), trained on a private adult/child Indonesian speech classification dataset.

+ This model was trained using HuggingFace's PyTorch framework. All training was done on a Tesla P100 GPU provided by Kaggle, and training metrics were logged via Tensorboard.
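The card names the task but does not include a usage snippet; the sketch below shows one way to run inference with the `transformers` audio-classification pipeline. The repo id, audio path, and label names are assumptions, not taken from the card.

```python
# Minimal inference sketch. The repo id is assumed from the model-index name;
# adjust it to wherever this checkpoint is actually hosted.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="bookbot/distil-wav2vec2-adult-child-id-cls-52m",  # assumed repo id
)

# Passing a file path requires ffmpeg for decoding; a 16 kHz mono recording
# matches what wav2vec 2.0 models expect.
predictions = classifier("speech_sample.wav")  # illustrative path
print(predictions)  # e.g. [{"label": "child", "score": ...}, {"label": "adult", "score": ...}]
```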
 
 
 
 
+ ## Model

+ | Model                                    | #params | Arch.       | Training/Validation data (audio)                     |
+ | ---------------------------------------- | ------- | ----------- | ---------------------------------------------------- |
+ | `distil-wav2vec2-adult-child-id-cls-52m` | 52M     | wav2vec 2.0 | Adult/Child Indonesian Speech Classification Dataset |
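The 52M figure in the table can be sanity-checked directly from the checkpoint; a quick sketch, again assuming the repo id used above.

```python
# Count parameters to verify the ~52M figure (repo id assumed, as above).
from transformers import AutoModelForAudioClassification

model = AutoModelForAudioClassification.from_pretrained(
    "bookbot/distil-wav2vec2-adult-child-id-cls-52m"
)
total_params = sum(p.numel() for p in model.parameters())
print(f"{total_params / 1e6:.1f}M parameters")  # expected to print roughly 52M
```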

+ ## Evaluation Results

+ The model achieves the following results on evaluation:

+ | Dataset                                       | Loss   | Accuracy | F1     |
+ | --------------------------------------------- | ------ | -------- | ------ |
+ | Adult/Child Indonesian Speech Classification  | 0.1560 | 94.89%   | 94.80% |

  ## Training procedure

  ### Training hyperparameters

  The following hyperparameters were used during training (a mapping onto `TrainingArguments` is sketched after the list):
+
+ - `learning_rate`: 3e-05
+ - `train_batch_size`: 32
+ - `eval_batch_size`: 32
+ - `seed`: 42
+ - `gradient_accumulation_steps`: 4
+ - `total_train_batch_size`: 128
+ - `optimizer`: Adam with `betas=(0.9,0.999)` and `epsilon=1e-08`
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_warmup_ratio`: 0.1
+ - `num_epochs`: 7
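The list above maps fairly directly onto `transformers.TrainingArguments`. The sketch below is an assumption about how the `Trainer` could be configured to match it, not code from the original training script; Adam's `betas` and `epsilon` equal the defaults, so they are not set explicitly.

```python
# Hedged mapping of the listed hyperparameters onto TrainingArguments.
# output_dir and evaluation_strategy are assumptions; the card does not state them.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distil-wav2vec2-adult-child-id-cls-52m",
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=4,  # 32 * 4 = 128 total train batch size on one GPU
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=7,
    evaluation_strategy="epoch",    # the results table reports one evaluation per epoch
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults.
)
```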

  ### Training results

+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
+ | :-----------: | :---: | :--: | :-------------: | :------: | :----: |
+ | 0.2494        | 1.0   | 76   | 0.1706          | 0.9454   | 0.9421 |
+ | 0.2015        | 2.0   | 152  | 0.1519          | 0.9483   | 0.9464 |
+ | 0.1674        | 3.0   | 228  | 0.1560          | 0.9489   | 0.9480 |
+ | 0.1596        | 4.0   | 304  | 0.1760          | 0.9449   | 0.9414 |
+ | 0.0873        | 5.0   | 380  | 0.1825          | 0.9478   | 0.9452 |
+ | 0.0996        | 6.0   | 456  | 0.1733          | 0.9478   | 0.9460 |
+ | 0.1055        | 7.0   | 532  | 0.1749          | 0.9454   | 0.9433 |
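The card does not show how the Accuracy and F1 columns were computed. Below is a minimal `compute_metrics` sketch in the style used with the HF `Trainer`; scikit-learn and the weighted F1 average are assumptions.

```python
# Hedged sketch of a Trainer-style compute_metrics for the Accuracy/F1 columns above.
# The "weighted" F1 average is an assumption; the card does not say which average was used.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, predictions),
        "f1": f1_score(labels, predictions, average="weighted"),
    }
```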
+
+ ## Disclaimer
+
+ Consider the biases of the pre-training datasets, which may carry over into this model's predictions.
+
+ ## Authors

+ DistilWav2Vec2 Adult/Child Indonesian Speech Classifier was trained and evaluated by [Ananto Joyoadikusumo](https://anantoj.github.io/). All computation and development were done on Kaggle.

  ### Framework versions

+ - Transformers 4.16.2
+ - Pytorch 1.10.2+cu102
+ - Datasets 1.18.3
+ - Tokenizers 0.10.3