abhishtagatya committed
Commit 51e5305
1 Parent(s): 0f16e00

Update README.md

Files changed (1):
  1. README.md +19 -6
README.md CHANGED
@@ -11,6 +11,8 @@ metrics:
  model-index:
  - name: hubert-base-960h-itw-deepfake
    results: []
+ language:
+ - en
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -22,13 +24,24 @@ This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggi
  It achieves the following results on the evaluation set:
  - Loss: 0.0756
  - Accuracy: 0.9873
- - Far: 0.0083
- - Frr: 0.0203
- - Eer: 0.0143
+ - FAR: 0.0083
+ - FRR: 0.0203
+ - EER: 0.0143

  ## Model description

- More information needed
+ ### Quick Use
+
+ ```py3
+ device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+
+ config = AutoConfig.from_pretrained("abhishtagatya/hubert-base-960h-itw-deepfake")
+ feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("abhishtagatya/hubert-base-960h-itw-deepfake")
+
+ model = HubertForSequenceClassification.from_pretrained("abhishtagatya/hubert-base-960h-itw-deepfake", config=config,).to(device)
+
+ # Your Logic Here
+ ```

  ## Intended uses & limitations

@@ -55,7 +68,7 @@ The following hyperparameters were used during training:

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Accuracy | Far | Frr | Eer |
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy | FAR | FRR | EER |
  |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:------:|:------:|
  | 0.4081 | 0.39 | 2500 | 0.1152 | 0.9722 | 0.0285 | 0.0267 | 0.0276 |
  | 0.1168 | 0.79 | 5000 | 0.0822 | 0.9844 | 0.0120 | 0.0216 | 0.0168 |
@@ -69,4 +82,4 @@ The following hyperparameters were used during training:
  - Transformers 4.38.0.dev0
  - Pytorch 2.1.2+cu121
  - Datasets 2.16.2.dev0
- - Tokenizers 0.15.1
+ - Tokenizers 0.15.1
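
The Quick Use snippet added in this commit stops at loading the model and leaves the inference step as a `# Your Logic Here` placeholder, and it omits its imports. Below is a minimal end-to-end sketch of how one might run inference with it. The model id and the `AutoConfig` / `Wav2Vec2FeatureExtractor` / `HubertForSequenceClassification` classes come from the snippet itself; the use of librosa, the `sample.wav` path, and reading the predicted class name from `config.id2label` are illustrative assumptions, not something this commit specifies.

```python
# Minimal inference sketch for the Quick Use snippet above.
# Assumptions (not specified by the commit): audio is loaded with librosa at the
# feature extractor's sampling rate, "sample.wav" is a hypothetical input file,
# and class names are read from config.id2label.
import torch
import librosa
from transformers import (
    AutoConfig,
    Wav2Vec2FeatureExtractor,
    HubertForSequenceClassification,
)

model_id = "abhishtagatya/hubert-base-960h-itw-deepfake"
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

config = AutoConfig.from_pretrained(model_id)
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_id)
model = HubertForSequenceClassification.from_pretrained(model_id, config=config).to(device)
model.eval()

# Load and resample the audio to the rate the feature extractor expects.
audio, _ = librosa.load("sample.wav", sr=feature_extractor.sampling_rate)

inputs = feature_extractor(audio, sampling_rate=feature_extractor.sampling_rate, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.softmax(logits, dim=-1).squeeze(0)
pred_id = int(probs.argmax())
print(f"{config.id2label[pred_id]} (p={probs[pred_id]:.3f})")
```

Which index corresponds to bona-fide versus spoofed audio depends on how `label2id` was set during fine-tuning, so check `config.id2label` before relying on the printed label.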
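
The commit also renames the reported metrics from Far/Frr/Eer to FAR, FRR, and EER (false acceptance rate, false rejection rate, equal error rate). As a quick reference, here is an illustrative sketch of how these quantities are conventionally computed for a binary bona-fide/spoof classifier; it is not the repository's evaluation code, and the conventions assumed (label 1 = bona fide, higher score = more likely bona fide) may differ from those used in training.

```python
# Illustrative FAR/FRR/EER computation for binary scores (not the repo's evaluation code).
# Assumed convention: label 1 = bona fide (should be accepted),
# label 0 = spoof (should be rejected); higher score = more likely bona fide.
import numpy as np

def far_frr_eer(scores: np.ndarray, labels: np.ndarray):
    thresholds = np.unique(scores)  # candidate decision thresholds, sorted ascending
    far_list, frr_list = [], []
    for t in thresholds:
        accepted = scores >= t
        far_list.append(np.mean(accepted[labels == 0]))   # spoofs wrongly accepted
        frr_list.append(np.mean(~accepted[labels == 1]))  # bona fide wrongly rejected
    far_arr, frr_arr = np.array(far_list), np.array(frr_list)
    # EER: the operating point where FAR and FRR are (approximately) equal.
    idx = np.argmin(np.abs(far_arr - frr_arr))
    eer = (far_arr[idx] + frr_arr[idx]) / 2.0
    return far_arr[idx], frr_arr[idx], eer

# Toy usage with synthetic scores:
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=1000)
scores = labels * 0.6 + rng.normal(0.0, 0.3, size=1000)
print(far_frr_eer(scores, labels))
```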