itsLeen committed
Commit e5058f5
Parent(s): 2a34278

itsLeen/swin-large-ai-or-nott
Files changed (5)
  1. README.md +15 -15
  2. all_results.json +6 -6
  3. model.safetensors +1 -1
  4. train_results.json +6 -6
  5. training_args.bin +1 -1
README.md CHANGED
@@ -18,8 +18,8 @@ should probably proofread and complete it, then remove this comment. -->
  
  This model is a fine-tuned version of [itsLeen/swin-large-ai-or-not](https://huggingface.co/itsLeen/swin-large-ai-or-not) on an unknown dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.2853
- - Accuracy: 0.9558
+ - Loss: 0.2806
+ - Accuracy: 0.9690
  
  ## Model description
  
@@ -38,29 +38,29 @@ More information needed
  ### Training hyperparameters
  
  The following hyperparameters were used during training:
- - learning_rate: 3e-05
- - train_batch_size: 16
+ - learning_rate: 1e-07
+ - train_batch_size: 8
  - eval_batch_size: 8
  - seed: 42
  - gradient_accumulation_steps: 4
- - total_train_batch_size: 64
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
+ - total_train_batch_size: 32
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: cosine
  - lr_scheduler_warmup_steps: 20
- - num_epochs: 10
+ - num_epochs: 40
  - mixed_precision_training: Native AMP
  - label_smoothing_factor: 0.1
  
  ### Training results
  
- | Training Loss | Epoch  | Step | Validation Loss | Accuracy |
- |:-------------:|:------:|:----:|:---------------:|:--------:|
- | 0.3736        | 1.4035 | 20   | 0.3182          | 0.9425   |
- | 0.2452        | 2.8070 | 40   | 0.2907          | 0.9558   |
- | 0.2165        | 4.2105 | 60   | 0.2798          | 0.9602   |
- | 0.2092        | 5.6140 | 80   | 0.2825          | 0.9602   |
- | 0.2045        | 7.0175 | 100  | 0.2837          | 0.9469   |
- | 0.2035        | 8.4211 | 120  | 0.2853          | 0.9558   |
+ | Training Loss | Epoch   | Step | Validation Loss | Accuracy |
+ |:-------------:|:-------:|:----:|:---------------:|:--------:|
+ | 0.2653        | 1.7699  | 50   | 0.2818          | 0.9558   |
+ | 0.2522        | 3.5398  | 100  | 0.2812          | 0.9646   |
+ | 0.2616        | 5.3097  | 150  | 0.2810          | 0.9690   |
+ | 0.2541        | 7.0796  | 200  | 0.2808          | 0.9690   |
+ | 0.2536        | 8.8496  | 250  | 0.2807          | 0.9690   |
+ | 0.2534        | 10.6195 | 300  | 0.2806          | 0.9690   |
  
  
  ### Framework versions
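For reference, the updated hyperparameters in this commit can be expressed as a plain config dict (a sketch only: the training script and dataset are not part of this diff, and the key names follow Hugging Face `TrainingArguments` conventions as an assumption):

```python
# Sketch of the new hyperparameters from the model card (this commit's "+" side).
# Key names are assumed TrainingArguments-style; the actual script is not shown here.
training_config = {
    "learning_rate": 1e-07,
    "per_device_train_batch_size": 8,
    "per_device_eval_batch_size": 8,
    "seed": 42,
    "gradient_accumulation_steps": 4,
    "lr_scheduler_type": "cosine",
    "warmup_steps": 20,
    "num_train_epochs": 40,
    "fp16": True,                     # "Native AMP" mixed precision
    "label_smoothing_factor": 0.1,
    "adam_beta1": 0.9,
    "adam_beta2": 0.999,
    "adam_epsilon": 1e-08,
}

# total_train_batch_size in the card is the per-device batch size times
# gradient_accumulation_steps (assuming a single device): 8 * 4 = 32.
effective_batch = (training_config["per_device_train_batch_size"]
                   * training_config["gradient_accumulation_steps"])
print(effective_batch)  # 32
```

This makes the `total_train_batch_size: 32` line in the card directly reproducible from the other two values.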
all_results.json CHANGED
@@ -1,8 +1,8 @@
  {
- "epoch": 8.421052631578947,
- "total_flos": 5.966797516630917e+17,
- "train_loss": 0.24207958777745564,
- "train_runtime": 1194.2706,
- "train_samples_per_second": 7.569,
- "train_steps_per_second": 0.117
+ "epoch": 10.619469026548673,
+ "total_flos": 7.521173340291072e+17,
+ "train_loss": 0.25671261151631675,
+ "train_runtime": 1380.6774,
+ "train_samples_per_second": 26.19,
+ "train_steps_per_second": 0.811
  }
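A side note on the new `epoch` value: the fraction implicitly encodes the training-set size, since epoch = global_step × total_train_batch_size / num_train_samples. This is an inference (the card lists the dataset as unknown), sketched as:

```python
# Back out the training-set size from the fractional epoch in all_results.json.
# This is an inference, not a fact stated anywhere in the commit.
global_step = 300                    # last step in the training-results table
total_train_batch_size = 32          # from the updated hyperparameters
epoch = 10.619469026548673           # from all_results.json

num_train_samples = round(global_step * total_train_batch_size / epoch)
print(num_train_samples)  # 904
```

The numbers are self-consistent: 300 × 32 / 904 reproduces the recorded epoch value exactly.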
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1cfe1d515600afb82df0073d71b40137847b5792c2c11d3e489d78bd0d1ca3f8
+ oid sha256:b7661b52584d40cb832b515e771902924bf4e8401e5264f85cdef203294fe5f6
  size 347498816
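The `model.safetensors` and `training_args.bin` entries are Git LFS pointer files: only the `oid sha256:` line changes, while the pointer format and declared size stay intact. A downloaded artifact can be checked against its pointer's oid with a short sketch (the file path is illustrative):

```python
import hashlib

def lfs_oid(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the sha256 digest that Git LFS records as the pointer's oid."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in 1 MiB chunks so large model files are not loaded whole.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (illustrative): compare against the oid line in the pointer file, e.g.
# lfs_oid("model.safetensors") == "b7661b52584d40cb832b515e771902924bf4e8401e5264f85cdef203294fe5f6"
```

This is the same digest `git lfs` computes, so a mismatch means the download is corrupt or stale.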
train_results.json CHANGED
@@ -1,8 +1,8 @@
  {
- "epoch": 8.421052631578947,
- "total_flos": 5.966797516630917e+17,
- "train_loss": 0.24207958777745564,
- "train_runtime": 1194.2706,
- "train_samples_per_second": 7.569,
- "train_steps_per_second": 0.117
+ "epoch": 10.619469026548673,
+ "total_flos": 7.521173340291072e+17,
+ "train_loss": 0.25671261151631675,
+ "train_runtime": 1380.6774,
+ "train_samples_per_second": 26.19,
+ "train_steps_per_second": 0.811
  }
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9d2d5c2bcc95c18f3dbbf93b7ce07c297b9e2322d645a3026790b93106eb820d
+ oid sha256:be45d6798b56c91e3f7551041114e56a03bd7b0b61d38e28393ec7f4e76fb908
  size 5112