lombardata committed on
Commit 26f1340
1 Parent(s): d71dcd6

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +60 -6
README.md CHANGED
@@ -2,13 +2,13 @@
  language:
  - eng
  license: apache-2.0
- base_model: facebook/dinov2-large
  tags:
  - multilabel-image-classification
  - multilabel
  - generated_from_trainer
  metrics:
  - accuracy
  model-index:
  - name: DinoVdeau-large-2024_04_03-with_data_aug_batch-size32_epochs150_freeze
    results: []
@@ -19,7 +19,7 @@ should probably proofread and complete it, then remove this comment. -->

  # DinoVdeau-large-2024_04_03-with_data_aug_batch-size32_epochs150_freeze

- This model is a fine-tuned version of [facebook/dinov2-large](https://huggingface.co/facebook/dinov2-large) on the multilabel_complete_dataset dataset.
  It achieves the following results on the evaluation set:
  - Loss: 0.1181
  - F1 Micro: 0.8219
@@ -30,18 +30,71 @@ It achieves the following results on the evaluation set:

  ## Model description

- More information needed

  ## Intended uses & limitations

- More information needed

  ## Training and evaluation data

- More information needed

  ## Training procedure

  ### Training hyperparameters

  The following hyperparameters were used during training:
@@ -50,7 +103,8 @@ The following hyperparameters were used during training:
  - eval_batch_size: 32
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
  - num_epochs: 150

  ### Training results
 
  language:
  - eng
  license: apache-2.0
  tags:
  - multilabel-image-classification
  - multilabel
  - generated_from_trainer
  metrics:
  - accuracy
+ base_model: facebook/dinov2-large
  model-index:
  - name: DinoVdeau-large-2024_04_03-with_data_aug_batch-size32_epochs150_freeze
    results: []
 

  # DinoVdeau-large-2024_04_03-with_data_aug_batch-size32_epochs150_freeze

+ DinoVd'eau is a fine-tuned version of [facebook/dinov2-large](https://huggingface.co/facebook/dinov2-large) on the multilabel_complete_dataset dataset.
  It achieves the following results on the evaluation set:
  - Loss: 0.1181
  - F1 Micro: 0.8219
 

  ## Model description

+ DinoVd'eau is a model built on top of the DINOv2 model for underwater multilabel image classification. The classification head is a combination of linear, ReLU, batch normalization, and dropout layers.
+ - **Developed by:** [lombardata](https://huggingface.co/lombardata), credits to [César Leblanc](https://huggingface.co/CesarLeblanc)
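
The card does not spell out the exact sizes of these layers, so the following is only a minimal PyTorch sketch of such a head; the hidden width, dropout rate and layer order are illustrative assumptions, and only the label count (the 31 classes listed further below) comes from this card.

```python
import torch.nn as nn

# Hypothetical classification head combining linear, ReLU, batch-normalization
# and dropout layers on top of the pooled dinov2-large features (1024-d).
# The hidden width (512) and dropout rate (0.2) are placeholders, not values
# taken from the repository.
num_labels = 31  # classes listed in the table below
head = nn.Sequential(
    nn.Linear(1024, 512),
    nn.ReLU(),
    nn.BatchNorm1d(512),
    nn.Dropout(p=0.2),
    nn.Linear(512, num_labels),  # one logit per label; apply a sigmoid for multilabel outputs
)
```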
 
  ## Intended uses & limitations

+ You can use the raw model to classify diverse marine species, encompassing coral morphotype classes taken from the Global Coral Reef Monitoring Network (GCRMN), habitat classes, and seagrass species.

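As a rough illustration of multilabel inference with this kind of checkpoint, the snippet below loads the model through the transformers Auto classes and thresholds per-class sigmoid scores; the repository id, the use of the Auto classes and the 0.5 threshold are assumptions, and the repository's own loading code should be preferred if it ships a custom head.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumed repository id built from the model name above.
model_id = "lombardata/DinoVdeau-large-2024_04_03-with_data_aug_batch-size32_epochs150_freeze"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)
model.eval()

image = Image.open("reef_photo.jpg")  # placeholder path to any underwater image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Multilabel prediction: independent sigmoid per class, thresholded at 0.5 (assumed).
probs = torch.sigmoid(logits)[0]
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)
```
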
  ## Training and evaluation data

+ Details on the number of images for each class are given in the following table:
+ | | train | val | test | Total |
+ | --- | --- | --- | --- | --- |
+ | Acropore_branched | 1504 | 445 | 430 | 2379 |
+ | Acropore_digitised | 593 | 151 | 144 | 888 |
+ | Acropore_sub_massive | 148 | 54 | 41 | 243 |
+ | Acropore_tabular | 1012 | 290 | 287 | 1589 |
+ | Algae_assembly | 2545 | 858 | 835 | 4238 |
+ | Algae_drawn_up | 376 | 123 | 121 | 620 |
+ | Algae_limestone | 1652 | 561 | 559 | 2772 |
+ | Algae_sodding | 3094 | 1011 | 1012 | 5117 |
+ | Atra/Leucospilota | 1081 | 352 | 359 | 1792 |
+ | Bleached_coral | 220 | 70 | 70 | 360 |
+ | Blurred | 192 | 62 | 66 | 320 |
+ | Dead_coral | 2001 | 637 | 626 | 3264 |
+ | Fish | 2068 | 611 | 642 | 3321 |
+ | Homo_sapiens | 162 | 60 | 60 | 282 |
+ | Human_object | 157 | 60 | 53 | 270 |
+ | Living_coral | 147 | 56 | 47 | 250 |
+ | Millepore | 378 | 131 | 128 | 637 |
+ | No_acropore_encrusting | 422 | 152 | 151 | 725 |
+ | No_acropore_foliaceous | 200 | 46 | 40 | 286 |
+ | No_acropore_massive | 1033 | 337 | 335 | 1705 |
+ | No_acropore_solitary | 193 | 56 | 54 | 303 |
+ | No_acropore_sub_massive | 1412 | 418 | 426 | 2256 |
+ | Rock | 4487 | 1481 | 1489 | 7457 |
+ | Sand | 5806 | 1959 | 1954 | 9719 |
+ | Scrap | 3063 | 1030 | 1030 | 5123 |
+ | Sea_cucumber | 1396 | 453 | 445 | 2294 |
+ | Sea_urchins | 319 | 122 | 104 | 545 |
+ | Sponge | 273 | 107 | 90 | 470 |
+ | Syringodium_isoetifolium | 1198 | 399 | 398 | 1995 |
+ | Thalassodendron_ciliatum | 781 | 260 | 262 | 1303 |
+ | Useless | 579 | 193 | 193 | 965 |
+

  ## Training procedure

+ ### Data Augmentation
+
+ Data were augmented using the following transformations (a code sketch follows the listing):
+ - training transformations: Sequential(
+ (0): PreProcess()
+ (1): Resize(output_size=(518, 518), p=1.0, p_batch=1.0, same_on_batch=True, size=(518, 518), side=short, resample=bilinear, align_corners=True, antialias=False)
+ (2): RandomHorizontalFlip(p=0.25, p_batch=1.0, same_on_batch=False)
+ (3): RandomVerticalFlip(p=0.25, p_batch=1.0, same_on_batch=False)
+ (4): ColorJiggle(brightness=0.0, contrast=0.0, saturation=0.0, hue=0.0, p=0.25, p_batch=1.0, same_on_batch=False)
+ (5): RandomPerspective(distortion_scale=0.5, p=0.25, p_batch=1.0, same_on_batch=False, align_corners=False, resample=bilinear)
+ (6): Normalize(p=1.0, p_batch=1.0, same_on_batch=True, mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250]))
+ )
+ - validation transformations: Sequential(
+ (0): PreProcess()
+ (1): Resize(output_size=(518, 518), p=1.0, p_batch=1.0, same_on_batch=True, size=(518, 518), side=short, resample=bilinear, align_corners=True, antialias=False)
+ (2): Normalize(p=1.0, p_batch=1.0, same_on_batch=True, mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250]))
+ )
+
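The module names in the printout match kornia.augmentation, so the pipeline could be rebuilt roughly as below; this is a sketch under that assumption, the project-specific PreProcess() step is omitted, and the remaining parameter values are copied from the listing above.

```python
import torch
import torch.nn as nn
import kornia.augmentation as K

imagenet_mean = torch.tensor([0.4850, 0.4560, 0.4060])
imagenet_std = torch.tensor([0.2290, 0.2240, 0.2250])

# Training-time augmentations (PreProcess(), which turns raw images into
# scaled float tensors, is not reproduced here).
train_transforms = nn.Sequential(
    K.Resize((518, 518), side="short", align_corners=True, antialias=False),
    K.RandomHorizontalFlip(p=0.25),
    K.RandomVerticalFlip(p=0.25),
    K.ColorJiggle(brightness=0.0, contrast=0.0, saturation=0.0, hue=0.0, p=0.25),
    K.RandomPerspective(distortion_scale=0.5, p=0.25),
    K.Normalize(mean=imagenet_mean, std=imagenet_std),
)

# Validation keeps only resizing and normalization.
val_transforms = nn.Sequential(
    K.Resize((518, 518), side="short", align_corners=True, antialias=False),
    K.Normalize(mean=imagenet_mean, std=imagenet_std),
)

batch = torch.rand(4, 3, 640, 640)  # dummy batch of RGB images in [0, 1]
augmented = train_transforms(batch)
```
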
  ### Training hyperparameters

  The following hyperparameters were used during training:
 
  - eval_batch_size: 32
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: ReduceLROnPlateau with a patience of 5 epochs and a factor of 0.1 (see the sketch after this list)
+ - freeze_encoder: True
  - num_epochs: 150

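A minimal sketch of how the frozen encoder, Adam optimizer and ReduceLROnPlateau scheduler listed above could be wired together in PyTorch; the learning-rate value, the head layout and the monitored quantity (validation loss) are placeholders rather than values taken from the training code.

```python
import torch
import torch.nn as nn
from transformers import Dinov2Model

backbone = Dinov2Model.from_pretrained("facebook/dinov2-large")
for param in backbone.parameters():  # freeze_encoder: True
    param.requires_grad = False

head = nn.Linear(backbone.config.hidden_size, 31)  # 31 classes from the table above

optimizer = torch.optim.Adam(
    head.parameters(),
    lr=1e-3,  # placeholder learning rate
    betas=(0.9, 0.999), eps=1e-08,
)

# ReduceLROnPlateau with a patience of 5 epochs and a factor of 0.1,
# assumed to monitor the validation loss and stepped once per epoch:
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=5,
)
# for each of the 150 epochs: train, compute val_loss, then scheduler.step(val_loss)
```
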
  ### Training results