# grounding-dino-tiny-finetuned-cppe-5-10k-steps
This model is a fine-tuned version of IDEA-Research/grounding-dino-tiny on the cppe-5 dataset. It achieves the following COCO-style detection results on the evaluation set (a sketch of how such metrics can be computed follows the list):
- Loss: 5.5317
- Map: 0.0151
- Map 50: 0.0275
- Map 75: 0.0157
- Map Small: 0.0125
- Map Medium: 0.0149
- Map Large: 0.0236
- Mar 1: 0.0202
- Mar 10: 0.0902
- Mar 100: 0.1127
- Mar Small: 0.0815
- Mar Medium: 0.0975
- Mar Large: 0.1461
- Map Coverall: 0.0755
- Mar 100 Coverall: 0.5636
- Map Face Shield: 0.0
- Mar 100 Face Shield: 0.0
- Map Gloves: 0.0
- Mar 100 Gloves: 0.0
- Map Goggles: 0.0
- Mar 100 Goggles: 0.0
- Map Mask: 0.0
- Mar 100 Mask: 0.0
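In these names, "Map"/"Mar" refer to COCO-style mean average precision and mean average recall: "Map 50" is mAP at an IoU threshold of 0.50, "Mar 100" is mAR with up to 100 detections per image, and the Small/Medium/Large variants bucket objects by area. As an illustration only (not the exact evaluation script behind this card), such metrics can be computed with `torchmetrics`; the boxes, scores, and labels below are made-up placeholders:

```python
import torch
from torchmetrics.detection import MeanAveragePrecision  # requires pycocotools

# Placeholder predictions/targets in xyxy pixel coordinates; in a real run these
# would come from the model's post-processed outputs and the CPPE-5 annotations.
preds = [{
    "boxes": torch.tensor([[10.0, 20.0, 110.0, 220.0]]),
    "scores": torch.tensor([0.8]),
    "labels": torch.tensor([0]),  # hypothetical class id, e.g. 0 = coverall
}]
targets = [{
    "boxes": torch.tensor([[12.0, 25.0, 105.0, 215.0]]),
    "labels": torch.tensor([0]),
}]

metric = MeanAveragePrecision(box_format="xyxy", class_metrics=True)
metric.update(preds, targets)
scores = metric.compute()
print(scores["map"], scores["map_50"], scores["mar_100"])  # overall mAP, mAP@0.50, mAR@100
```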
## Model description
More information needed
## Intended uses & limitations
More information needed
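A minimal inference sketch is shown below. It assumes the zero-shot object detection API of the Transformers version listed under "Framework versions"; the image path and thresholds are placeholders.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

model_id = "danelcsb/grounding-dino-tiny-finetuned-cppe-5-10k-steps"
# Assumes the checkpoint ships a processor config; otherwise the base
# IDEA-Research/grounding-dino-tiny processor can be used instead.
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id)

image = Image.open("path/to/image.jpg")  # placeholder image
# Grounding DINO expects lowercase text queries, each ending with a period.
text = "coverall. face shield. gloves. goggles. mask."

inputs = processor(images=image, text=text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

results = processor.post_process_grounded_object_detection(
    outputs,
    inputs.input_ids,
    box_threshold=0.3,   # placeholder confidence threshold
    text_threshold=0.3,  # placeholder text-match threshold
    target_sizes=[image.size[::-1]],
)
print(results[0]["boxes"], results[0]["scores"], results[0]["labels"])
```

Given the low mAP reported above, outputs should be treated as a demonstration of the fine-tuning pipeline rather than a reliable PPE detector.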
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
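A minimal sketch of how these hyperparameters map onto Transformers `TrainingArguments`; the actual training script, dataset preprocessing, and collator are not part of this card, and `output_dir` and the per-epoch evaluation setting are assumptions inferred from the results table below.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction from the hyperparameters listed above; the Adam
# betas and epsilon match the Trainer defaults, so they are not set explicitly.
training_args = TrainingArguments(
    output_dir="grounding-dino-tiny-finetuned-cppe-5-10k-steps",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=1337,
    lr_scheduler_type="linear",
    num_train_epochs=10.0,
    eval_strategy="epoch",  # assumed: the results table reports one eval per epoch
)
# training_args would then be passed to a Trainer together with the preprocessed
# CPPE-5 splits and a Grounding DINO-compatible data collator.
```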
### Training results
Training Loss | Epoch | Step | Validation Loss | Map | Map 50 | Map 75 | Map Small | Map Medium | Map Large | Mar 1 | Mar 10 | Mar 100 | Mar Small | Mar Medium | Mar Large | Map Coverall | Mar 100 Coverall | Map Face Shield | Mar 100 Face Shield | Map Gloves | Mar 100 Gloves | Map Goggles | Mar 100 Goggles | Map Mask | Mar 100 Mask |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
8355.9482 | 1.0 | 850 | 6.6137 | 0.014 | 0.0272 | 0.0134 | 0.0048 | 0.0111 | 0.0243 | 0.0149 | 0.0893 | 0.1073 | 0.0523 | 0.0889 | 0.1328 | 0.0702 | 0.5366 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
6.7523 | 2.0 | 1700 | 6.2357 | 0.0162 | 0.0302 | 0.0148 | 0.0106 | 0.0189 | 0.0192 | 0.0247 | 0.0894 | 0.107 | 0.0643 | 0.0968 | 0.1258 | 0.0809 | 0.535 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
6.5566 | 3.0 | 2550 | 6.0890 | 0.0158 | 0.0294 | 0.0134 | 0.01 | 0.0199 | 0.0215 | 0.0222 | 0.0876 | 0.1065 | 0.0671 | 0.0846 | 0.1324 | 0.0791 | 0.5323 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
6.2217 | 4.0 | 3400 | 5.9028 | 0.0144 | 0.0271 | 0.0134 | 0.0066 | 0.0096 | 0.0225 | 0.0232 | 0.0857 | 0.107 | 0.06 | 0.0823 | 0.1397 | 0.0721 | 0.5348 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
6.0963 | 5.0 | 4250 | 5.8411 | 0.0126 | 0.0215 | 0.014 | 0.0055 | 0.0138 | 0.0178 | 0.0201 | 0.0811 | 0.1052 | 0.044 | 0.0942 | 0.1377 | 0.0631 | 0.5258 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
5.996 | 6.0 | 5100 | 5.7244 | 0.0162 | 0.0311 | 0.0166 | 0.0059 | 0.0145 | 0.0221 | 0.0223 | 0.0869 | 0.1088 | 0.0667 | 0.0919 | 0.1328 | 0.0812 | 0.5437 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
5.8971 | 7.0 | 5950 | 5.5473 | 0.0154 | 0.027 | 0.016 | 0.01 | 0.014 | 0.0208 | 0.0244 | 0.0946 | 0.1154 | 0.084 | 0.106 | 0.1311 | 0.0769 | 0.5772 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
5.7451 | 8.0 | 6800 | 5.5231 | 0.0146 | 0.0267 | 0.0148 | 0.0021 | 0.0161 | 0.0183 | 0.0256 | 0.0905 | 0.1125 | 0.0325 | 0.1062 | 0.128 | 0.0731 | 0.5624 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
5.7931 | 9.0 | 7650 | 5.5190 | 0.0182 | 0.032 | 0.0195 | 0.0175 | 0.0147 | 0.0249 | 0.0299 | 0.1048 | 0.1138 | 0.08 | 0.0945 | 0.1309 | 0.091 | 0.5688 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
5.7435 | 10.0 | 8500 | 5.5317 | 0.0151 | 0.0275 | 0.0157 | 0.0125 | 0.0149 | 0.0236 | 0.0202 | 0.0902 | 0.1127 | 0.0815 | 0.0975 | 0.1461 | 0.0755 | 0.5636 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
### Framework versions
- Transformers 4.45.0.dev0
- Pytorch 2.2.2
- Datasets 2.20.0
- Tokenizers 0.19.1