andrewbo29 committed on
Commit
76a7594
·
1 Parent(s): ded2782

Update README.md

Files changed (1)
  1. README.md +5 -1
README.md CHANGED
@@ -15,7 +15,7 @@ Vision transformer base-sized (ViT base) feature model. Pre-trained with [Masked
 
 ## Training Procedure
 
-ViT-base MAE F1 was pre-trained on a custom dataset containing more than 1 million Formula 1 images from the 2021, 2022, and 2023 seasons, with both racing and non-racing scenes. Training was performed on a cluster of 8 A100 80GB GPUs provided by [Nebius](https://nebius.com/).
+F1 ViT-base MAE was pre-trained on a custom dataset containing more than 1 million Formula 1 images from the 2021, 2022, and 2023 seasons, with both racing and non-racing scenes. Training was performed on a cluster of 8 A100 80GB GPUs provided by [Nebius](https://nebius.com/).
 
 ### Training Hyperparameters
 
@@ -28,6 +28,10 @@ ViT-base MAE F1 was pre-trained on a custom dataset containing more than 1 mil
 
 ## Comparison with ViT-base MAE pre-trained on ImageNet-1K
 
+Comparison of F1 ViT-base MAE and the original ViT-base MAE pre-trained on ImageNet-1K (from https://huggingface.co/facebook/vit-mae-base), by reconstruction quality on images from the Formula 1 domain. Top: F1 ViT-base MAE reconstruction output; bottom: original ViT-base MAE.
+
+<img src="comparison_1.png" alt="drawing" width="800"/>
+
 ## How to use
 
 Usage is the same as in the [Transformers library implementation of MAE](https://huggingface.co/facebook/vit-mae-base).
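Since usage follows the Transformers MAE implementation, a minimal sketch looks like the following. The checkpoint id below is the ImageNet-1K baseline from the linked card; to use the F1 model, substitute this repository's checkpoint id (not shown in this diff), under the assumption that it exposes the same `ViTMAEForPreTraining` architecture.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, ViTMAEForPreTraining

# Load the image processor and model. Swap in this repository's F1
# checkpoint id here (assumed to share the ViT-base MAE architecture).
processor = AutoImageProcessor.from_pretrained("facebook/vit-mae-base")
model = ViTMAEForPreTraining.from_pretrained("facebook/vit-mae-base")

# Placeholder image; in practice, use a Formula 1 frame.
image = Image.new("RGB", (224, 224))
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Per-patch pixel reconstructions and the random mask applied during encoding.
reconstruction = outputs.logits  # (batch, num_patches, patch_size**2 * 3)
mask = outputs.mask              # (batch, num_patches); 1 = patch was masked
```

The reconstruction tensor is what the comparison figure above visualizes after un-patchifying it back into image space.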