rwightman committed
Commit
3e473ec
1 Parent(s): 82d3215

Update README.md

Files changed (1):
  1. README.md +1 -1
README.md CHANGED
@@ -23,7 +23,7 @@ Goals:
 
 Firsts:
 * First known ConvNeXt CLIP models trained at scale in the range of CLIP ViT-B/16 and RN50x4 models
-* First released model weights exploring increase of augmentation + regularization for image tower via adding (increased resize range of RRC, adding random erasing, adding stochastic depth)
+* First released model weights exploring increased augmentation + regularization for the image tower (greater scale range of RRC, random erasing, stochastic depth)
 
 The models utilize the [timm](https://github.com/rwightman/pytorch-image-models) ConvNeXt-Base model (`convnext_base`) as the image tower, and the same text tower as the RN50x4 (depth 12, embed dim 640) model from OpenAI CLIP. The base models are trained at 256x256 image resolution and roughly match the RN50x4 models on FLOPs and activation counts. The models with `320` in the name are trained at 320x320.
 
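
For reference, a minimal sketch of what the augmentation and regularization additions mentioned in the changed line can look like in code, using timm and torchvision. The specific RRC scale range, random-erasing probability, and stochastic-depth (drop path) rate below are illustrative assumptions; the diff does not state the values actually used for these weights.

```python
# Illustrative sketch only: the drop-path rate, RRC scale range, and erasing
# probability below are assumptions, not the values used to train these weights.
import timm
import torch
from PIL import Image
from torchvision import transforms

# Image tower: timm ConvNeXt-Base; drop_path_rate is timm's stochastic-depth knob.
image_tower = timm.create_model(
    "convnext_base",
    num_classes=0,        # pooled features only; the CLIP projection head is separate
    drop_path_rate=0.1,   # assumed stochastic-depth rate
)
image_tower.eval()

# Training-time augmentation: a wider-than-default RandomResizedCrop scale range
# plus RandomErasing (applied on the tensor, i.e. after ToTensor).
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(256, scale=(0.33, 1.0)),  # assumed scale range
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.25),                      # assumed erasing probability
])

img = Image.new("RGB", (320, 320))      # stand-in for a training image
x = train_transform(img).unsqueeze(0)   # (1, 3, 256, 256)
with torch.no_grad():
    feats = image_tower(x)              # (1, 1024) pooled ConvNeXt-Base features
print(feats.shape)
```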
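
And a short loading sketch for the resulting CLIP models via OpenCLIP. The model config and pretrained tags below are assumed placeholders, not confirmed by this diff; the 640-dim text output simply follows from the RN50x4-style text tower described in the paragraph above.

```python
# Placeholder names: the exact OpenCLIP model config and pretrained tag for these
# released weights are not stated in this diff.
import torch
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "convnext_base_w",               # assumed config: ConvNeXt-Base image tower + RN50x4-style text tower
    pretrained="laion2b_s13b_b82k",  # placeholder pretrained tag
)
tokenizer = open_clip.get_tokenizer("convnext_base_w")

text = tokenizer(["a diagram", "a dog", "a cat"])
with torch.no_grad():
    text_features = model.encode_text(text)  # (3, 640); embed dim matches the RN50x4 text tower
print(text_features.shape)
```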