JJitsev committed
Commit 15efd0f
Parent: e8a3ef1

Update README.md

Files changed (1): README.md (+4 -4)
README.md CHANGED
@@ -24,7 +24,7 @@ pipeline_tag: zero-shot-image-classification
 
 ## Model Description
 
-A CLIP ViT-g/14 model trained with the LAION-2B English subset of LAION-5B (https://laion.ai/blog/laion-5b/) using OpenCLIP (https://github.com/mlfoundations/open_clip).
+A CLIP ViT-g/14 model trained with the LAION-2B English subset of LAION-5B (https://laion.ai/blog/laion-5b/, https://openreview.net/forum?id=M3Y74vmsMcY) using OpenCLIP (https://github.com/mlfoundations/open_clip).
 
 Model training done by Jenia Jitsev on [JUWELS Booster](https://apps.fz-juelich.de/jsc/hps/juwels/booster-overview.html) at [Juelich Supercomputing Center](https://www.fz-juelich.de/en/ias/jsc) and on the [stability.ai](https://stability.ai/) AWS HPC cluster.
 Training performed in frame of reproducible scaling law studies, published as [research paper at CVPR 2023](https://openaccess.thecvf.com/content/CVPR2023/html/Cherti_Reproducible_Scaling_Laws_for_Contrastive_Language-Image_Learning_CVPR_2023_paper.html). See also the [research repository](https://github.com/LAION-AI/scaling-laws-openclip)
@@ -33,7 +33,7 @@ Training performed in frame of reproducible scaling law studies, published as [r
 
 As per the original [OpenAI CLIP model card](https://github.com/openai/CLIP/blob/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1/model-card.md), this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such model.
 
-The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and upcoming paper include additional discussion as it relates specifically to the training dataset.
+The OpenAI CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. Additionally, the LAION-5B blog (https://laion.ai/blog/laion-5b/) and [LAION-5B NeurIPS paper](https://openreview.net/forum?id=M3Y74vmsMcY) include additional discussion as it relates specifically to the training dataset.
 
 ## Direct Use
 
@@ -89,9 +89,9 @@ An initial round of benchmarks have been performed on a wider range of datasets,
 
 # Acknowledgements
 
-We gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding the work by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS Booster at Jülich Supercomputing Centre (JSC)
+We gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding the work by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer [JUWELS Booster](https://apps.fz-juelich.de/jsc/hps/juwels/booster-overview.html) at Jülich Supercomputing Centre (JSC)
 We also acknowledge storage resources on JUST granted and operated by JSC, as well as computing resources from the Helmholtz Data Federation (HDF).
-We als acknowledge [stability.ai](https://stability.ai/) providing additional compute used to train this model.
+We further acknowledge [stability.ai](https://stability.ai/) providing additional compute used to train this model.
 
 # Citation
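
For context on the `zero-shot-image-classification` pipeline tag above, a minimal zero-shot classification sketch with OpenCLIP follows. The model name and pretrained tag (`ViT-g-14`, `laion2b_s12b_b42k`) and the image path are illustrative assumptions, not taken from this commit; verify the exact identifiers against `open_clip.list_pretrained()` and the model card.

```python
import torch
import open_clip
from PIL import Image

# Assumed identifiers for this checkpoint; verify against
# open_clip.list_pretrained() and the model card.
model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-g-14", pretrained="laion2b_s12b_b42k"
)
tokenizer = open_clip.get_tokenizer("ViT-g-14")
model.eval()

# In zero-shot classification the candidate captions act as class labels.
image = preprocess(Image.open("example.jpg")).unsqueeze(0)
text = tokenizer(["a photo of a cat", "a photo of a dog"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize so the dot product is a cosine similarity.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    # Scaled similarities, softmaxed into per-caption probabilities.
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(probs)  # one probability per candidate caption
```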