Authors

  • Hengyu Shi
  • Boynn

Fine-tuned CLIP-ViT-bigG-14 Model

This model is a fine-tuned version of laion/CLIP-ViT-bigG-14-laion2B-39B-b160k.

Usage

from transformers import CLIPTextModelWithProjection

base_model = CLIPTextModelWithProjection.from_pretrained("Boynn/CLIP-ViT-bigG-14-laion2B-39B-b160k-sft")
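
Below is a minimal sketch of encoding text with the fine-tuned encoder. It assumes the tokenizer can be loaded from the base laion/CLIP-ViT-bigG-14-laion2B-39B-b160k repository; the fine-tuned repository may also ship its own.

import torch
from transformers import CLIPTokenizer, CLIPTextModelWithProjection

# Assumption: tokenizer comes from the base checkpoint (it is unchanged by fine-tuning).
tokenizer = CLIPTokenizer.from_pretrained("laion/CLIP-ViT-bigG-14-laion2B-39B-b160k")
model = CLIPTextModelWithProjection.from_pretrained("Boynn/CLIP-ViT-bigG-14-laion2B-39B-b160k-sft")

inputs = tokenizer(["a photo of a cat"], padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Projected text embeddings, shape (batch_size, projection_dim).
text_embeds = outputs.text_embeds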

Model Details

  • Model size: 695M parameters
  • Tensor type: F32
  • Format: Safetensors