sonoisa/clip-vit-b-32-japanese-v1

Tags: Feature Extraction · Transformers · PyTorch · Safetensors · Japanese · bert · clip · sentence-similarity
  • 2 contributors
  • History: 4 commits
  • Latest commit: f153cbe "Add CLIP visual encoder" by sonoisa, about 3 years ago
Files:
  • visual_model/ · Add CLIP visual encoder · about 3 years ago
  • .gitattributes · 1.18 kB · initial commit · about 3 years ago
  • README.md · 245 Bytes · Create README.md · about 3 years ago
  • config.json · 678 Bytes · Add CLIP text encoder for Japanese · about 3 years ago
  • output_linear.bin · 4.72 MB (LFS) · Add CLIP text encoder for Japanese · about 3 years ago
  • pytorch_model.bin · 443 MB (LFS) · Add CLIP text encoder for Japanese · about 3 years ago
  • special_tokens_map.json · 112 Bytes · Add CLIP text encoder for Japanese · about 3 years ago
  • tokenizer_config.json · 493 Bytes · Add CLIP text encoder for Japanese · about 3 years ago
  • training_args.json · 593 Bytes · Add CLIP text encoder for Japanese · about 3 years ago
  • vocab.txt · 258 kB · Add CLIP text encoder for Japanese · about 3 years ago