malusama committed
Commit e33e190 · verified · 1 Parent(s): e53cbe6

Fix model card repo names and safetensors wording

Files changed (2):
  1. README.md +6 -6
  2. configuration_m2_encoder.py +1 -1
README.md CHANGED
@@ -23,9 +23,9 @@ This folder is generated from `Ant-Multi-Modal-Framework/prj/M2_Encoder` and is
 - `AutoModel.from_pretrained(..., trust_remote_code=True)`
 - Zero-shot image-text retrieval and zero-shot image classification
 
-## Required Weight File
+## Included Weight File
 
-Put the model weight file in the repo root with this exact filename:
+This repo includes the model weight file in the repo root with this exact filename:
 
 `m2_encoder_1B.safetensors`
 
@@ -40,7 +40,7 @@ The original ModelScope sample computes probabilities from the raw normalized em
 ```python
 from transformers import AutoModel, AutoProcessor
 
-repo_id = "your-name/your-m2-encoder-repo"
+repo_id = "malusama/M2-Encoder-1B"
 
 model = AutoModel.from_pretrained(repo_id, trust_remote_code=True)
 processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)
@@ -68,7 +68,7 @@ Those logits are useful, but they are not the same computation as the raw dot pr
 Option 1:
 
 ```bash
-python upload_to_hub.py --repo-id your-name/your-m2-encoder-repo
+python upload_to_hub.py --repo-id malusama/M2-Encoder-1B
 ```
 
 Option 2:
@@ -77,7 +77,7 @@ Option 2:
 huggingface-cli login
 git init
 git lfs install
-git remote add origin https://huggingface.co/your-name/your-m2-encoder-repo
+git remote add origin https://huggingface.co/malusama/M2-Encoder-1B
 git add .
 git commit -m "Upload M2-Encoder HF export"
 git push origin main
@@ -115,4 +115,4 @@ Example response fields:
 - This is a Hugging Face remote-code adapter, not a native `transformers` implementation.
 - The underlying model code still comes from the official M2-Encoder repo.
 - You need `trust_remote_code=True`.
-- The weights are not bundled by default when exporting unless you pass `--checkpoint`.
+- The `.safetensors` weight file is already included in this Hub repo.
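The README wording change tracks the repo's earlier move from a `.ckpt` checkpoint to a safetensors file. One nice property of that container is that it can be inspected without loading any tensors: per the safetensors format specification, a file starts with a little-endian u64 giving the byte length of a JSON header, which maps each tensor name to its dtype, shape, and data offsets. A stdlib-only toy sketch of that layout (dummy tensor bytes; real exports should use the `safetensors` library, not this):

```python
import json
import struct

def write_safetensors_like(path, tensors):
    # Toy writer for the safetensors container layout:
    # [u64 LE header length][JSON header][raw tensor bytes].
    # tensors: name -> (dtype_str, shape, raw_bytes)
    header, blobs, offset = {}, [], 0
    for name, (dtype, shape, raw) in tensors.items():
        header[name] = {
            "dtype": dtype,
            "shape": shape,
            "data_offsets": [offset, offset + len(raw)],
        }
        blobs.append(raw)
        offset += len(raw)
    hjson = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(hjson)))  # header length, little-endian u64
        f.write(hjson)
        for raw in blobs:
            f.write(raw)

def read_header(path):
    # Read only the JSON header; no tensor data is touched.
    with open(path, "rb") as f:
        (hlen,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(hlen).decode("utf-8"))
```

This is why a quick `read_header("m2_encoder_1B.safetensors")` is a cheap sanity check that an uploaded weight file is intact before anyone calls `from_pretrained` on it.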
configuration_m2_encoder.py CHANGED
@@ -25,7 +25,7 @@ class M2EncoderConfig(PretrainedConfig):
         precision=32,
         test_only=True,
         flash_attn=False,
-        model_file="m2_encoder_1B.ckpt",
+        model_file="m2_encoder_1B.safetensors",
         architectures=None,
         auto_map=None,
         **kwargs,
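To see what this one-line default change does without installing `transformers`, here is a plain-Python stand-in for the config class. The real `M2EncoderConfig` subclasses `PretrainedConfig`; the parameter names below mirror the diff, but the class itself is a hypothetical sketch, not the repo's implementation:

```python
class M2EncoderConfigSketch:
    # Hypothetical stand-in for M2EncoderConfig (the real class subclasses
    # transformers.PretrainedConfig). Parameter names mirror the diff above.
    def __init__(self, precision=32, test_only=True, flash_attn=False,
                 model_file="m2_encoder_1B.safetensors",  # was "m2_encoder_1B.ckpt"
                 architectures=None, auto_map=None, **kwargs):
        self.precision = precision
        self.test_only = test_only
        self.flash_attn = flash_attn
        self.model_file = model_file
        self.architectures = architectures
        self.auto_map = auto_map
        self.extra_kwargs = kwargs  # PretrainedConfig forwards these to its base
```

Note that a keyword default like this only takes effect when `model_file` is absent from the saved `config.json`; a repo whose config explicitly pins `"model_file": "m2_encoder_1B.ckpt"` would keep the old value regardless of this change.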