thenlper committed on
Commit 7f84fe7 · verified · 1 Parent(s): 9eed5b3

Update README.md

Files changed (1)
  1. README.md +5 -5
README.md CHANGED
@@ -6957,7 +6957,7 @@ The `GME` models support three types of input: **text**, **image**, and **image-
 
 **Developed by**: Tongyi Lab, Alibaba Group
 
- **Paper**: GME: Improving Universal Multimodal Retrieval by Multimodal LLMs
+ **Paper**: [GME: Improving Universal Multimodal Retrieval by Multimodal LLMs](http://arxiv.org/abs/2412.16855)
 
 
 ## Model List
@@ -7036,7 +7036,7 @@ We validated the performance on our universal multimodal retrieval benchmark (**
 
 The [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard) English tab shows the text embeddings performence of our model.
 
- **More detailed experimental results can be found in the [paper](https://arxiv.org/pdf/2407.19669)**.
+ **More detailed experimental results can be found in the [paper](http://arxiv.org/abs/2412.16855)**.
 
 ## Limitations
 
@@ -7059,7 +7059,7 @@ We encourage and value diverse applications of GME models and continuous enhance
 
 In addition to the open-source [GME](https://huggingface.co/collections/Alibaba-NLP/gme-models-67667e092da3491f630964d6) series models, GME series models are also available as commercial API services on Alibaba Cloud.
 
- - [MultiModal Embedding Models](https://help.aliyun.com/zh/model-studio/developer-reference/general-text-embedding/): The `multimodal-embedding-v1` model service is available.
+ - [MultiModal Embedding Models](https://help.aliyun.com/zh/model-studio/developer-reference/multimodal-embedding-api-reference?spm=a2c4g.11186623.0.0.321c1d1cqmoJ5C): The `multimodal-embedding-v1` model service is available.
 
 Note that the models behind the commercial APIs are not entirely identical to the open-source models.
 
@@ -7079,9 +7079,9 @@ If you find our paper or models helpful, please consider cite:
 title={GME: Improving Universal Multimodal Retrieval by Multimodal LLMs},
 author={Zhang, Xin and Zhang, Yanzhao and Xie, Wen and Li, Mingxin and Dai, Ziqi and Long, Dingkun and Xie, Pengjun and Zhang, Meishan and Li, Wenjie and Zhang, Min},
 year={2024},
- eprint={2412.xxxxx},
+ eprint={2412.16855},
 archivePrefix={arXiv},
 primaryClass={cs.CL},
- url={https://arxiv.org/abs/2412.xxxxx},
+ url={http://arxiv.org/abs/2412.16855},
 }
 ```