BAAI
ldwang committed on commit 7999e1d
1 parent: a7ec183
Files changed (1)
  1. README.md (+34 −16)
README.md CHANGED
@@ -30,25 +30,36 @@ FlagEmbedding can map any text to a low-dimensional dense vector which can be us
  And it also can be used in vector databases for LLMs.

  ************* 🌟**Updates**🌟 *************
- - 09/15/2023: Release [paper](https://arxiv.org/pdf/2309.07597.pdf) and [dataset](https://data.baai.ac.cn/details/BAAI-MTP).
- - 09/12/2023: New Release:
+ - 10/12/2023: Release [LLM-Embedder](./FlagEmbedding/llm_embedder/README.md), a unified embedding model to support diverse retrieval augmentation needs for LLMs. [Paper](https://arxiv.org/pdf/2310.07554.pdf) :fire:
+ - 09/15/2023: The [technical report](https://arxiv.org/pdf/2309.07597.pdf) of BGE has been released.
+ - 09/15/2023: The [massive training data](https://data.baai.ac.cn/details/BAAI-MTP) of BGE has been released.
+ - 09/12/2023: New models:
    - **New reranker model**: release the cross-encoder models `BAAI/bge-reranker-base` and `BAAI/bge-reranker-large`, which are more powerful than the embedding models. We recommend using/fine-tuning them to re-rank the top-k documents returned by embedding models.
    - **Updated embedding model**: release the `bge-*-v1.5` embedding models to alleviate the issue of the similarity distribution and enhance retrieval ability without instruction.
- - 09/07/2023: Update [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): Add script to mine hard negatives and support adding instruction during fine-tuning.
+
+ <details>
+ <summary>More</summary>
+ <!-- ### More -->
+
+ - 09/07/2023: Update the [fine-tune code](https://github.com/FlagOpen/FlagEmbedding/blob/master/FlagEmbedding/baai_general_embedding/README.md): add a script to mine hard negatives and support adding an instruction during fine-tuning.
  - 08/09/2023: BGE models are integrated into **Langchain**; you can use them like [this](#using-langchain). The C-MTEB **leaderboard** is [available](https://huggingface.co/spaces/mteb/leaderboard).
- - 08/05/2023: Release base-scale and small-scale models, **best performance among the models of the same size 🤗**
- - 08/02/2023: Release `bge-large-*`(short for BAAI General Embedding) Models, **rank 1st on MTEB and C-MTEB benchmark!** :tada: :tada:
- - 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test dataset.
+ - 08/05/2023: Release base-scale and small-scale models, **best performance among models of the same size 🤗**
+ - 08/02/2023: Release the `bge-large-*` (short for BAAI General Embedding) models, **ranked 1st on the MTEB and C-MTEB benchmarks!** :tada: :tada:
+ - 08/01/2023: We release the [Chinese Massive Text Embedding Benchmark](https://github.com/FlagOpen/FlagEmbedding/blob/master/C_MTEB) (**C-MTEB**), consisting of 31 test datasets.
+
+ </details>


  ## Model List

  `bge` is short for `BAAI general embedding`.

- | Model | Language | | Description | query instruction for retrieval\* |
+ | Model | Language | | Description | query instruction for retrieval [1] |
  |:-------------------------------|:--------:|:--------:|:--------:|:--------:|
- | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient \** | |
- | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient \** | |
+ | [BAAI/llm-embedder](https://huggingface.co/BAAI/llm-embedder) | English | [Inference](./FlagEmbedding/llm_embedder/README.md) [Fine-tune](./FlagEmbedding/llm_embedder/README.md) | a unified embedding model to support diverse retrieval augmentation needs for LLMs | See [README](./FlagEmbedding/llm_embedder/README.md) |
+ | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
+ | [BAAI/bge-reranker-base](https://huggingface.co/BAAI/bge-reranker-base) | Chinese and English | [Inference](#usage-for-reranker) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker) | a cross-encoder model which is more accurate but less efficient [2] | |
  | [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with a more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
  | [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with a more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
  | [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) | English | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | version 1.5 with a more reasonable similarity distribution | `Represent this sentence for searching relevant passages: ` |
@@ -63,11 +74,15 @@ And it also can be used in vector databases for LLMs.
  | [BAAI/bge-small-zh](https://huggingface.co/BAAI/bge-small-zh) | Chinese | [Inference](#usage-for-embedding-model) [Fine-tune](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/finetune) | a small-scale model with competitive performance | `为这个句子生成表示以用于检索相关文章：` |


- \*: If you need to search the relevant passages to a query, we suggest to add the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.
+ [1\]: If you need to search for passages relevant to a query, we suggest adding the instruction to the query; in other cases, no instruction is needed, just use the original query directly. In all cases, **no instruction** needs to be added to passages.

- \**: Different from embedding model, reranker uses question and document as input and directly output similarity instead of embedding. To balance the accuracy and time cost, cross-encoder is widely used to re-rank top-k documents retrieved by other simple models.
+ [2\]: Unlike an embedding model, a reranker takes a question and a document as input and directly outputs a similarity score instead of an embedding. To balance accuracy and time cost, a cross-encoder is widely used to re-rank the top-k documents retrieved by simpler models.
  For example, use the bge embedding model to retrieve the top 100 relevant documents, and then use the bge reranker to re-rank those 100 documents to get the final top-3 results (a sketch of this pipeline follows this hunk).

+ All models have been uploaded to the Huggingface Hub; you can find them at https://huggingface.co/BAAI.
+ If you cannot open the Huggingface Hub, you can also download the models at https://model.baai.ac.cn/models .
+
+
  ## Frequently asked questions

  <details>
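The retrieve-then-rerank pipeline described in note [2\] above can be sketched with the `FlagEmbedding` package this README documents. This is a minimal sketch, not the repository's verbatim example: the model names come from the model list above, while the query, the passages, and the top-k sizes are invented for illustration.

```python
from FlagEmbedding import FlagModel, FlagReranker

# Toy corpus; in practice the passages come from your document store.
query = "What is BGE?"
passages = [
    "BGE is a general embedding model released by BAAI.",
    "Giant pandas mostly eat bamboo.",
    "A cross-encoder scores a (query, passage) pair jointly.",
]

# Stage 1: bi-encoder retrieval (fast, approximate).
embedder = FlagModel(
    "BAAI/bge-base-en-v1.5",
    query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ",
)
q_emb = embedder.encode_queries([query])   # instruction is prepended to the query
p_emb = embedder.encode(passages)          # passages get no instruction
scores = (q_emb @ p_emb.T)[0]              # inner product of normalized embeddings
candidates = scores.argsort()[::-1][:2]    # keep a small top-k (top 100 in the note)

# Stage 2: cross-encoder re-ranking (slow, accurate) of the candidates only.
reranker = FlagReranker("BAAI/bge-reranker-base")
pair_scores = reranker.compute_score([[query, passages[i]] for i in candidates])
reranked = sorted(zip(candidates, pair_scores), key=lambda t: t[1], reverse=True)
print("best passage:", passages[reranked[0][0]])
```

Running the cross-encoder only on the bi-encoder's candidates is what preserves the accuracy/time balance the note describes.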
@@ -104,7 +119,11 @@ please select an appropriate similarity threshold based on the similarity distri
  <summary>3. When does the query instruction need to be used</summary>

  <!-- ### When does the query instruction need to be used -->
-
+
+ For `bge-*-v1.5`, we improved its retrieval ability when no instruction is used.
+ Omitting the instruction causes only a slight degradation in retrieval performance compared with using it.
+ So for convenience, you can generate embeddings without an instruction in all cases.
+
  For a retrieval task that uses short queries to find long related documents,
  it is recommended to add instructions for these short queries (a sketch comparing the two settings follows this hunk).
  **The best method to decide whether to add instructions for queries is choosing the setting that achieves better performance on your task.**
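To make this trade-off concrete, here is a minimal sketch, again assuming the `FlagEmbedding` package and an invented query/passage pair, that scores the same query with and without the retrieval instruction; per the FAQ above, for `bge-*-v1.5` the two scores should be close.

```python
from FlagEmbedding import FlagModel

queries = ["how to bake bread"]
passages = ["Mix flour, water, salt and yeast, then bake the loaf at 230 C."]

# Model configured with the English retrieval instruction from the model list.
model = FlagModel(
    "BAAI/bge-base-en-v1.5",
    query_instruction_for_retrieval="Represent this sentence for searching relevant passages: ",
)

p_emb = model.encode(passages)            # passages never get an instruction
q_with = model.encode_queries(queries)    # instruction prepended automatically
q_without = model.encode(queries)         # plain encode(): no instruction

print("with instruction:   ", (q_with @ p_emb.T)[0, 0])
print("without instruction:", (q_without @ p_emb.T)[0, 0])
```

Comparing the two scores on a sample of your own queries is the practical way to pick the better setting for your task.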
@@ -364,7 +383,7 @@ which is more accurate than embedding model (i.e., bi-encoder) but more time-con
  Therefore, it can be used to re-rank the top-k documents returned by embedding models.
  We train the cross-encoder on multilingual pair data.
  The data format is the same as for the embedding model, so you can fine-tune it easily following our [example](https://github.com/FlagOpen/FlagEmbedding/tree/master/examples/reranker); the expected record layout is sketched after this hunk.
- More details pelease refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker)
+ For more details, please refer to [./FlagEmbedding/reranker/README.md](https://github.com/FlagOpen/FlagEmbedding/tree/master/FlagEmbedding/reranker).


  ## Contact
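For orientation, the fine-tuning data mentioned above is a JSON-lines file in which each record pairs a query with positive passages and (optionally mined) hard negatives. The `query`/`pos`/`neg` field names below follow the FlagEmbedding fine-tuning examples; treat the exact schema as an assumption and verify it against the linked READMEs before training.

```python
import json

# One training record per line (JSON Lines). "pos" holds relevant passages,
# "neg" holds hard negatives, e.g. mined with the 09/07/2023 script above.
# Field names are assumed from the FlagEmbedding fine-tuning examples.
record = {
    "query": "what does a cross-encoder do",
    "pos": ["A cross-encoder scores a query and a document jointly."],
    "neg": ["Giant pandas mostly eat bamboo.", "BGE maps text to dense vectors."],
}

with open("toy_train_data.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```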
@@ -374,7 +393,8 @@ You also can email Shitao Xiao(stxiao@baai.ac.cn) and Zheng Liu(liuzheng@baai.ac

  ## Citation

- If you find our work helpful, please cite us:
+ If you find this repository useful, please consider giving it a star :star: and a citation:
+
  ```
  @misc{bge_embedding,
  title={C-Pack: Packaged Resources To Advance General Chinese Embedding},
@@ -389,5 +409,3 @@ If you find our work helpful, please cite us:
  ## License
  FlagEmbedding is licensed under the [MIT License](https://github.com/FlagOpen/FlagEmbedding/blob/master/LICENSE). The released models can be used for commercial purposes free of charge.

-
-
 