Update README.md
README.md
tags:
- embeddings
pipeline_tag: sentence-similarity
---

# Granite-Embedding-30m-Sparse

**Model Summary:**
Granite-Embedding-30m-Sparse is a 30M-parameter sparse biencoder embedding model from the Granite Experimental suite that can be used to generate high-quality text embeddings. The model produces a variable-length, bag-of-words-like dictionary containing expansions of the input's tokens together with their corresponding weights. It is trained on a combination of open-source relevance-pair datasets with permissive, enterprise-friendly licenses and IBM-collected and IBM-generated datasets. While maintaining competitive scores on academic benchmarks such as BEIR, the model also performs well on many enterprise use cases. It is developed using retrieval-oriented pretraining, contrastive finetuning, and knowledge distillation for improved performance.
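
For intuition, the output for a short input is a token-to-weight mapping. The snippet below is purely illustrative (hypothetical tokens and made-up weights, not actual model output):

```python
# Illustrative only: the shape of a sparse, bag-of-words-like embedding.
# Keys are expansion tokens, values are their weights; both are made up.
sparse_embedding = {
    "granite": 2.31,
    "ibm": 1.87,
    "embedding": 1.42,
    "model": 1.05,
    "rock": 0.12,  # an expansion token not present in the input text
}
```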

- **Developers:** Granite Embedding Team, IBM
- **GitHub Repository:** [ibm-granite/granite-embedding-models](https://github.com/ibm-granite/granite-embedding-models)

[...]

```python
client.create_collection(
    # ... (collection schema and index parameters elided in this excerpt)
)

embeddings_model = model.sparse.SpladeEmbeddingFunction(
    model_name="ibm-granite/granite-embedding-30m-sparse",
    device="cpu",
    batch_size=2,
    k_tokens_query=50,
    # ... (remaining arguments elided in this excerpt)
)

# ... (insert and search steps elided in this excerpt)
for r in res:
    ...
```
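
For a self-contained picture, the sketch below wires the same `SpladeEmbeddingFunction` into a local Milvus collection. It is a minimal sketch, assuming `pip install "pymilvus[model]"` and Milvus Lite for local storage; the collection name, field names, and sample texts are illustrative, not part of the original example:

```python
# Minimal sketch, assuming pymilvus>=2.4 with the optional model package
# (`pip install "pymilvus[model]"`) and Milvus Lite for local storage.
from pymilvus import DataType, MilvusClient, model

client = MilvusClient(uri="./milvus_demo.db")  # local Milvus Lite file

# Collection with a sparse vector field (names are illustrative).
schema = client.create_schema(auto_id=True)
schema.add_field("pk", DataType.INT64, is_primary=True)
schema.add_field("text", DataType.VARCHAR, max_length=512)
schema.add_field("sparse_vector", DataType.SPARSE_FLOAT_VECTOR)

index_params = client.prepare_index_params()
index_params.add_index(
    field_name="sparse_vector",
    index_type="SPARSE_INVERTED_INDEX",
    metric_type="IP",  # inner product is the usual metric for SPLADE-style vectors
)
client.create_collection("granite_sparse_demo", schema=schema, index_params=index_params)

embeddings_model = model.sparse.SpladeEmbeddingFunction(
    model_name="ibm-granite/granite-embedding-30m-sparse",
    device="cpu",
    batch_size=2,
    k_tokens_query=50,
)

docs = [
    "Granite embedding models are trained on enterprise-friendly data.",
    "Sparse retrieval expands text into weighted vocabulary tokens.",
]
doc_vecs = embeddings_model.encode_documents(docs)      # scipy sparse rows
client.insert("granite_sparse_demo", [
    {"text": docs[i], "sparse_vector": doc_vecs[[i]]}   # one sparse row per doc
    for i in range(len(docs))
])

query_vecs = embeddings_model.encode_queries(["how are granite models trained?"])
res = client.search(
    "granite_sparse_demo",
    data=query_vecs,
    anns_field="sparse_vector",
    limit=2,
    output_fields=["text"],
)
for r in res[0]:
    print(r["distance"], r["entity"]["text"])
```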

**Evaluation:**

Granite-Embedding-30m-Sparse is competitive in performance with naver/splade-v3-distilbert despite being half its parameter size. We also compare the sparse model with its similarly sized dense embedding counterpart, `ibm-granite/granite-embedding-30m-english`. The performance of the models on MTEB Retrieval (i.e., BEIR) is reported below.
To maintain consistency with the results reported for `naver/splade-v3-distilbert`, we do not include CQADupstack and MS-MARCO in the table below.

| Model                          | Parameters (M) | Vocab Size | BEIR Retrieval (13) |
|--------------------------------|:--------------:|:----------:|:-------------------:|
| naver/splade-v3-distilbert     | 67             | 30522      | 50.0                |
| granite-embedding-30m-english  | 30             | 50265      | 50.6                |
| granite-embedding-30m-sparse   | 30             | 50265      | 50.8                |

**Model Architecture:**
Granite-Embedding-30m-Sparse is based on an encoder-only, RoBERTa-like transformer architecture, trained internally at IBM Research.

| Model            | granite-embedding-30m-sparse |
| :--------------- | :--------------------------: |
| Embedding size   | **384**                      |
| Number of layers | **6**                        |
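
Since the backbone is an encoder-only masked-language model, SPLADE-style sparse weights are typically read off the MLM head: a log-saturated ReLU over the vocabulary logits, max-pooled across the sequence. The sketch below shows that standard formulation; treating it as this model's exact recipe is an assumption:

```python
# Sketch of the standard SPLADE weighting on an MLM-headed encoder.
# Assumption: the checkpoint loads with AutoModelForMaskedLM and uses
# this exact activation/pooling; the model's actual recipe may differ.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "ibm-granite/granite-embedding-30m-sparse"
tokenizer = AutoTokenizer.from_pretrained(model_id)
encoder = AutoModelForMaskedLM.from_pretrained(model_id).eval()

batch = tokenizer("granite models generate sparse embeddings", return_tensors="pt")
with torch.no_grad():
    logits = encoder(**batch).logits                       # (1, seq_len, vocab)

weights = torch.log1p(torch.relu(logits))                  # log-saturated ReLU
weights = weights * batch["attention_mask"].unsqueeze(-1)  # zero out padding
sparse_vec = weights.max(dim=1).values.squeeze(0)          # (vocab,) max-pool

# Decode nonzero entries into the bag-of-words dictionary described above.
ids = sparse_vec.nonzero().squeeze(-1).tolist()
bow = {tokenizer.convert_ids_to_tokens(i): round(sparse_vec[i].item(), 3) for i in ids}
print(sorted(bow.items(), key=lambda kv: -kv[1])[:10])     # top expansion tokens
```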

Overall, the training data consists of four key sources: (1) unsupervised title-[...]

| IBM Internal Triples          | 40,290    |
| IBM Internal Title-Body Pairs | 1,524,586 |

Notably, we do not use the popular MS-MARCO retrieval dataset in our training corpus due to its non-commercial license.

**Infrastructure:**
We train the Granite Embedding Models using IBM's computing cluster, Cognitive Compute Cluster, which is outfitted with NVIDIA A100 80GB GPUs. This cluster provides a scalable and efficient infrastructure for training our models over multiple GPUs.

**Ethical Considerations and Limitations:**
The data used to train the base language model was filtered to remove text containing hate, abuse, and profanity. Granite-Embedding-30m-Sparse is trained only on English texts and has a context length of 512 tokens (longer texts will be truncated to this size).

**Resources**
- ⭐️ Learn about the latest updates with Granite: https://www.ibm.com/granite