**Evaluation:**

Granite-Embedding-30m-Sparse is competitive in performance with naver/splade-v3-distilbert despite being half the parameter size. We also compare the sparse model with its similarly sized dense embedding counterpart, `ibm-granite/granite-embedding-30m-english`. The performance of the models on MTEB Retrieval (i.e., BEIR) is reported below.
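For intuition on what "sparse" means here, the following is a minimal numpy sketch of the SPLADE-style pooling that models of this family use to turn per-token MLM logits into a single vocabulary-sized sparse vector (log-saturated ReLU followed by max-pooling, with relevance scored as a dot product). The logits below are random stand-ins, and whether Granite-Embedding-30m-Sparse uses exactly this pooling variant is an assumption, not something stated in this README.

```python
import numpy as np

# Toy stand-ins: the real model emits MLM logits of shape
# (num_tokens, vocab_size) over its full vocabulary.
rng = np.random.default_rng(0)
vocab_size = 8
logits = rng.normal(size=(5, vocab_size))   # (num_tokens, vocab_size)
attention_mask = np.array([1, 1, 1, 1, 0])  # last position is padding

# SPLADE-style aggregation: log(1 + ReLU(logits)), masked, then
# max-pooled over token positions -> one weight per vocabulary term.
weights = np.log1p(np.maximum(logits, 0.0)) * attention_mask[:, None]
sparse_vec = weights.max(axis=0)

# Query/document relevance is a dot product over the (mostly zero)
# vocabulary dimensions, which is what makes inverted-index retrieval
# with these vectors cheap.
print(sparse_vec.shape, bool((sparse_vec >= 0).all()))
```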
To maintain consistency with the results reported by `naver/splade-v3-distilbert`, we do not include CQADupstack and MS-MARCO in the table below.
| Model | Parameters (M) | Vocab Size | BEIR Retrieval (13) |
|---------------------------------|:------------:|:-------------------:|:-------------------:|