rewrite results note: simpler language, clarify CoIR task scope
README.md (CHANGED)

@@ -62,7 +62,7 @@ Results on the [CoIR benchmark](https://github.com/CoIR-team/coir) (NDCG@10, `mt
 | potion-retrieval-32M | 32M | 32.10 | 4.22 | 31.80 | 36.71 | 45.11 | 38.64 | 29.97 | 32.62 | 8.70 | 56.26 | 36.93 |
 | potion-base-32M | 32M | 31.42 | 3.37 | 29.58 | 34.77 | 42.69 | 37.88 | 28.51 | 30.55 | 14.61 | 53.36 | 38.88 |
 
-
+CoIR covers a broad range of code retrieval scenarios. For the use case of finding code given a natural language query, **CosQA** and **CodeFeedback (ST/MT)** are the most relevant tasks. Others are less so: **COIRCodeSearchNetRetrieval** retrieves text given a code query (the reverse direction), and the **CodeTransOcean** tasks target cross-language code translation. The hybrid row combines dense retrieval with BM25 using min-max score normalization and equal weighting (`alpha=0.5`).
 
 ## Model Details
 
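The hybrid combination described in the added paragraph (min-max normalization of each retriever's scores, then an equally weighted sum) can be sketched as follows. This is a minimal illustration of the general technique, not code from the model repository; the function and variable names are hypothetical.

```python
def hybrid_scores(dense: dict, bm25: dict, alpha: float = 0.5) -> dict:
    """Fuse dense and BM25 per-document scores.

    Each score set is min-max normalized to [0, 1] independently,
    then combined as alpha * dense + (1 - alpha) * bm25.
    alpha=0.5 gives the equal weighting used in the hybrid row.
    """
    def minmax(scores: dict) -> dict:
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0  # guard against all-equal scores
        return {doc: (s - lo) / span for doc, s in scores.items()}

    d, b = minmax(dense), minmax(bm25)
    docs = set(d) | set(b)  # a doc may appear in only one ranking
    return {doc: alpha * d.get(doc, 0.0) + (1 - alpha) * b.get(doc, 0.0)
            for doc in docs}
```

Normalizing before fusing matters because raw dense similarities and BM25 scores live on very different scales; without it, one retriever's scores would dominate regardless of `alpha`.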