# OpenDecoder
This model implements the OpenDecoder architecture described in [OpenDecoder: Open Large Language Model Decoding to Incorporate Document Quality in RAG](https://arxiv.org/abs/2601.09028), a scalable approach for integrating retrieval signals directly into autoregressive generation.
The checkpoint we release here is trained on the NQ and HotpotQA datasets under the **robust training** setting introduced in the paper. For each query, ten passages are constructed as input (a sketch of the construction follows the list):

- the top-5 highest-ranked passages, which are always included;
- three passages randomly sampled from ranks 6–100, representing partially relevant context;
- two passages randomly sampled from beyond rank 100 in the collection, simulating irrelevant documents.
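For reference, here is a minimal sketch of that construction. The helper name and the assumptions that `ranked_passages` is sorted by retriever rank over the full collection and that passages are concatenated in this order are ours, not taken from the paper:

```python
import random

def build_robust_context(ranked_passages, seed=None):
    """Assemble the 10-passage input of the robust training setting.

    Assumes `ranked_passages` is sorted by retriever rank
    (index 0 = rank 1) and spans the whole collection, so that
    ranks beyond 100 exist to sample from.
    """
    rng = random.Random(seed)
    top5 = ranked_passages[:5]                         # top-5 passages, always included
    partial = rng.sample(ranked_passages[5:100], 3)    # 3 from ranks 6-100: partially relevant
    irrelevant = rng.sample(ranked_passages[100:], 2)  # 2 from beyond rank 100: irrelevant
    return top5 + partial + irrelevant
```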
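As a Hub checkpoint, the model should load through the standard `transformers` API. The repository id below is a placeholder, and `trust_remote_code=True` is an assumption in case the OpenDecoder architecture ships custom modeling code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "OpenDecoder/opendecoder-robust"  # placeholder: substitute the actual repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
```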