Update README.md

---

# L-CITEEVAL: DO LONG-CONTEXT MODELS TRULY LEVERAGE CONTEXT FOR RESPONDING?
[Paper](https://arxiv.org/abs/2410.02115)   [Github](https://github.com/ZetangForward/L-CITEEVAL)   [Zhihu](https://zhuanlan.zhihu.com/p/817442176)

## Benchmark Quickview

*L-CiteEval* is a multi-task long-context understanding benchmark with citations. It covers **5 task categories** (single-document question answering, multi-document question answering, summarization, dialogue understanding, and synthetic tasks), encompassing **11 different long-context tasks** with context lengths ranging from **8K to 48K**.

![](assets/dataset.png)

## Data Preparation

#### Load Data
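A minimal sketch of loading one task subset with the Hugging Face `datasets` library; the Hub repo id, config name, and split below are placeholders rather than values taken from this README, so substitute the actual ones listed in the Github repository:

```python
from datasets import load_dataset

# Placeholder Hub repo id, task config, and split -- replace with the
# actual values from the Github repository (one config per task).
dataset = load_dataset("<hub-org>/L-CiteEval", "<task_name>", split="test")

# Inspect one example: a long context plus a query; model outputs are
# expected to include citations to the supporting context segments.
print(dataset[0])
```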