Update README.md
README.md
CHANGED
````diff
@@ -281,16 +281,9 @@ ContextualBench is a powerful evaluation framework designed to assess the perfor
 * Extensive Evaluation Dataset: Supports 7 contextual tasks, including: Question Answering (QA), Multi-Hop Question Answering, Classification tasks
 * Multi-LLM Family Support: Compatible with a wide range of LLM families, including: Hugging Face models, Gemma, Mistral, OpenAI, Cohere.
 
-
-The dataset can be loaded using the command
-```python
-task = "hotpotqa" # it can be any other option
-load_dataset("Salesforce/ContextualBench", task, split="validation")
-```
-
 ## Component Datasets of ContextualBench
 
->
+> Users need to make their own assessment regarding any obligations or responsibilities under the corresponding licenses or terms and conditions pertaining to the original datasets and data.
 
 ### 2WikiHotpotQA
 
````