---
configs:
- config_name: default
  data_files:
  - split: 2k
    path: data/2k-*
  - split: 4k
    path: data/4k-*
  - split: 8k
    path: data/8k-*
  - split: 16k
    path: data/16k-*
dataset_info:
  features:
  - name: conversations
    list:
    - name: from
      dtype: string
    - name: tok_len
      dtype: int64
    - name: value
      dtype: string
  splits:
  - name: 2k
    num_bytes: 3555934
    num_examples: 600
  - name: 4k
    num_bytes: 6926324
    num_examples: 600
  - name: 8k
    num_bytes: 13605196
    num_examples: 600
  - name: 16k
    num_bytes: 24856440
    num_examples: 600
  download_size: 10741984
  dataset_size: 48943894
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64c14f6b02e1f8f67c73bd05/9Jt6ZK8Jvr6YvGb531sHf.png)

# Dataset Card for "WikiQA-Free_Form_QA"

The WikiQA task is the task of answering a question based on the information given in a Wikipedia document. We built upon the short-answer-format data in Google Natural Questions to construct our QA task. Each example is formatted as a document plus a question, and we ensure the answer is short: either a single word or a small sentence copied verbatim from the document. With the task structured this way, we can pinpoint exactly where in the context the LLM was supposed to "look" for the answer, and thus effectively evaluate every part of the expanded context length by carefully placing the answer at different locations.

We selected large Wikipedia documents and truncated them to get multiple versions of the same document, with sizes varying between 2,000 and 16,000 tokens. For each document size, we also have multiple versions that place the question and answer text at different locations, i.e. in the first 10%, the bulk, or the last 10% of the document. Having multiple versions of the same document allows an exhaustive and fair evaluation across model sizes, and across context positions within one model, since we are intrinsically asking for the same information.
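The placement scheme above can be sketched as a small helper. This is a minimal illustration, not code from the dataset pipeline: the record layout mirrors the `conversations` schema in this card's metadata, but the field values and the `placement_bucket` function are hypothetical.

```python
def placement_bucket(answer_start_tok: int, doc_tok_len: int) -> str:
    """Classify where an answer occurs in the document, mirroring the
    first-10% / bulk / last-10% placement described above."""
    frac = answer_start_tok / doc_tok_len
    if frac < 0.10:
        return "first 10%"
    if frac > 0.90:
        return "last 10%"
    return "bulk"

# Illustrative record following the `conversations` feature schema
# (from, tok_len, value); values here are placeholders.
record = {
    "conversations": [
        {"from": "human", "tok_len": 8000, "value": "<document + question>"},
        {"from": "gpt", "tok_len": 12, "value": "<short answer from document>"},
    ]
}

doc_len = record["conversations"][0]["tok_len"]
print(placement_bucket(400, doc_len))   # answer near the start -> "first 10%"
print(placement_bucket(4000, doc_len))  # answer in the middle  -> "bulk"
```

Each split (`2k`, `4k`, `8k`, `16k`) can be loaded individually with `datasets.load_dataset`, passing the split name of the desired document size.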

For further details, see:
[https://github.com/abacusai/Long-Context](https://github.com/abacusai/Long-Context).