---
configs:
- config_name: default
  data_files:
  - split: train
    path: "data/train_instances.json"
  - split: dev
    path: "data/dev_instances.json"
  - split: test
    path: "data/test_instances.json"
- config_name: has_html
  data_files:
  - split: train
    path: "data/train_instances_with_html.json"
  - split: dev
    path: "data/dev_instances_with_html.json"
  - split: test
    path: "data/test_instances_with_html.json"
---
# Preprocessed QASPER dataset
Working doc: https://docs.google.com/document/d/1gYPhPNJ5LGttgjix1dwai8pdNcqS6PbqhsM7W0rhKNQ/edit?usp=sharing
Original:
- Dataset: https://github.com/allenai/qasper-led-baseline
- Baseline repo: https://github.com/allenai/qasper-led-baseline
- HF: https://huggingface.co/datasets/allenai/qasper
Differences between our implementation and the original:
1. We use the dataset provided at https://huggingface.co/datasets/allenai/qasper since it doesn't require manually downloading files.
2. We remove the dependency on `allennlp`, since the package can no longer be installed.
3. We add baselines to [qasper/models](qasper/models/). Currently, we have:
   - QASPER (Longformer Encoder Decoder)
   - GPT-3.5-Turbo
   - TODO: RAG (with a TF-IDF or Contriever retriever), possibly implemented in LangChain?
4. We replace the `allennlp` special tokens with the corresponding special tokens of the HF transformers tokenizer (see the sketch after this list):
   - paragraph separator: `'</s>'` -> `tokenizer.sep_token`
   - sequence-pair start tokens: `_tokenizer.sequence_pair_start_tokens` -> `tokenizer.bos_token`
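As a minimal sketch of what this substitution looks like at preprocessing time (assuming the `allenai/led-base-16384` tokenizer used by the LED baseline; the exact input format of the original baseline may differ):

```python
from transformers import AutoTokenizer

# Tokenizer of the LED baseline (model name assumed here).
tokenizer = AutoTokenizer.from_pretrained("allenai/led-base-16384")

# Join paper paragraphs with the tokenizer's own separator token
# instead of the hard-coded allennlp '</s>' string.
paragraphs = ["First paragraph of the paper.", "Second paragraph of the paper."]
context = tokenizer.sep_token.join(paragraphs)

# Use the tokenizer's BOS token where allennlp used its
# sequence-pair start tokens.
question = "What baselines do the authors compare against?"
model_input = f"{tokenizer.bos_token}{question}{tokenizer.sep_token}{context}"

encoded = tokenizer(model_input, truncation=True, max_length=16384)
```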
## Usage
```python
from datasets import load_dataset

dataset = load_dataset("ag2435/qasper")
```
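To load the variant whose instances retain the papers' HTML, pass the `has_html` config name (the splits follow the YAML header above):

```python
from datasets import load_dataset

# "has_html" is the second config defined in the YAML header above.
dataset = load_dataset("ag2435/qasper", "has_html")
train, dev, test = dataset["train"], dataset["dev"], dataset["test"]
```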