---
task_categories:
- question-answering
dataset_info:
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answers
    sequence:
    - name: text
      dtype: string
    - name: answer_start
      dtype: int32
  splits:
  - name: train
    num_bytes: 79061690.62181075
    num_examples: 87285
  - name: validation
    num_bytes: 10388764.166508988
    num_examples: 10485
  download_size: 16137496
  dataset_size: 89450454.78831974
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---

## Dataset Card for "squad"

This truncated dataset is derived from the Stanford Question Answering Dataset (SQuAD), a reading-comprehension benchmark. Its primary aim is to keep only those instances of the original SQuAD dataset that fit within the context length of the BERT, RoBERTa, OPT, and T5 models.

### Preprocessing and Filtering

Preprocessing tokenizes each sample with the BertTokenizer (WordPiece), RobertaTokenizer (byte-level BPE), the OPT tokenizer (byte-pair encoding), and T5Tokenizer (SentencePiece). A sample is retained only if the length of its tokenized input is within the model_max_length of every tokenizer.
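
The sketch below is a minimal illustration of this filtering step, not the exact script used to build the dataset: the checkpoint names are assumptions chosen as representative models, and it assumes the question and context are tokenized as a pair.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Tokenizers whose context limits every retained sample must satisfy
# (checkpoint names are assumptions, chosen as representative models).
CHECKPOINTS = [
    "bert-base-uncased",  # WordPiece
    "roberta-base",       # byte-level BPE
    "facebook/opt-125m",  # byte-pair encoding
    "t5-small",           # SentencePiece
]
tokenizers = [AutoTokenizer.from_pretrained(name) for name in CHECKPOINTS]

def fits_all(example):
    """Return True only if the tokenized (question, context) pair
    is within every tokenizer's model_max_length."""
    for tokenizer in tokenizers:
        input_ids = tokenizer(example["question"], example["context"])["input_ids"]
        if len(input_ids) > tokenizer.model_max_length:
            return False
    return True

squad = load_dataset("squad")
filtered = squad.filter(fits_all)
print({split: ds.num_rows for split, ds in filtered.items()})
```

Filtering row by row keeps the check easy to read; for faster preprocessing, a batched predicate with `filter(..., batched=True)` would amortize the tokenizer calls.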
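
The filtered dataset can then be loaded like any other Hub dataset. The repository id below is a placeholder; substitute this dataset's actual Hub path.

```python
from datasets import load_dataset

# "user/squad-truncated" is a placeholder for this repository's Hub id.
dataset = load_dataset("user/squad-truncated")
print(dataset["train"][0]["question"])
```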