---
language: en
tags:
- question-answering
- pytorch
- distilbert
- squad
license: mit
datasets:
- squad
pipeline_tag: question-answering
model-index:
- name: bert-sliding-window_epoch_1
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: SQuAD
      type: squad
      config: plain_text
      split: validation
    metrics:
    - type: exact_match
      value: N/A
      name: Exact Match
    - type: f1
      value: N/A
      name: F1
---

# bert-sliding-window_epoch_1

## Model description

This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) for extractive question answering, trained on the SQuAD dataset.

## Intended uses & limitations

This model is intended for question answering on SQuAD-like datasets. It performs best on factual questions whose answer appears verbatim as a span of text within the given context.

## Training Details

### Training Data

The model was trained on the SQuAD dataset, which consists of questions posed by crowdworkers on a set of Wikipedia articles.

### Training Hyperparameters

* learning_rate: 1e-05
* batch_size: 32
* num_epochs: 10
* weight_decay: 0.01

## Uses

This model can be used for:

- extracting answers from text passages, given a question
- question answering tasks
- reading comprehension tasks

A usage sketch is given under "How to use" below.

## Limitations

- The model can only extract answers that are directly present in the given context; it cannot generate free-form answers.
- Performance may degrade on out-of-domain text.
- The model may struggle with questions that require complex reasoning.

## Additional Information

- Model type: DistilBERT
- Language: English
- License: MIT
- Framework: PyTorch
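
## How to use

A minimal usage sketch with the Hugging Face Transformers `pipeline` API. The repo id `bert-sliding-window_epoch_1` below is a placeholder taken from this card's title; substitute the full Hub path the checkpoint is actually published under.

```python
from transformers import pipeline

# Placeholder repo id; replace with the actual Hub path of this checkpoint.
qa = pipeline("question-answering", model="bert-sliding-window_epoch_1")

result = qa(
    question="Who posed the questions in SQuAD?",
    context=(
        "SQuAD consists of questions posed by crowdworkers on a set of "
        "Wikipedia articles, where the answer to every question is a "
        "segment of text from the corresponding passage."
    ),
)
print(result)  # {'score': ..., 'start': 37, 'end': 49, 'answer': 'crowdworkers'}
```

The pipeline returns the highest-scoring answer span together with its character offsets into the context, which matches the extractive behaviour described under "Uses" above.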
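
## Handling long contexts

The model name suggests a sliding-window setup, and DistilBERT reads at most 512 tokens at a time, so contexts longer than the maximum sequence length must be split into overlapping windows. The sketch below shows standard SQuAD-style sliding-window tokenization; `max_length=384` and `stride=128` are common SQuAD defaults assumed here, not values confirmed by this card.

```python
from transformers import AutoTokenizer

# Placeholder repo id, as above.
tokenizer = AutoTokenizer.from_pretrained("bert-sliding-window_epoch_1")

question = "What were the questions posed on?"
# Any passage longer than the model's window; repeated filler for illustration.
long_context = " ".join(["SQuAD questions were posed on Wikipedia articles."] * 200)

features = tokenizer(
    question,
    long_context,
    truncation="only_second",        # truncate only the context, never the question
    max_length=384,                  # assumed window size (common SQuAD default)
    stride=128,                      # assumed token overlap between windows
    return_overflowing_tokens=True,  # emit one feature per window
    return_offsets_mapping=True,     # token-to-character spans, for span extraction
    padding="max_length",
    return_tensors="pt",
)
# One row per overlapping window of the context.
print(features["input_ids"].shape)  # (num_windows, 384)
```

The `question-answering` pipeline applies the same windowing internally: its `max_seq_len` and `doc_stride` call arguments control the window size and overlap, and the best-scoring span across all windows is returned.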