---
license: apache-2.0
pipeline_tag: question-answering
tags:
- question-answering
- transformers
- generated_from_trainer
datasets:
- squad_v2
- LLukas22/nq-simplified
language:
- en
---

# all-MiniLM-L12-v2-qa-en

This model is an extractive question-answering model. It is a fine-tuned version of [all-MiniLM-L12-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L12-v2), trained on the following datasets: [squad_v2](https://huggingface.co/datasets/squad_v2) and [LLukas22/nq-simplified](https://huggingface.co/datasets/LLukas22/nq-simplified).

## Usage

You can use the model with a `question-answering` pipeline:

```python
from transformers import pipeline

# Build a question-answering pipeline and make a prediction
model_name = "LLukas22/all-MiniLM-L12-v2-qa-en"
nlp = pipeline("question-answering", model=model_name, tokenizer=model_name)

QA_input = {
    "question": "What's my name?",
    "context": "My name is Clara and I live in Berkeley.",
}
result = nlp(QA_input)
print(result)
```

Alternatively, you can load the model and tokenizer on their own:

```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Load the fine-tuned model and its tokenizer
model_name = "LLukas22/all-MiniLM-L12-v2-qa-en"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```

A sketch of running inference manually with a model and tokenizer loaded this way is given at the end of this card.

## Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2E-05
- per-device batch size: 60
- effective batch size: 180
- seed: 42
- optimizer: AdamW with betas (0.9, 0.999) and eps 1E-08
- weight decay: 1E-02
- D-Adaptation: False
- Warmup: False
- number of epochs: 10
- mixed_precision_training: bf16

## Training results

| Epoch | Train Loss | Validation Loss |
| ----- | ---------- | --------------- |
| 0     | 2.65       | 1.88            |
| 1     | 1.83       | 1.74            |
| 2     | 1.69       | 1.69            |
| 3     | 1.63       | 1.68            |
| 4     | 1.60       | 1.67            |
| 5     | 1.58       | 1.66            |
| 6     | 1.57       | 1.66            |
| 7     | 1.57       | 1.66            |

## Evaluation results

| Epoch | f1    | exact_match |
| ----- | ----- | ----------- |
| 0     | 0.507 | 0.378       |
| 1     | 0.530 | 0.418       |
| 2     | 0.544 | 0.431       |
| 3     | 0.552 | 0.429       |
| 4     | 0.557 | 0.439       |
| 5     | 0.561 | 0.438       |
| 6     | 0.564 | 0.441       |
| 7     | 0.566 | 0.441       |

## Framework versions

- Transformers: 4.25.1
- PyTorch: 2.0.0+cu118
- PyTorch Lightning: 1.8.6
- Datasets: 2.7.1
- Tokenizers: 0.13.1
- Sentence Transformers: 2.2.2

## Additional Information

This model was trained as part of my Master's Thesis, **"Evaluation of transformer based language models for use in service information systems"**. The source code is available on [GitHub](https://github.com/LLukas22/Retrieval-Augmented-QA).
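
As referenced in the Usage section, here is a minimal sketch of extracting an answer span manually with a separately loaded model and tokenizer. The `question` and `context` strings are the example inputs from the pipeline snippet above; the span-decoding logic is a simplified assumption that takes the argmax of the start and end logits and omits the score normalization and "no answer" handling that the `question-answering` pipeline performs for SQuAD v2-style models.

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

model_name = "LLukas22/all-MiniLM-L12-v2-qa-en"
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

question = "What's my name?"
context = "My name is Clara and I live in Berkeley."

# Encode the question/context pair and run the model
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the most likely start and end token positions (simplified:
# no handling of the unanswerable case that SQuAD v2 models support)
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()

# Decode the predicted span back into text
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```

For production use, prefer the pipeline shown in the Usage section, which handles long contexts, answer scoring, and unanswerable questions for you.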