---
license: apache-2.0
tags:
- Question Answering
metrics:
- squad
model-index:
- name: consciousAI/question-answering-roberta-base-s-v2
  results: []
---

# Question Answering
The model is intended for Q&A tasks: given a question and a context, it attempts to infer the answer text, the answer span, and a confidence score.<br>
The model is encoder-only (deepset/roberta-base-squad2) with a QuestionAnswering LM head, fine-tuned on the SQuAD dataset with **exact_match:** 84.83 and **f1:** 91.80 performance scores.

[Live Demo: Question Answering Encoders vs Generative](https://huggingface.co/spaces/consciousAI/question_answering)

Please follow this link for [Encoder based Question Answering V1](https://huggingface.co/consciousAI/question-answering-roberta-base-s/)<br>
Please follow this link for [Generative Question Answering](https://huggingface.co/consciousAI/question-answering-generative-t5-v1-base-s-q-c/)

Example code:
```python
from transformers import pipeline

model_checkpoint = "consciousAI/question-answering-roberta-base-s-v2"

context = """
🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration
between them. It's straightforward to train your models with one before loading them for inference with the other.
"""
question = "Which deep learning libraries back 🤗 Transformers?"

question_answerer = pipeline("question-answering", model=model_checkpoint)
question_answerer(question=question, context=context)
# returns a dict with the answer text, its start/end character span, and a confidence score
```

## Training and evaluation data

SQuAD split.

## Training procedure

Preprocessing:
1. Longer SQuAD contexts were sub-chunked with an input max-length of 384 tokens and a stride of 128 tokens.
2. Target answers were readjusted for the sub-chunks; sub-chunks with no answer, or only a partial answer, had their target answer span set to (0, 0).

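The sliding-window sub-chunking described above can be sketched as follows; `sub_chunk` and the toy token values are illustrative, not the actual training code:

```python
# Minimal sketch of sliding-window sub-chunking: split a long token
# sequence into overlapping windows of at most `max_length` tokens,
# advancing by `max_length - stride` tokens each step.
def sub_chunk(tokens, max_length=384, stride=128):
    if len(tokens) <= max_length:
        return [tokens]
    step = max_length - stride
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(tokens[start:start + max_length])
        if start + max_length >= len(tokens):
            break
    return chunks

# A 1000-token context yields overlapping 384-token sub-chunks;
# consecutive chunks share `stride` (128) tokens.
chunks = sub_chunk(list(range(1000)))
```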
Metrics:
1. Adjusted accordingly to handle sub-chunking.
2. n_best = 20
3. Skip answers with length zero or longer than the max answer length (30).

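A hedged sketch of the span-selection rule implied by points 2 and 3 above: take the top-`n_best` start and end logit positions, drop spans of non-positive length or longer than `max_answer_length`, and keep the highest-scoring remainder. The function name and scoring layout here are hypothetical, not the model's actual evaluation code:

```python
# Score candidate answer spans as start_logit + end_logit, considering
# only the top-n_best indices on each side, and skipping spans whose
# length is zero/negative or above max_answer_length.
def best_span(start_logits, end_logits, n_best=20, max_answer_length=30):
    def top(logits):
        return sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:n_best]

    best = None
    for s in top(start_logits):
        for e in top(end_logits):
            length = e - s + 1
            if length < 1 or length > max_answer_length:
                continue  # skip empty or over-long answers
            score = start_logits[s] + end_logits[e]
            if best is None or score > best[2]:
                best = (s, e, score)
    return best  # (start_index, end_index, score)
```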
### Training hyperparameters

Custom training loop. The following hyperparameters were used during training:
- learning_rate: 2e-5
- train_batch_size: 32
- eval_batch_size: 32
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2

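Under the linear schedule above (no warmup is mentioned), the learning rate decays from 2e-5 to zero over the course of training. A minimal illustration, with hypothetical step counts:

```python
# Linear decay of the learning rate from base_lr to 0 over total_steps.
def linear_lr(step, total_steps, base_lr=2e-5):
    return base_lr * max(0.0, 1.0 - step / total_steps)

# Halfway through training the learning rate has halved.
```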

### Training results

{'exact_match': 84.83443708609272, 'f1': 91.79987545811638}

### Framework versions

- Transformers 4.23.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.13.0