Aharneish committed on
Commit d857c99
1 Parent(s): 51c5da8

update model card README.md

Files changed (1):
  1. README.md +4 -10
README.md CHANGED

````diff
@@ -20,14 +20,8 @@ More information needed
 
 ## Intended uses & limitations
 
-the model can be used using the following commands
-```python
-from transformers import AutoTokenizer, AutoModelForQuestionAnswering
-
-tokenizer = AutoTokenizer.from_pretrained("Aharneish/qa-model")
+More information needed
 
-model = AutoModelForQuestionAnswering.from_pretrained("Aharneish/qa-model")
-```
 ## Training and evaluation data
 
 More information needed
@@ -37,13 +31,13 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
+- learning_rate: 3e-05
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 40
+- num_epochs: 50
 - mixed_precision_training: Native AMP
 
 ### Training results
@@ -52,7 +46,7 @@ The following hyperparameters were used during training:
 
 ### Framework versions
 
-- Transformers 4.27.4
+- Transformers 4.28.1
 - Pytorch 2.0.0+cu118
 - Datasets 2.11.0
 - Tokenizers 0.13.3
````
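For reference, the hyperparameters as they stand after this commit can be gathered into a single mapping. This is an illustrative sketch only (the dict name and layout are not part of the model card); it simply restates the values from the updated README:

```python
# Training hyperparameters from the model card after commit d857c99.
# The structure below is illustrative; the card lists these as plain bullets.
training_config = {
    "learning_rate": 3e-05,           # changed from 2e-05 in this commit
    "train_batch_size": 8,
    "eval_batch_size": 8,
    "seed": 42,
    "optimizer": {"name": "Adam", "betas": (0.9, 0.999), "epsilon": 1e-08},
    "lr_scheduler_type": "linear",
    "num_epochs": 50,                 # changed from 40 in this commit
    "mixed_precision_training": "Native AMP",
}
```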