Question Answering · PEFT · English · medical
Tonic committed · commit 88f8ae5 · 1 parent: 3c8ac2c

Update README.md

Files changed (1): README.md (+19, -28)
README.md CHANGED
@@ -9,36 +9,24 @@ tags:
  - medical
  ---

- ---
- # For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
- # Doc / guide: https://huggingface.co/docs/hub/model-cards
- {{ card_data }}
- ---
-
- # Model Card for {{ model_id | default("Model ID", true) }}
+ # Model Card for GaiaMiniMed

  This is a medical fine tuned model from the [Falcon-7b-Instruction](https://huggingface.co/tiiuae/falcon-7b-instruct) Base using 500 steps & 6 epochs with [MedAware](https://huggingface.co/datasets/keivalya/MedQuad-MedicalQnADataset) Dataset from [keivalya](https://huggingface.co/datasets/keivalya)
- {{ model_summary | default("", true) }}
+

  ## Model Details

  ### Model Description

- <!-- Provide a longer summary of what this model is. -->
-
- {{ model_description | default("", true) }}
-
- - **Developed by:** {{ developers | default("[[Tonic](https://www.huggingface.co/tonic)]", true)}}
- - **Shared by [optional]:** {{ shared_by | default("[[Tonic](https://www.huggingface.co/tonic)]", true)}}
- - **Model type:** {{ model_type | default("[Medical Fine-Tuned Conversational Falcon 7b (Instruct)]", true)}}
- - **Language(s) (NLP):** {{ language | default("[More Information Needed]", true)}}
- - **License:** {{ license | default("[More Information Needed]", true)}}
- - **Finetuned from model [optional]:** {{ finetuned_from | default("[tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct)", true)}}
-
+ - **Developed by:** [Tonic](https://www.huggingface.co/tonic)
+ - **Shared by :** [Tonic](https://www.huggingface.co/tonic)
+ - **Model type:** Medical Fine-Tuned Conversational Falcon 7b (Instruct)
+ - **Language(s) (NLP):** English
+ - **License:** MIT
+ - **Finetuned from model:**[tiiuae/falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct)
+ -
  ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->
-
  - **Repository:** https://github.com/Josephrp/AI-challenge-hackathon/blob/master/falcon_7b_instruct_GaiaMiniMed_dataset.ipynb
  - **Demo [optional]:** {{ demo | default("[More Information Needed]", true)}}
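The card's "Use the code below to get started with the model" section is unchanged by this commit and therefore not shown in the diff. A minimal inference sketch, assuming the fine-tuned weights are published as a PEFT adapter on top of `tiiuae/falcon-7b-instruct` (the adapter repo id below is a placeholder, not something stated in this diff):

```python
# Sketch: load the Falcon-7b-Instruct base and apply the GaiaMiniMed PEFT adapter.
# ASSUMPTION: the adapter repo id below; substitute the actual published adapter.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "tiiuae/falcon-7b-instruct"   # base model named in the card
adapter_id = "Tonic/GaiaMiniMed"        # placeholder adapter repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # older transformers releases need Falcon's custom code
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

prompt = "What are the symptoms of iron-deficiency anemia?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Keeping the fine-tune as an adapter means only the LoRA deltas need to be downloaded on top of the stock Falcon-7b-Instruct checkpoint.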
 
@@ -54,7 +42,7 @@ This model should perform better at medical QnA tasks in a conversational manner

  It is our hope that it will help improve patient outcomes and public health.

- ### Downstream Use [optional]
+ ### Downstream Use

  Use this model next to others and have group conversations to produce diagnoses , public health advisory , and personal hygene improvements.

@@ -78,7 +66,6 @@ Use the code below to get started with the model.

  ### Results

- {{ results | default("[

  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/62a3bb1cd0d8c2c2169f0b88/F8GfMSJcAaH7pXvpUK_r3.png)

@@ -87,31 +74,34 @@ Use the code below to get started with the model.
  TrainOutput(global_step=6150, training_loss=1.0597990553941183,
  {'epoch': 6.0})
  ```
- ]", true)}}


  ### Training Data

- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

  {{ training_data | default("
  ```json
+
  DatasetDict({
      train: Dataset({
          features: ['qtype', 'Question', 'Answer'],
          num_rows: 16407
      })
  })
+
  ```
- ", true)}}
+

  ### Training Procedure


  #### Preprocessing [optional]

- {{ preprocessing | default("[trainable params: 4718592 || all params: 3613463424 || trainables%: 0.13058363808693696]", true)}}
+ ```
+
+ trainable params: 4718592 || all params: 3613463424 || trainables%: 0.13058363808693696

+ ```

  #### Training Hyperparameters

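The preprocessing block above only records the PEFT parameter summary; the adapter configuration and prompt template live in the linked notebook, not in the card. For orientation, the reported 4,718,592 trainable parameters equals 32 layers × 16 × (4544 + 4672), which is what LoRA with r=16 on Falcon-7b's fused `query_key_value` projection yields, so a sketch of the data loading and an adapter config consistent with that figure (the formatting template and the alpha/dropout values are assumptions) could look like:

```python
# Sketch of the data and adapter setup implied by the card; hyperparameters below
# are assumptions chosen to match the reported trainable-parameter count.
from datasets import load_dataset
from peft import LoraConfig

# MedQuad QnA dataset cited in the card: columns 'qtype', 'Question', 'Answer', 16,407 rows.
dataset = load_dataset("keivalya/MedQuad-MedicalQnADataset", split="train")

def to_prompt(example):
    # Simple instruction-style formatting; the exact template used for GaiaMiniMed
    # is defined in the linked notebook.
    return {
        "text": f"### Question:\n{example['Question']}\n\n### Answer:\n{example['Answer']}"
    }

dataset = dataset.map(to_prompt)

# r=16 on the fused query_key_value projection gives
# 32 layers * 16 * (4544 + 4672) = 4,718,592 trainable parameters,
# matching the figure quoted under Preprocessing.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,       # assumed
    lora_dropout=0.05,   # assumed
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["query_key_value"],
)
```

Wrapping the base model with this config via `peft.get_peft_model` and calling `print_trainable_parameters()` is the usual way to obtain the `trainable params / all params` line quoted above; the roughly 3.6B "all params" figure is consistent with the base having been loaded in 4-bit (QLoRA-style) for training.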
 
@@ -119,10 +109,11 @@ DatasetDict({

  #### Speeds, Sizes, Times [optional]

- ```
+ ```json

  metrics={'train_runtime': 30766.4612, 'train_samples_per_second': 3.2, 'train_steps_per_second': 0.2,
  'total_flos': 1.1252790565109983e+18, 'train_loss': 1.0597990553941183,", true)}}
+
  ```

  ## Environmental Impact
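As a quick consistency check, the throughput numbers above follow directly from the dataset size and the step count reported earlier in the card:

```python
# Cross-check the logged training metrics against the dataset size (all inputs from the card).
train_rows = 16_407        # MedQuad training split
epochs = 6.0               # 'epoch': 6.0 in TrainOutput
global_steps = 6_150       # global_step in TrainOutput
runtime_s = 30_766.4612    # train_runtime in seconds

effective_batch = train_rows * epochs / global_steps
print(f"effective batch size ~ {effective_batch:.0f}")                 # ~16 samples per optimizer step
print(f"samples per second   ~ {train_rows * epochs / runtime_s:.1f}") # ~3.2, as logged
print(f"steps per second     ~ {global_steps / runtime_s:.2f}")        # ~0.2, as logged
print(f"wall-clock time      ~ {runtime_s / 3600:.1f} h")              # ~8.5 hours
```

The implied effective batch size of about 16 per step would correspond to, for example, a per-device batch of 4 with 4 gradient-accumulation steps; the card does not state the exact split.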
 