jiey2 committed on
Commit
e8ec16d
1 Parent(s): cd1aa3f

Update README.md

Files changed (1)
  1. README.md +35 -123
README.md CHANGED
@@ -6,197 +6,109 @@ tags:
6
  - medical
7
  - llama-factory
8
  ---
9
- # Model Card for Model ID
10
-
11
- <!-- Provide a quick summary of what the model is/does. -->
12
-
13
- This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
14
 
15
  ## Model Details
16
 
17
  ### Model Description
18
 
19
- <!-- Provide a longer summary of what this model is. -->
20
-
21
-
22
-
23
- - **Developed by:** [More Information Needed]
24
- - **Funded by [optional]:** [More Information Needed]
25
- - **Shared by [optional]:** [More Information Needed]
26
- - **Model type:** [More Information Needed]
27
- - **Language(s) (NLP):** [More Information Needed]
28
- - **License:** [More Information Needed]
29
- - **Finetuned from model [optional]:** [More Information Needed]
30
-
31
- ### Model Sources [optional]
32
-
33
- <!-- Provide the basic links for the model. -->
34
 
35
- - **Repository:** [More Information Needed]
36
- - **Paper [optional]:** [More Information Needed]
37
- - **Demo [optional]:** [More Information Needed]
38
 
39
  ## Uses
40
 
41
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
42
-
43
  ### Direct Use
44
 
45
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
46
-
47
- [More Information Needed]
48
-
49
- ### Downstream Use [optional]
50
-
51
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
52
-
53
- [More Information Needed]
54
 
55
  ### Out-of-Scope Use
56
 
57
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
58
-
59
- [More Information Needed]
60
 
61
  ## Bias, Risks, and Limitations
62
 
63
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
64
-
65
- [More Information Needed]
66
 
67
  ### Recommendations
68
 
69
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
70
-
71
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
72
 
73
  ## How to Get Started with the Model
74
 
75
  Use the code below to get started with the model.
76
 
77
- [More Information Needed]
78
 
79
  ## Training Details
80
 
81
  ### Training Data
82
 
83
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
84
-
85
- [More Information Needed]
86
 
87
  ### Training Procedure
88
 
89
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
90
-
91
  #### Preprocessing [optional]
92
 
93
- [More Information Needed]
94
-
95
 
96
  #### Training Hyperparameters
97
 
98
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
99
-
100
- #### Speeds, Sizes, Times [optional]
101
-
102
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
103
-
104
- [More Information Needed]
105
 
106
  ## Evaluation
107
 
108
- <!-- This section describes the evaluation protocols and provides the results. -->
109
-
110
  ### Testing Data, Factors & Metrics
111
 
112
  #### Testing Data
113
 
114
- <!-- This should link to a Dataset Card if possible. -->
115
-
116
- [More Information Needed]
117
 
118
  #### Factors
119
 
120
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
121
-
122
- [More Information Needed]
123
 
124
  #### Metrics
125
 
126
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
127
-
128
- [More Information Needed]
129
 
130
  ### Results
131
 
132
- [More Information Needed]
133
 
134
  #### Summary
135
 
 
136
 
 
137
 
138
- ## Model Examination [optional]
139
-
140
- <!-- Relevant interpretability work for the model goes here -->
141
-
142
- [More Information Needed]
143
-
144
- ## Environmental Impact
145
-
146
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
147
 
148
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
149
-
150
- - **Hardware Type:** [More Information Needed]
151
- - **Hours used:** [More Information Needed]
152
- - **Cloud Provider:** [More Information Needed]
153
- - **Compute Region:** [More Information Needed]
154
- - **Carbon Emitted:** [More Information Needed]
155
-
156
- ## Technical Specifications [optional]
157
 
158
  ### Model Architecture and Objective
159
 
160
- [More Information Needed]
161
 
162
  ### Compute Infrastructure
163
 
164
- [More Information Needed]
165
-
166
  #### Hardware
167
 
168
- [More Information Needed]
169
 
170
  #### Software
171
 
172
- [More Information Needed]
173
-
174
- ## Citation [optional]
175
-
176
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
177
-
178
- **BibTeX:**
179
-
180
- [More Information Needed]
181
-
182
- **APA:**
183
-
184
- [More Information Needed]
185
-
186
- ## Glossary [optional]
187
-
188
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
189
-
190
- [More Information Needed]
191
-
192
- ## More Information [optional]
193
-
194
- [More Information Needed]
195
-
196
- ## Model Card Authors [optional]
197
-
198
- [More Information Needed]
199
-
200
- ## Model Card Contact
201
-
202
- [More Information Needed]
 
6
  - medical
7
  - llama-factory
8
  ---
9
+ # Model Card for DMX-QWEN-2-7B-AVOCADO
10
 
11
  ## Model Details
12
 
13
  ### Model Description
14
 
15
+ DMX-QWEN-2-7B-AVOCADO is a specialized model based on Qwen-2-7b, fine-tuned with LoRA (Low-Rank Adaptation) adapters that were then merged back into the base model. It has been trained specifically to map traditional Chinese medicine concepts to evidence-based medicine.
16
 
17
+ - **Developed by:** 2billionbeats Limited
18
+ - **Model type:** LoRA fine-tuned transformer model
19
+ - **Language(s) (NLP):** Chinese, English
20
+ - **License:** MIT
21
+ - **Finetuned from model:** Qwen-2-7b
22
 
23
  ## Uses
24
 
25
  ### Direct Use
26
 
27
+ This model can be used directly for tasks that involve mapping Chinese medicine concepts to evidence-based medicine terminologies and practices. It can be employed in applications such as medical text analysis, clinical decision support, and educational tools for traditional Chinese medicine.
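+
+ As an illustration only, a prompt pattern for this mapping use case might look like the sketch below; the repository id and the prompt wording are assumptions, not a published interface.
+
+ ```python
+ # Hypothetical example of the direct-use pattern described above.
+ from transformers import pipeline
+
+ generator = pipeline("text-generation", model="your-model-repo/DMX-QWEN-2-7B-AVOCADO")
+ prompt = (
+     "Map the traditional Chinese medicine concept '气虚' (qi deficiency) "
+     "to related evidence-based medicine terminology:"
+ )
+ result = generator(prompt, max_new_tokens=128)
+ print(result[0]["generated_text"])
+ ```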
28
 
29
  ### Out-of-Scope Use
30
 
31
+ This model is not designed for general-purpose language tasks outside the specified domain of Chinese medicine and evidence-based medicine. It should not be used for critical medical decision-making without proper human oversight.
32
 
33
  ## Bias, Risks, and Limitations
34
 
35
+ This model may contain biases present in the training data, particularly those related to cultural perspectives on medicine. It should not be used as the sole source of medical advice or decision-making, and its limitations in accurately representing both Chinese and evidence-based medical concepts should be recognized.
36
 
37
  ### Recommendations
38
 
39
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. It is recommended to use this model in conjunction with other medical resources and professional expertise.
40
 
41
  ## How to Get Started with the Model
42
 
43
  Use the code below to get started with the model.
44
 
45
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Load the tokenizer and the merged model weights
+ tokenizer = AutoTokenizer.from_pretrained("your-model-repo/DMX-QWEN-2-7B-AVOCADO")
+ model = AutoModelForCausalLM.from_pretrained("your-model-repo/DMX-QWEN-2-7B-AVOCADO")
+
+ # Tokenize a prompt and generate a response
+ input_text = "Your input text here"
+ inputs = tokenizer(input_text, return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=256)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
56
 
57
  ## Training Details
58
 
59
  ### Training Data
60
 
61
+ The model was trained on a dataset specifically curated to include mappings between Chinese medicine and evidence-based medicine. [Link to the Dataset Card]
62
 
63
  ### Training Procedure
64
 
65
  #### Preprocessing [optional]
66
 
67
+ The training data underwent preprocessing to ensure the accurate representation of both Chinese medicine and evidence-based medicine terminologies.
 
68
 
69
  #### Training Hyperparameters
70
 
71
+ - **Training regime:** fp16 mixed precision
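+
+ The full fine-tuning configuration is not documented here. The sketch below illustrates what a LoRA fine-tune in fp16 mixed precision can look like with the Hugging Face `peft` and `transformers` libraries; the base checkpoint id, adapter rank, target modules, and optimizer settings are assumptions rather than the values used for this model.
+
+ ```python
+ # Illustrative LoRA + fp16 mixed-precision setup (assumed hyperparameters).
+ from transformers import AutoModelForCausalLM, TrainingArguments
+ from peft import LoraConfig, get_peft_model
+
+ base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-7B")  # assumed base checkpoint
+
+ lora_config = LoraConfig(
+     r=16,                                              # assumed adapter rank
+     lora_alpha=32,
+     lora_dropout=0.05,
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
+     task_type="CAUSAL_LM",
+ )
+ model = get_peft_model(base, lora_config)
+
+ training_args = TrainingArguments(
+     output_dir="dmx-avocado-lora",                     # hypothetical output path
+     fp16=True,                                         # fp16 mixed precision, as stated above
+     per_device_train_batch_size=4,
+     gradient_accumulation_steps=8,
+     learning_rate=1e-4,
+     num_train_epochs=3,
+ )
+ ```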
72
 
73
  ## Evaluation
74
 
75
  ### Testing Data, Factors & Metrics
76
 
77
  #### Testing Data
78
 
79
+ The model was evaluated using a separate test set containing mappings between Chinese and evidence-based medicine. [Link to Dataset Card]
80
 
81
  #### Factors
82
 
83
+ The evaluation considered various subpopulations and domains within the medical texts to ensure broad applicability.
84
 
85
  #### Metrics
86
 
87
+ The evaluation metrics included accuracy, precision, recall, and F1 score, chosen for their relevance in assessing the model's performance in text classification tasks.
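+
+ As a minimal sketch of how these metrics can be computed when the mapping task is scored as classification (the label lists below are hypothetical):
+
+ ```python
+ # Illustrative metric computation with scikit-learn; not the actual evaluation code.
+ from sklearn.metrics import accuracy_score, precision_recall_fscore_support
+
+ y_true = ["hypertension", "diabetes", "insomnia"]      # hypothetical reference mappings
+ y_pred = ["hypertension", "hypertension", "insomnia"]  # hypothetical model outputs
+
+ accuracy = accuracy_score(y_true, y_pred)
+ precision, recall, f1, _ = precision_recall_fscore_support(
+     y_true, y_pred, average="macro", zero_division=0
+ )
+ print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")
+ ```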
88
 
89
  ### Results
90
 
91
+ The model achieved an accuracy of [X]%, precision of [Y]%, recall of [Z]%, and F1 score of [W]%.
92
 
93
  #### Summary
94
 
95
+ The model demonstrates strong performance in mapping Chinese medicine concepts to evidence-based medicine, with high accuracy and balanced precision and recall.
96
 
97
+ ## Model Examination
98
 
99
+ Further interpretability work is needed to understand the model's decision-making process better.
100
 
101
 
102
  ### Model Architecture and Objective
103
 
104
+ The model is based on the Qwen-2-7b architecture, fine-tuned using LoRA to adapt it for the specific task of mapping Chinese medicine to evidence-based medicine.
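+
+ A minimal sketch of the adapter-merge step described above, using the `peft` library; the adapter path and output directory are hypothetical:
+
+ ```python
+ # Merge LoRA adapters back into the base weights (illustrative only).
+ from transformers import AutoModelForCausalLM
+ from peft import PeftModel
+
+ base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-7B")     # assumed base checkpoint
+ model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # hypothetical adapter path
+ merged = model.merge_and_unload()  # fold the low-rank updates into the base weights
+ merged.save_pretrained("DMX-QWEN-2-7B-AVOCADO-merged")           # hypothetical output dir
+ ```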
105
 
106
  ### Compute Infrastructure
107
 
108
  #### Hardware
109
 
110
+ The training was conducted on NVIDIA A100 GPUs.
111
 
112
  #### Software
113
 
114
+ The training utilized PyTorch and the Hugging Face Transformers library.