SrikanthChellappa
committed
Commit dbc6514
Parent(s): 1e35e1a
Update README.md
README.md CHANGED
@@ -1,10 +1,11 @@
 ---
 license: llama3
-library_name:
+library_name: transformers
 tags:
-- trl
-- sft
 - generated_from_trainer
+- medical
+- Healthcare & Lifesciences
+- BioMed
 base_model: meta-llama/Meta-Llama-3-8B-Instruct
 thumbnail: https://collaiborate.com/logo/logo-blue-bg-1.png
 model-index:
@@ -74,13 +75,10 @@ Collaiborator-MEDLLM-Llama-3-8b-v1 was trained using an NVIDIA A40 GPU, which pr
 ## How to use
 
 import transformers
-
 import torch
 
-
 model_id = "collaiborateorg/Collaiborator-MEDLLM-Llama-3-8B-v1"
 
-
 pipeline = transformers.pipeline(
     "text-generation",
     model=model_id,
@@ -88,26 +86,22 @@ pipeline = transformers.pipeline(
     device_map="auto",
 )
 
-
 messages = [
     {"role": "system", "content": "You are an expert trained on healthcare and biomedical domain!"},
     {"role": "user", "content": "I'm a 35-year-old male and for the past few months, I've been experiencing fatigue, increased sensitivity to cold, and dry, itchy skin. What is the diagnosis here?"},
 ]
 
-
 prompt = pipeline.tokenizer.apply_chat_template(
     messages,
     tokenize=False,
     add_generation_prompt=True
 )
 
-
 terminators = [
     pipeline.tokenizer.eos_token_id,
     pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>")
 ]
 
-
 outputs = pipeline(
     prompt,
     max_new_tokens=256,
@@ -116,7 +110,6 @@ outputs = pipeline(
     temperature=0.6,
     top_p=0.9,
 )
-
 print(outputs[0]["generated_text"][len(prompt):])
 
 ### Contact Information
@@ -131,7 +124,7 @@ Website: https://www.collaiborate.com
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0002
-- train_batch_size:
+- train_batch_size: 12
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 4
@@ -139,6 +132,7 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_ratio: 0.03
+- training_steps: 2000
 - mixed_precision_training: Native AMP
 
 ### Framework versions
@@ -151,7 +145,7 @@ The following hyperparameters were used during training:
 
 ### Citation
 
-If you use Collaiborator-MEDLLM-Llama-3-8b
+If you use Collaiborator-MEDLLM-Llama-3-8b in your research or applications, please cite it as follows:
 
 @misc{Collaiborator_MEDLLM,
   author = Collaiborator,
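
For reference, here is the updated "How to use" snippet from this commit assembled into one runnable script. The diff view truncates a few context lines (the `torch_dtype` model kwarg and the `eos_token_id`/`do_sample` generation arguments), so those are assumptions based on the standard Meta-Llama-3-Instruct pipeline recipe; everything else is taken verbatim from the diff.

```python
# Assembled from the model card's "How to use" section. Lines marked
# "assumed" sit in diff context this view truncates and follow the
# standard Llama-3-Instruct recipe rather than the card itself.
import transformers
import torch

model_id = "collaiborateorg/Collaiborator-MEDLLM-Llama-3-8B-v1"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},  # assumed dtype
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an expert trained on healthcare and biomedical domain!"},
    {"role": "user", "content": "I'm a 35-year-old male and for the past few months, I've been experiencing fatigue, increased sensitivity to cold, and dry, itchy skin. What is the diagnosis here?"},
]

# Render the chat as a single prompt string via the model's chat template.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Llama-3 ends assistant turns with <|eot_id|>, so stop on it or on EOS.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,  # assumed: in the truncated hunk context
    do_sample=True,            # assumed: needed for temperature/top_p to apply
    temperature=0.6,
    top_p=0.9,
)
# Drop the echoed prompt and print only the newly generated answer.
print(outputs[0]["generated_text"][len(prompt):])
```

Running this assumes access to the Llama 3 license-gated base weights and enough GPU memory for an 8B model in bf16 (roughly 16 GB).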
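
The hyperparameter edits in this commit (train_batch_size filled in as 12, training_steps: 2000 added) also pin down the effective batch size: 12 per device × 4 gradient-accumulation steps = 48 sequences per optimizer update. As a minimal sketch, the listed values map onto `transformers.TrainingArguments` as follows; the pairing with a TRL `SFTTrainer` run is an assumption inferred from the `trl`/`sft` tags this commit removes, and `output_dir` is hypothetical.

```python
# Sketch only: the values come from the card's hyperparameter list; the
# surrounding trainer setup is assumed, not taken from the commit.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="medllm-llama-3-8b-v1",  # hypothetical path
    learning_rate=2e-4,                 # learning_rate: 0.0002
    per_device_train_batch_size=12,     # train_batch_size: 12
    per_device_eval_batch_size=8,       # eval_batch_size: 8
    seed=42,
    gradient_accumulation_steps=4,      # effective batch: 12 * 4 = 48
    optim="adamw_torch",                # betas=(0.9,0.999), eps=1e-08 are its defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,                  # lr_scheduler_warmup_ratio: 0.03
    max_steps=2000,                     # training_steps: 2000
    fp16=True,                          # mixed_precision_training: Native AMP
)
```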