abhishek-ch committed on
Commit
b6f7ffc
1 Parent(s): 0363fb6

Upload folder using huggingface_hub

Files changed (2)
  1. README.md +3 -65
  2. model-00003-of-00003.safetensors +1 -1
README.md CHANGED
@@ -18,7 +18,7 @@ tags:
  - biology
  - mlx
  datasets:
- - health_fact
+ - pubmed
  base_model:
  - BioMistral/BioMistral-7B
  - mistralai/Mistral-7B-Instruct-v0.1
@@ -26,79 +26,17 @@ pipeline_tag: text-generation
  ---

  # abhishek-ch/biomistral-7b-synthetic-ehr
-
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6460910f455531c6be78b2dd/tGtYB0b3eS7A4zbqp1xz0.png)
-
-
  This model was converted to MLX format from [`BioMistral/BioMistral-7B-DARE`]().
  Refer to the [original model card](https://huggingface.co/BioMistral/BioMistral-7B-DARE) for more details on the model.
-
-
  ## Use with mlx

  ```bash
  pip install mlx-lm
  ```

- The model was LoRA fine-tuned on [health_facts](https://huggingface.co/datasets/health_fact) and
- Synthetic EHR dataset inspired by MIMIC-IV using the format below, for 1000 steps (~1M tokens) using mlx.
-
  ```python
- def format_prompt(prompt:str, question: str) -> str:
-     return """<s>[INST]
- ## Instructions
- {}
- ## User Question
- {}.
- [/INST]</s>
- """.format(prompt, question)
- ```
-
- Example For Synthetic EHR Diagnosis System Prompt
- ```
- You are an expert in provide diagnosis summary based on clinical notes inspired by MIMIC-IV-Note dataset.
- These notes encompass Chief Complaint along with Patient Summary & medical admission details.
- ```
-
- Example for Healthfacts Check System Prompt
- ```
- You are a Public Health AI Assistant. You can do the fact-checking of public health claims. \nEach answer labelled with true, false, unproven or mixture. \nPlease provide the reason behind the answer
- ```
+ from mlx_lm import load, generate

- ## Loading the model using `mlx`
-
- ```python
- from mlx_lm import generate, load
  model, tokenizer = load("abhishek-ch/biomistral-7b-synthetic-ehr")
- response = generate(
-     fused_model,
-     fused_tokenizer,
-     prompt=format_prompt(prompt, question),
-     verbose=True,  # Set to True to see the prompt and response
-     temp=0.0,
-     max_tokens=512,
- )
- ```
-
- ## Loading the model using `transformers`
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer
-
- repo_id = "abhishek-ch/biomistral-7b-synthetic-ehr"
-
- tokenizer = AutoTokenizer.from_pretrained(repo_id)
- model = AutoModelForCausalLM.from_pretrained(repo_id)
- model.to("mps")
-
- input_text = format_prompt(system_prompt, question)
- input_ids = tokenizer(input_text, return_tensors="pt").to("mps")
-
- outputs = model.generate(
-     **input_ids,
-     max_new_tokens=512,
- )
- print(tokenizer.decode(outputs[0]))
-
+ response = generate(model, tokenizer, prompt="hello", verbose=True)
  ```
-
 
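A minimal usage sketch for the updated quick-start, combining the prompt template and EHR system prompt from the removed README section with the new `mlx_lm` call; the example question is a placeholder, and `max_tokens=512` follows the removed example:

```python
from mlx_lm import load, generate

def format_prompt(prompt: str, question: str) -> str:
    # Instruction format documented in the removed README section.
    return """<s>[INST]
## Instructions
{}
## User Question
{}.
[/INST]</s>
""".format(prompt, question)

# System prompt quoted from the removed EHR example; the question is a placeholder.
system_prompt = (
    "You are an expert in provide diagnosis summary based on clinical notes "
    "inspired by MIMIC-IV-Note dataset."
)
question = "Summarize the likely diagnosis for the following admission note: ..."

# Load the MLX weights from this repo and generate a response.
model, tokenizer = load("abhishek-ch/biomistral-7b-synthetic-ehr")
response = generate(
    model,
    tokenizer,
    prompt=format_prompt(system_prompt, question),
    max_tokens=512,
    verbose=True,
)
```
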
model-00003-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:dab62f894e01be307be5b477a52a77965a7c3d24d23a703982ba40b9f3a2a552
+ oid sha256:7ce4b3720a2fb5c47e9129cd08a5cce042d30d65a13b9356b3cba34495bfe57a
  size 3869410022