ZiweiChen commited on
Commit
2fc7a45
1 Parent(s): 40d82a5

Update README.md

Files changed (1): README.md (+39 −16)
README.md CHANGED
@@ -13,25 +13,48 @@ tags:
  ---
  # Model Card for Model ID

- <!-- Provide a quick summary of what the model is/does. -->
-
- This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
-
+ ## How to use
+
+ Load the model from Hugging Face:
+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ tokenizer = AutoTokenizer.from_pretrained("ZiweiChen/BioMistral-Clinical-7B")
+ model = AutoModelForCausalLM.from_pretrained("ZiweiChen/BioMistral-Clinical-7B")
+ ```
+
+ For a lighter memory footprint, the model can also be loaded with 4-bit quantization:
+ ```python
+ from transformers import AutoTokenizer, BitsAndBytesConfig, AutoModelForCausalLM
+ import torch
+
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,
+     bnb_4bit_use_double_quant=True,
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.bfloat16
+ )
+
+ tokenizer = AutoTokenizer.from_pretrained("ZiweiChen/BioMistral-Clinical-7B")
+ model = AutoModelForCausalLM.from_pretrained("ZiweiChen/BioMistral-Clinical-7B", quantization_config=bnb_config)
+ ```
+
+ Generate text:
+ ```python
+ import torch
+
+ # Send inputs to whichever device the model was loaded on
+ model_device = next(model.parameters()).device
+
+ prompt = """
+ How to treat severe obesity?
+ """
+ model_input = tokenizer(prompt, return_tensors="pt").to(model_device)
+
+ with torch.no_grad():
+     output = model.generate(**model_input, max_new_tokens=100)
+ answer = tokenizer.decode(output[0], skip_special_tokens=True)
+ print(answer)
+ ```
  ## Model Details

  ### Model Description

- <!-- Provide a longer summary of what this model is. -->
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
60