pradhaph committed
Commit 10f53dc
1 Parent(s): 424bc3d

readme updated

Files changed (1)
  1. README.md +81 -0
README.md CHANGED
---
license: mit
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- medical
- clinical
---
# Model Card for medical-falcon-7b

## Model Details

### Model Description

This model is a fine-tuned version of [TheBloke/samantha-falcon-7B-GPTQ](https://huggingface.co/TheBloke/samantha-falcon-7B-GPTQ) for text generation tasks in the medical domain.

- **Developed by:** Pradhaph
- **Model type:** Fine-tuned model based on samantha-falcon-7B-GPTQ
- **Language(s) (NLP):** English
- **License:** MIT

### Model Sources

- **Repository:** [pradhaph/medical-falcon-7b](https://huggingface.co/pradhaph/medical-falcon-7b)
- **Demo:** Available soon

## Uses

### Direct Use

This model can be used for text generation tasks in the medical domain, such as generating medical reports or answering medical questions.

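As a quick illustration of direct use, here is a minimal sketch built on the `transformers` `pipeline` API. The repo id comes from the Model Sources section above and the prompt is purely illustrative; depending on the `transformers` version, Falcon-based and GPTQ-quantized checkpoints may additionally need `trust_remote_code=True` and the `auto-gptq`/`optimum` packages.

```python
from transformers import pipeline

# Minimal sketch: a text-generation pipeline over this checkpoint.
# The prompt below is an example only, not a clinical recommendation format.
generator = pipeline(
    "text-generation",
    model="pradhaph/medical-falcon-7b",
    device_map="auto",  # requires the `accelerate` package
)

prompt = (
    "Patient presents with a persistent dry cough and mild fever for five days.\n"
    "Question: What follow-up questions should a clinician ask?\n"
)
result = generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.7, top_p=0.9)
print(result[0]["generated_text"])
```
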
### Downstream Use

This model can be fine-tuned for specific medical text generation tasks or integrated into larger healthcare systems.

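For the fine-tuning route, one possible starting point is a small LoRA run with `peft`. This is only a sketch: it assumes the `peft` and `datasets` packages, a trainable copy of the weights (the GPTQ-quantized checkpoint may instead need a GPTQ-aware PEFT setup), and a hypothetical `medical_train.jsonl` file with a `text` column; all hyperparameters are placeholders.

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_id = "pradhaph/medical-falcon-7b"  # or a local path to the downloaded weights
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # Falcon tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach small trainable LoRA adapters instead of updating all 7B parameters.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["query_key_value"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Hypothetical dataset: one medical prompt/response pair per line in a "text" field.
dataset = load_dataset("json", data_files="medical_train.jsonl", split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                      remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="falcon-medical-lora", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1,
                           learning_rate=2e-4, logging_steps=10),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("falcon-medical-lora")  # saves only the LoRA adapter weights
```
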
### Out-of-Scope Use

This model may not perform well on tasks outside the medical domain.

## Bias, Risks, and Limitations

Running this model requires more than 7 GB of GPU VRAM and 12 GB of CPU RAM.

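A quick pre-flight check against these requirements can save a failed load. A small sketch, assuming the `psutil` package is installed; the 7 GB and 12 GB thresholds are the figures quoted above.

```python
import psutil
import torch

MIN_VRAM_GB = 7.0   # GPU memory quoted above
MIN_RAM_GB = 12.0   # CPU memory quoted above

ram_gb = psutil.virtual_memory().total / 1024**3
print(f"CPU RAM: {ram_gb:.1f} GB ({'ok' if ram_gb >= MIN_RAM_GB else 'below requirement'})")

if torch.cuda.is_available():
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"GPU VRAM: {vram_gb:.1f} GB ({'ok' if vram_gb >= MIN_VRAM_GB else 'below requirement'})")
else:
    print("No CUDA device detected; the stated GPU requirement cannot be met.")
```
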
## How to Get Started with the Model

```python
# Install dependencies
!pip install transformers==4.31.0 sentence_transformers==2.2.2

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# 1. Load the model
loaded_model_path = r"path_to_downloaded_model"
model = AutoModelForCausalLM.from_pretrained(loaded_model_path)

# 2. Initialize the tokenizer
tokenizer = AutoTokenizer.from_pretrained(loaded_model_path)

# 3. Prepare input
context = "The context you want to provide to the model."
question = "The question you want to ask the model."
input_text = f"{context}\nQuestion: {question}\n"

# 4. Tokenize input
inputs = tokenizer(input_text, return_tensors="pt")

# 5. Model inference
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_length=512,          # Adjust max_length as needed
        do_sample=True,          # Sampling must be enabled for temperature/top_p to take effect
        temperature=0.7,         # Adjust temperature for randomness in sampling
        top_p=0.9,               # Adjust top_p for nucleus sampling
        num_return_sequences=1,  # Number of sequences to generate
    )

# 6. Decode and print the output
generated_texts = [tokenizer.decode(output, skip_special_tokens=True) for output in outputs]
print("Generated Texts:")
for text in generated_texts:
    print(text)
```
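
If the weights have not been downloaded locally, the model can also be loaded straight from the Hub. A minimal sketch, assuming the `accelerate` package for `device_map="auto"`; as above, older `transformers` versions may also need `trust_remote_code=True` and the `auto-gptq`/`optimum` packages for Falcon/GPTQ checkpoints.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "pradhaph/medical-falcon-7b"  # repo id from the Model Sources section

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",  # places layers on the available GPU(s)/CPU automatically
)
```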