BeastGokul committed
Commit 39fd134 · verified · 1 parent: 2297046

Update README.md

Files changed (1): README.md (+1, -77)
README.md CHANGED
@@ -2,89 +2,13 @@
  license: mit
  language:
  - en
- base_model:
- - ContactDoctor/Bio-Medical-MultiModal-Llama-3-8B-V1
- pipeline_tag: text-generation
- tags:
- - biology
- - medical
- ---
-
- # Model Card for Bio-Medical-Llama-3-8B-V1
-
- This model is a fine-tuned version of **Bio-Medical-Llama-3-8B** for generating text related to biomedical knowledge. It is designed to assist in answering health and medical queries, serving as a robust tool for both healthcare professionals and general users.
-
- ---
-
- ## Model Details
-
- ### Model Description
-
- - **Developed by:** ContactDoctor
- - **Funded by:** ContactDoctor Research Lab
- - **Model type:** Text Generation
- - **Language(s) (NLP):** English
- - **License:** MIT
- - **Finetuned from model:** Bio-Medical-MultiModal-Llama-3-8B
-
- This model was created to address the need for accurate, conversational assistance in healthcare, biology, and medical science.
-
- ---
-
- ## Uses
-
- ### Direct Use
-
- Users can employ the model to generate responses to biomedical questions, explanations of medical concepts, and general healthcare advice.
-
- ### Downstream Use
-
- This model can be further fine-tuned for specific tasks, such as diagnosis support, clinical decision-making, and patient education.
-
- ### Out-of-Scope Use
-
- The model should not be used as a substitute for professional medical advice, emergency assistance, or detailed medical diagnoses.
-
- ---
-
- ## Bias, Risks, and Limitations
-
- While the model is trained on extensive biomedical data, it might not cover every condition or the latest advancements. Users are advised to treat responses as informational rather than authoritative.
-
- ### Recommendations
-
- - Use this model for general guidance, not as a substitute for professional advice.
- - Regularly review updates and improvements for the latest accuracy enhancements.
-
- ---
-
- ## How to Get Started with the Model
-
- You can use the model through the Hugging Face API or locally as shown in the example below.
-
- ```python
- from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
-
- # Load the model and tokenizer
- tokenizer = AutoTokenizer.from_pretrained("ContactDoctor/Bio-Medical-Llama-3-8B-V1")
- model = AutoModelForCausalLM.from_pretrained("ContactDoctor/Bio-Medical-Llama-3-8B-V1")
-
- # Initialize the pipeline
- generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
-
- # Generate a response
- response = generator("What is hypertension?", max_length=100)
- print(response[0]["generated_text"])
- ```
- ---
- license: mit
- language:
- - en
  base_model: ContactDoctor/Bio-Medical-MultiModal-Llama-3-8B-V1
  pipeline_tag: text-generation
  tags:
  - biology
  - medical
  - fine-tuning
+ library_name: transformers
  ---

  # Model Card for Fine-Tuned Bio-Medical-Llama-3-8B
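For reference, the net effect of this commit is small despite the -77 line count: it strips the stray duplicated frontmatter and old model-card body from the file and adds a single metadata field. Reconstructed from the context and added lines of the hunk above, the README frontmatter after this commit reads:

```yaml
---
license: mit
language:
- en
base_model: ContactDoctor/Bio-Medical-MultiModal-Llama-3-8B-V1
pipeline_tag: text-generation
tags:
- biology
- medical
- fine-tuning
library_name: transformers
---
```

The added `library_name: transformers` field tells the Hugging Face Hub which library should be used to load the checkpoint, which enables the "Use this model" code snippets and the hosted inference widget on the model page.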