Update README.md
README.md CHANGED
@@ -45,8 +45,8 @@ Use the code below to get started with the model.
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-tokenizer = AutoTokenizer.from_pretrained("
-model = AutoModelForCausalLM.from_pretrained("
+tokenizer = AutoTokenizer.from_pretrained("2billionbeats/DM-QWEN-2-7B-AVOCADO")
+model = AutoModelForCausalLM.from_pretrained("2billionbeats/DM-QWEN-2-7B-AVOCADO")
 
 input_text = "Your input text here"
 inputs = tokenizer(input_text, return_tensors="pt")

@@ -86,10 +86,6 @@ The evaluation considered various subpopulations and domains within the medical
 
 The evaluation metrics included accuracy, precision, recall, and F1 score, chosen for their relevance in assessing the model's performance in text classification tasks.
 
-### Results
-
-The model achieved an accuracy of [X]%, precision of [Y]%, recall of [Z]%, and F1 score of [W]%.
-
 #### Summary
 
 The model demonstrates strong performance in mapping Chinese medicine concepts to evidence-based medicine, with high accuracy and balanced precision and recall.
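The README's evaluation section names accuracy, precision, recall, and F1 as its metrics. As a point of reference, here is a minimal sketch of how these four numbers are computed for a binary text-classification setup; the label values and the `classification_metrics` helper are illustrative, not part of the model card.

```python
def classification_metrics(y_true, y_pred, positive):
    """Accuracy, precision, recall, and F1 for one positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

# Toy data: gold labels vs. model predictions for a concept-mapping task.
labels = ["match", "match", "no_match", "match", "no_match"]
preds  = ["match", "no_match", "no_match", "match", "no_match"]
acc, prec, rec, f1 = classification_metrics(labels, preds, positive="match")
```

In practice `sklearn.metrics.precision_recall_fscore_support` computes the same quantities with per-class averaging options.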