duyntnet committed
Commit 0c1a400
1 Parent(s): a8be088

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md (+55, -0)
README.md ADDED

---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Asclepius-Llama2-13B
---
Quantizations of https://huggingface.co/starmpcc/Asclepius-Llama2-13B
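
If you prefer to script the download, huggingface_hub can fetch a single quant file. A minimal sketch; the `repo_id` and `filename` below are assumptions, so substitute the actual names of this repo and of the quantization you want:

```python
from huggingface_hub import hf_hub_download

# repo_id and filename are illustrative -- check the repo's file list for real names
path = hf_hub_download(
    repo_id="duyntnet/Asclepius-Llama2-13B-imatrix-GGUF",
    filename="Asclepius-Llama2-13B.Q4_K_M.gguf",
)
print(path)  # local path of the cached GGUF file
```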

### Inference Clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [JanAI](https://github.com/janhq/jan)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [ollama](https://github.com/ollama/ollama)
* [GPT4All](https://github.com/nomic-ai/gpt4all)

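Any of the clients above can serve the GGUF files; for a quick scripted check, the llama-cpp-python bindings also work. A minimal sketch, assuming a quant file already downloaded locally (the path is illustrative):

```python
from llama_cpp import Llama

# Point model_path at whichever quant file you downloaded
llm = Llama(model_path="Asclepius-Llama2-13B.Q4_K_M.gguf", n_ctx=4096)

# For real use, fill in the discharge-summary prompt template shown below
out = llm("You are an intelligent clinical language model. ...", max_tokens=256)
print(out["choices"][0]["text"])
```
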
---

# From original readme

## How to Get Started with the Model

```python
prompt = """You are an intelligent clinical language model.
Below is a snippet of a patient's discharge summary and a following instruction from a healthcare professional.
Write a response that appropriately completes the instruction.
The response should provide the accurate answer to the instruction, while being concise.

[Discharge Summary Begin]
{note}
[Discharge Summary End]

[Instruction Begin]
{question}
[Instruction End]
"""
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and the full-precision model
tokenizer = AutoTokenizer.from_pretrained("starmpcc/Asclepius-Llama2-13B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("starmpcc/Asclepius-Llama2-13B")

note = "This is a sample note"
question = "What is the diagnosis?"

# Fill the prompt template and generate a response
model_input = prompt.format(note=note, question=question)
input_ids = tokenizer(model_input, return_tensors="pt").input_ids
output = model.generate(input_ids)
print(tokenizer.decode(output[0]))
```
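
One caveat about the snippet above: with no generation arguments, transformers falls back to its default `max_length` of 20 tokens, so the decoded response will usually be cut off. Passing an explicit budget avoids that:

```python
# Allow a longer answer than the default 20-token cap
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```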