AdaptLLM committed on
Commit b2a3c31
1 Parent(s): c8ac95c

Update README.md

Files changed (1)
  1. README.md +10 -9
README.md CHANGED
@@ -38,18 +38,18 @@ For example, to chat with the finance model:
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

- model = AutoModelForCausalLM.from_pretrained("AdaptLLM/finance-chat")
- tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/finance-chat", use_fast=False)
+ model = AutoModelForCausalLM.from_pretrained("AdaptLLM/medicine-chat")
+ tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/medicine-chat", use_fast=False)

  # Put your input here:
- user_input = '''Use this fact to answer the question: Title of each class Trading Symbol(s) Name of each exchange on which registered
- Common Stock, Par Value $.01 Per Share MMM New York Stock Exchange
- MMM Chicago Stock Exchange, Inc.
- 1.500% Notes due 2026 MMM26 New York Stock Exchange
- 1.750% Notes due 2030 MMM30 New York Stock Exchange
- 1.500% Notes due 2031 MMM31 New York Stock Exchange
-
- Which debt securities are registered to trade on a national securities exchange under 3M's name as of Q2 of 2023?'''
+ user_input = '''Question: Which of the following is an example of monosomy?
+ Options:
+ - 46,XX
+ - 47,XXX
+ - 69,XYY
+ - 45,X
+
+ Please provide your choice first and then provide explanations if possible.'''

  # We use the prompt template of LLaMA-2-Chat demo
  prompt = f"<s>[INST] <<SYS>>\nYou are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\n\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\n<</SYS>>\n\n{user_input} [/INST]"
@@ -62,6 +62,7 @@ pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)

  print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}')
  ```
+
  ## Domain-Specific Tasks
  To easily reproduce our results, we have uploaded the filled-in zero/few-shot input instructions and output completions of each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks).
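
Both hunks skip README lines 56-61, so the step that produces `outputs` and `answer_start` is not visible in this diff. Below is a minimal sketch of how that gap is typically bridged with the standard Transformers generation API; the `add_special_tokens=False` and `max_new_tokens` settings and the shortened prompt string are illustrative assumptions, not the README's actual elided lines.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hedged sketch: the README's own lines 56-61 sit outside this diff, so the
# generation settings below are illustrative assumptions, not the repo's exact code.
model = AutoModelForCausalLM.from_pretrained("AdaptLLM/medicine-chat")
tokenizer = AutoTokenizer.from_pretrained("AdaptLLM/medicine-chat", use_fast=False)

user_input = "Question: Which of the following is an example of monosomy? ..."  # any domain question
# In practice, build `prompt` with the full LLaMA-2-Chat system message shown in the README.
prompt = f"<s>[INST] {user_input} [/INST]"

# The template already contains "<s>", so avoid adding a second BOS token here.
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=512)[0]

# Decode only the newly generated tokens, matching the visible README line
# `pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)`.
answer_start = int(inputs.input_ids.shape[-1])
pred = tokenizer.decode(outputs[answer_start:], skip_special_tokens=True)
print(f'### User Input:\n{user_input}\n\n### Assistant Output:\n{pred}')
```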
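The task datasets referenced in the "Domain-Specific Tasks" paragraph can be pulled with the `datasets` library. A minimal sketch follows, assuming each repo exposes one dataset config per task; config names are discovered with `get_dataset_config_names` rather than guessed.

```python
from datasets import get_dataset_config_names, load_dataset

# Hedged sketch: assumes each task repo exposes one dataset config per task.
repo = "AdaptLLM/medicine-tasks"  # likewise AdaptLLM/finance-tasks or AdaptLLM/law-tasks
configs = get_dataset_config_names(repo)
print(configs)  # list the task names the repo actually advertises

# Load the first advertised task and inspect its splits and fields.
ds = load_dataset(repo, configs[0])
print(ds)
```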