m4r1 committed
Commit 5f851b6
Parent: 93b06dd

Update README.md

Files changed (1)
  1. README.md +0 -46
README.md CHANGED
@@ -3,52 +3,6 @@ license: mit
  ---
  4 bit quantization of: https://huggingface.co/selfrag/selfrag_llama2_7b
 
- ## Usage
- Here, we show an easy way to quickly download our model from HuggingFace and run it with `vllm` on pre-given passages. Make sure to install the dependencies listed in [self-rag/requirements.txt](https://github.com/AkariAsai/self-rag/blob/main/requirements.txt).
- To run our full inference pipeline with a retrieval system and fine-grained tree decoding, please use [our code](https://github.com/AkariAsai/self-rag).
-
- ```py
- from transformers import AutoTokenizer, AutoModelForCausalLM
- from vllm import LLM, SamplingParams
-
- model = LLM("selfrag/selfrag_llama2_7b", download_dir="/gscratch/h2lab/akari/model_cache", dtype="half")
- sampling_params = SamplingParams(temperature=0.0, top_p=1.0, max_tokens=100, skip_special_tokens=False)
-
- def format_prompt(input, paragraph=None):
-     prompt = "### Instruction:\n{0}\n\n### Response:\n".format(input)
-     if paragraph is not None:
-         prompt += "[Retrieval]<paragraph>{0}</paragraph>".format(paragraph)
-     return prompt
-
- query_1 = "Leave odd one out: twitter, instagram, whatsapp."
- query_2 = "Can you tell me the difference between llamas and alpacas?"
- queries = [query_1, query_2]
-
- preds = model.generate([format_prompt(query) for query in queries], sampling_params)
- for pred in preds:
-     print("Model prediction: {0}".format(pred.outputs[0].text))
- # Model prediction: Twitter, Instagram, and WhatsApp are all social media platforms.[No Retrieval]WhatsApp is the odd one out because it is a messaging app, while Twitter and Instagram are primarily used for sharing photos and videos.[Utility:5]</s> (this query doesn't require factual grounding; just skip retrieval and do normal instruction-following generation)
- # Model prediction: Sure![Retrieval]<paragraph> ... (this query requires factual grounding, call a retriever)
-
- # generate with a retrieved passage
- prompt = format_prompt("Can you tell me the difference between llamas and alpacas?", paragraph="The alpaca (Lama pacos) is a species of South American camelid mammal. It is similar to, and often confused with, the llama. Alpacas are considerably smaller than llamas, and unlike llamas, they were not bred to be working animals, but were bred specifically for their fiber.")
- preds = model.generate([prompt], sampling_params)
- print([pred.outputs[0].text for pred in preds])
- # ['[Relevant]Alpacas are considerably smaller than llamas, and unlike llamas, they were not bred to be working animals, but were bred specifically for their fiber.[Fully supported][Utility:5]</s>']
- ```
-
- ## Input Format
- As described in the `format_prompt` function, your input should be formatted as
- ```
- ### Instruction:\n{instruction}\n\n### Response:\n
- ```
- or
- ```
- ### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n
- ```
- if you have additional input.
- You can insert paragraphs anywhere after `### Response:\n`, but make sure to wrap them in paragraph tokens (i.e., `<paragraph>{0}</paragraph>`).
-
  ## Citation and contact
  If you use this model, please cite our work:
  ```