ddellapietra committed
Commit: 9bf125a
1 Parent(s): 9cbb2dd

Update README.md

Files changed (1)
  1. README.md +10 -10
README.md CHANGED
@@ -4,13 +4,13 @@
 
 # Sensei-7B-V1 Model Card
 
-Sensei-7B-V1 is a Large Language Model (LLM) fine-tuned from Mistral-7B-v0.1. This model was fine-tuned with a fully synthetic dataset to specialize in performing retrieval-augmented generation (RAG) over detailed web search results. The model specializes in using search, such as [AgentSearch](https://huggingface.co/datasets/SciPhi/AgentSearch-V1), to generate accurate and well-cited summaries from a range of search results, providing more accurate answers to user queries. Please refer to the [docs here](https://agent-search.readthedocs.io/en/latest/) for more information on how to run Sensei end-to-end.
+Sensei-7B-V1 is a Large Language Model (LLM) fine-tuned from OpenPipe's mistral-ft-optimized-1218, which is based on Mistral-7B. Sensei-7B-V1 was fine-tuned with a fully synthetic dataset to specialize in performing retrieval-augmented generation (RAG) over detailed web search results. The model specializes in using search, such as [AgentSearch](https://huggingface.co/datasets/SciPhi/AgentSearch-V1), to generate accurate and well-cited summaries from a range of search results, providing more accurate answers to user queries. Please refer to the [docs here](https://agent-search.readthedocs.io/en/latest/) for more information on how to run Sensei end-to-end.
 
 Currently, Sensei is available via a hosted API at https://www.sciphi.ai. You can try a demonstration [here](https://search.sciphi.ai/).
 
 ## Model Architecture
 
-Base Model: Mistral-7B-v0.1
+Base Model: mistral-ft-optimized-1218
 
 **Architecture Features:**
 - Transformer-based model
@@ -33,17 +33,17 @@ python -m agent_search.scripts.run_rag run --query="What is Fermat's last theore
 Alternatively, you may provide your own search context directly to the model by adhering to the following format:
 
 ```
-### Instruction:
-Your task is to perform retrieval augmented generation (RAG) over the given query and search results. Return your answer with three sections `My Work`, `My Answer`, and `My Further Considerations`.
+### Instruction:
+Your task is to perform retrieval augmented generation (RAG) over the given query and search results. Return your answer in a json format that includes a summary of the search results and a list of related queries.
 
 Query:
-{query}
-
+{prompt}
+\n\n
 Search Results:
-{search_results}
-
+{context}
+\n\n
 Query:
-{query}
+{prompt}
 
 ### Response:
 {"summary":
@@ -55,4 +55,4 @@ __Note__: The inclusion of the text '{"summary":' following the Response footer
 
 ## References
 
-1. Mistral AI. (2023). Model Card for Mistral-7B-v0.1. The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks tested. For full details, please refer to the paper and release blog post. Model Architecture: Transformer with Grouped-Query Attention, Sliding-Window Attention, and Byte-fallback BPE tokenizer. [Link](https://huggingface.co/mistralai/Mistral-7B-v0.1)
+1. OpenPipe AI. (2023). Model Card for mistral-ft-optimized-1218. The mistral-ft-optimized-1218 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters, optimized for downstream fine-tuning on a variety of tasks. For full details, please refer to the release blog post. Model Architecture: Transformer with Grouped-Query Attention, Sliding-Window Attention, and Byte-fallback BPE tokenizer. [Link](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218)
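
For illustration, the updated prompt template introduced in this commit can be filled in and sent to the model locally. The sketch below is a minimal example and not part of the commit: it assumes the model is published as `SciPhi/Sensei-7B-V1` on the Hugging Face Hub, uses plain `transformers` generation rather than the agent-search tooling described in the linked docs, treats the `\n\n` markers in the template as blank-line separators, and pre-fills the response with `{"summary":` as suggested by the README's note. Generation settings are placeholders.

```python
# Minimal sketch (assumptions noted above): fill in the updated Sensei prompt template
# and generate locally with Hugging Face transformers. The repository id, the use of
# plain transformers, and the sampling settings are all assumptions, not part of the commit.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SciPhi/Sensei-7B-V1"  # assumed Hub repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

query = "What is Fermat's last theorem?"
search_results = "1. https://en.wikipedia.org/wiki/Fermat%27s_Last_Theorem - Fermat's Last Theorem states that ..."  # your own search context

# Build the prompt following the card's template (the '\n\n' markers are rendered here
# as blank lines), ending with '{"summary":' so the model continues the JSON object.
prompt = (
    "### Instruction:\n"
    "Your task is to perform retrieval augmented generation (RAG) over the given query "
    "and search results. Return your answer in a json format that includes a summary "
    "of the search results and a list of related queries.\n\n"
    f"Query:\n{query}\n\n"
    f"Search Results:\n{search_results}\n\n"
    f"Query:\n{query}\n\n"
    "### Response:\n"
    '{"summary":'
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=512, do_sample=False)
completion = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print('{"summary":' + completion)  # re-attach the pre-filled prefix to recover the full JSON
```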