---
license: mit
---

# SciPhi-SearchAgent-Alpha-7B Model Card

SciPhi-SearchAgent-Alpha-7B is a Large Language Model (LLM) fine-tuned from Mistral-7B-v0.1 for retrieval-augmented generation (RAG) over search, using a fully synthetic dataset. The objective of this work is to generate accurate, well-cited summaries from a range of search results, providing more accurate answers to user queries. For best results, follow the prompting guidelines below.

SciPhi-AI is available via a free hosted API, though the exposed model may vary over time; currently, SciPhi-SearchAgent-Alpha-7B is served. More details can be found in the docs [here](https://sciphi.readthedocs.io/en/latest/setup/quickstart.html).

## Model Architecture

Base Model: Mistral-7B-v0.1

**Architecture Features:**
- Transformer-based model
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
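
Since the base model is Mistral-7B-v0.1, these features are inherited from its configuration. The sketch below is illustrative only: it assumes the weights are published under the repository id `SciPhi/SciPhi-SearchAgent-Alpha-7B` (not stated in this card) and inspects the relevant fields with the `transformers` library.

```python
# Illustrative sketch: inspect the architecture features listed above.
# Assumption: the repository id below matches this model's Hugging Face repo.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("SciPhi/SciPhi-SearchAgent-Alpha-7B")

print(config.model_type)           # "mistral" -- Mistral-7B-v0.1 base
print(config.sliding_window)       # sliding-window attention span
print(config.num_attention_heads)  # number of query heads
print(config.num_key_value_heads)  # fewer KV heads than query heads => grouped-query attention
```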

## Using the Model

It is recommended to issue a single search query; the model returns an answer that uses the retrieved search results as context.

To use the model, visit https://search.sciphi.ai/, or run it locally with the following command:

```bash
export SCIPHI_API_KEY=MY_SCIPHI_API_KEY
# Use the SciPhi `SearchAgent` for LLM RAG with AgentSearch
python -m agent_search.scripts.run_rag run --query="What is Fermat's last theorem?"
```

See the documentation, linked above, for more information.
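
If you would rather load the weights directly instead of going through the hosted API, a standard `transformers` generation call should work. The sketch below is illustrative only: the repository id and the context/query prompt layout are assumptions, not official prompting guidelines.

```python
# Illustrative local-inference sketch.
# Assumptions: the repository id and the prompt layout below are not taken from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SciPhi/SciPhi-SearchAgent-Alpha-7B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Pack retrieved search results into the prompt as context, then ask a single query.
search_results = "[1] Fermat's Last Theorem states that x^n + y^n = z^n has no positive integer solutions for n > 2. ..."
prompt = f"### Context:\n{search_results}\n\n### Query:\nWhat is Fermat's last theorem?\n\n### Response:\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```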

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## References

1. Lian, W., Goodson, B., Wang, G., Pentland, E., Cook, A., Vong, C., & Teknium. (2023). MistralOrca: Mistral-7B Model Instruct-tuned on Filtered OpenOrcaV1 GPT-4 Dataset. *HuggingFace repository*. [Link](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
2. Mukherjee, S., Mitra, A., Jawahar, G., Agarwal, S., Palangi, H., & Awadallah, A. (2023). Orca: Progressive Learning from Complex Explanation Traces of GPT-4. *arXiv preprint arXiv:2306.02707*.
3. Longpre, S., Hou, L., Vu, T., Webson, A., Chung, H. W., Tay, Y., Zhou, D., Le, Q. V., Zoph, B., Wei, J., & Roberts, A. (2023). The Flan Collection: Designing Data and Methods for Effective Instruction Tuning. *arXiv preprint arXiv:2301.13688*.
4. Mistral AI. (2023). Model Card for Mistral-7B-v0.1. *HuggingFace repository*. [Link](https://huggingface.co/mistralai/Mistral-7B-v0.1)

## Acknowledgements

Thank you to the [AI Alignment Lab](https://huggingface.co/Alignment-Lab-AI), [vikp](https://huggingface.co/vikp), [jph00](https://huggingface.co/jph00), and others who contributed to this work.