---
license: mit
---

SciPhi-SearchAgent-Alpha-7B Model Card

The SciPhi-SearchAgent-Alpha-7B is a Large Language Model (LLM) fine-tuned from Mistral-7B-v0.1 using retrieval-augmented generation (RAG) over search on a fully synthetic dataset. The objective of this work is to generate accurate, well-cited summaries from a range of search results, providing better-grounded answers to user queries. For best results, follow the prompting guidelines below.

SciPhi-AI is accessible via a free hosted API, though the exposed model can vary; currently, SciPhi-SearchAgent-Alpha-7B is being served. More details can be found in the docs here.

The search interface can be accessed directly at https://search.sciphi.ai/.

Model Architecture

Base Model: Mistral-7B-v0.1

Architecture Features:

  • Transformer-based model
  • Grouped-Query Attention
  • Sliding-Window Attention
  • Byte-fallback BPE tokenizer
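
For local experimentation, the weights can presumably be loaded with the standard Hugging Face transformers API, as in the minimal sketch below. The repository id is an assumption inferred from the model name; substitute the actual id from the Hub.

```python
# Minimal sketch: load the model with Hugging Face transformers.
# The repository id is an assumption inferred from the model name.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SciPhi/SciPhi-SearchAgent-Alpha-7B"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Generate a short completion for a sample query.
inputs = tokenizer("What is Fermat's last theorem?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```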

Using the Model

It is recommended to use a single search query. The model will return an answer using search results as context.
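
The exact prompt layout is defined by the prompting guidelines referenced above; as a purely hypothetical illustration, a RAG-over-search prompt typically packs the retrieved passages into the context ahead of the query, along the lines of the sketch below (the section delimiters are assumptions, not the model's trained format).

```python
# Purely illustrative: one way to assemble search results and a query
# into a single RAG prompt. The delimiters below are assumptions,
# NOT the model's trained format; see the official prompting guidelines.
def build_rag_prompt(query: str, search_results: list[str]) -> str:
    context = "\n\n".join(
        f"[{i + 1}] {passage}" for i, passage in enumerate(search_results)
    )
    return f"### Search Results:\n{context}\n\n### Query:\n{query}\n\n### Response:\n"

print(build_rag_prompt(
    "What is Fermat's last theorem?",
    ["Fermat's Last Theorem states that x^n + y^n = z^n has no positive integer solutions for n > 2."],
))
```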

To use the model, visit https://search.sciphi.ai/, or run a query locally with the following command:

```bash
export SCIPHI_API_KEY=MY_SCIPHI_API_KEY
# Use the SciPhi `SearchAgent` for LLM RAG w/ AgentSearch
python -m agent_search.scripts.run_rag run --query="What is Fermat's last theorem?"
```

See the documentation, linked above, for more information.
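
For programmatic access to the hosted API, a plain HTTP call also works in principle. The snippet below is a hedged sketch only: the endpoint path and payload shape are assumptions for illustration, so consult the linked documentation for the real interface.

```python
# Hedged sketch of querying the hosted API over HTTP with `requests`.
# The endpoint URL and JSON payload are assumptions for illustration only.
import os
import requests

API_KEY = os.environ["SCIPHI_API_KEY"]
url = "https://api.sciphi.ai/search_rag"  # hypothetical endpoint

response = requests.post(
    url,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"query": "What is Fermat's last theorem?"},
    timeout=60,
)
response.raise_for_status()
print(response.json())
```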

Built with Axolotl

References

  1. Mistral AI. (2023). Model Card for Mistral-7B-v0.1. Mistral-7B-v0.1 is a pretrained generative text model with 7 billion parameters that outperforms Llama 2 13B on all benchmarks tested; its architecture is a Transformer with Grouped-Query Attention, Sliding-Window Attention, and a Byte-fallback BPE tokenizer. https://huggingface.co/mistralai/Mistral-7B-v0.1