This is a fine-tuned LLaMA-2 (7B) model. Please accept the LLaMA-2 license agreement before downloading it. This model works with WikiChat v1.0.

Refer to the following for more information:

GitHub repository: https://github.com/stanford-oval/WikiChat

Paper: WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia

WikiChat Logo

WikiChat

Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia

Online demo: https://wikichat.genie.stanford.edu
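For readers who want to experiment with the checkpoint outside the full WikiChat pipeline, a minimal loading sketch with Hugging Face `transformers` might look like the following. This is an illustrative assumption, not part of the official WikiChat code; the real pipeline builds its prompts from retrieved Wikipedia passages, and `load_wikichat_model` is a hypothetical helper name.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "stanford-oval/Llama-2-7b-WikiChat"


def load_wikichat_model(model_id: str = MODEL_ID):
    """Download and load the checkpoint (weights are stored in BF16).

    This is a generic transformers loading sketch, not the loading code
    used by the WikiChat repository itself.
    """
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.bfloat16,  # matches the stored tensor type (BF16)
        device_map="auto",           # requires the `accelerate` package
    )
    return tokenizer, model
```

Once loaded, the pair can be used for ordinary generation, e.g. `tokenizer(text, return_tensors="pt")` followed by `model.generate(...)`; for grounded, hallucination-checked responses, use the full WikiChat pipeline from the GitHub repository instead of calling the model directly.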

WikiChat Pipeline

Model size: 6.74B params · Tensor type: BF16 · Format: Safetensors
