
Llama-se-rl-adapter

Adapter weights of an RL fine-tuned model based on LLaMA. Authored by Edward Beeching, Younes Belkada, Kashif Rasul, Lewis Tunstall and Leandro von Werra.

Model Description

Llama-se-rl is a LLaMA-based model that was first fine-tuned on the Stack Exchange dataset and then RL fine-tuned using a Stack Exchange reward model. The dataset consists of questions and answers from various Stack Exchange domains, such as programming, mathematics, and physics. The model is designed to generate human-like responses to questions in these domains. It has been trained to respond to prompts with the following template (a usage sketch follows below):

Question: <Query> 

Answer: <Response>
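As a rough usage sketch, the adapter can be loaded on top of a base LLaMA checkpoint with PEFT and prompted using the template above. The base model and adapter repository names below are assumptions for illustration, not identifiers confirmed by this card; substitute the checkpoints you actually have access to.

```python
# Minimal sketch: load a base LLaMA model, attach the RL fine-tuned adapter,
# and generate an answer using the Question/Answer prompt template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_name = "huggyllama/llama-7b"   # assumed base LLaMA checkpoint
adapter_name = "trl-lib/llama-se-rl-adapter"  # assumed adapter repository

tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    torch_dtype=torch.float16,
    device_map="auto",
)
# Attach the adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, adapter_name)

# Build a prompt following the template from this model card.
prompt = "Question: How do I reverse a list in Python?\n\nAnswer: "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.9,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The adapter is not a standalone model, so generation quality depends on using the same base LLaMA checkpoint the adapter was trained against.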

Intended Uses & Limitations

Llama-se-rl is intended for use in generating responses to questions related to the Stack Exchange dataset. It is suitable for generating answers to questions in the domains covered by the dataset, such as programming, mathematics, and physics. However, the model may not perform well on questions outside these domains or on questions requiring highly specific or technical knowledge.

Limitations and Bias

The Llama-se-rl model inherits limitations and biases from the Llama model and also those contained in the Stack Exchange dataset. The Stack Exchange dataset may contain biases in terms of the topics it covers and the users who contribute to it. It may not include all possible domains, and the quality of answers may vary. Additionally, the model may generate answers that are incorrect or misleading due to biases in the training data or the inherent limitations of the Llama architecture.

BibTeX entry and citation info

@misc{beeching2023llama,
  title={StackLLaMA: An RL Fine-tuned LLaMA Model for Stack Exchange Question and Answering},
  author={Beeching, Edward and Belkada, Younes and Rasul, Kashif and Tunstall, Lewis and von Werra, Leandro},
  year={2023}
}