---
license: mit
datasets:
- openai/summarize_from_feedback
- openai/webgpt_comparisons
- Dahoas/instruct-synthetic-prompt-responses
- Anthropic/hh-rlhf
- lmsys/chatbot_arena_conversations
- openbmb/UltraFeedback
metrics:
- accuracy
tags:
- pair-ranker
- pair_ranker
- reward_model
- reward-model
- pairrm
- pair-rm
- RLHF
language:
- en
---

Inspired by the [DeBERTa Reward Model Series](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2), `llm-blender/PairRM` is a version of PairRanker fine-tuned specifically as a reward model, using deberta-v3-large as the backbone.

- Github: [https://github.com/yuchenlin/LLM-Blender](https://github.com/yuchenlin/LLM-Blender)
- Paper: [https://arxiv.org/abs/2306.02561](https://arxiv.org/abs/2306.02561)

## Statistics

### Context length

| PairRanker type | Source max length | Candidate max length | Total max length |
|:-----------------:|:-----------------:|:--------------------:|:----------------:|
| [pair-ranker](https://huggingface.co/llm-blender/pair-ranker) | 128 | 128 | 384 |
| [PairRM](https://huggingface.co/llm-blender/pair-reward-model/) (this model) | 1224 | 412 | 2048 |

### Performance

## Usage Example

### Installation

Since PairRanker contains some custom layers and tokens, we recommend using PairRM through our llm-blender code API.

- First, install `llm-blender`:

```bash
pip install git+https://github.com/yuchenlin/LLM-Blender.git
```

- Then load PairRanker with the following code:

```python
import llm_blender
blender = llm_blender.Blender()
blender.loadranker("llm-blender/PairRM")  # load PairRM
```

### Use case 1: Compare responses (Quality Evaluator)

- You can rank candidate responses with the following function:

```python
inputs = ["input1", "input2"]
candidates_texts = [
    ["candidate1 for input1", "candidate2 for input1"],
    ["candidate1 for input2", "candidate2 for input2"],
]
ranks = blender.rank(inputs, candidates_texts, return_scores=False, batch_size=2)
# ranks is a list of ranks where ranks[i][j] is the rank of candidate j for input i
```

- Directly compare two candidate responses:

```python
candidates_A = [cands[0] for cands in candidates_texts]
candidates_B = [cands[1] for cands in candidates_texts]
comparison_results = blender.compare(inputs, candidates_A, candidates_B)
# comparison_results is a list of bool, where comparison_results[i] denotes
# whether candidates_A[i] is better than candidates_B[i] for inputs[i]
```

- Directly compare two multi-turn conversations, given that the user's query in each turn is fixed and only the responses differ:

```python
conv1 = [
    {
        "content": "hello",
        "role": "USER"
    },
    {
        "content": "",
        "role": "ASSISTANT"
    },
    ...
]
conv2 = [
    {
        "content": "hello",
        "role": "USER"
    },
    {
        "content": "",
        "role": "ASSISTANT"
    },
    ...
]
comparison_results = blender.compare_conversations([conv1], [conv2])
# comparison_results is a list of bool, where each element denotes whether
# the responses in conv1 are collectively better than those in conv2
```
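Building on the ranking example above, the snippet below shows one way to select the top-ranked candidate for each input. Only `blender.rank()` is PairRM API; the convention that rank 1 denotes the best candidate is an assumption worth verifying against your llm-blender version, and the variable names are illustrative.

```python
# Continues from the ranking example above (reuses `blender`,
# `inputs`, and `candidates_texts`). Picks the top-ranked candidate
# per input, assuming rank 1 denotes the best candidate (a convention
# to verify against your llm-blender version).
ranks = blender.rank(inputs, candidates_texts, return_scores=False, batch_size=2)

best_candidates = []
for cands, cand_ranks in zip(candidates_texts, ranks):
    cand_ranks = list(cand_ranks)  # works whether ranks come back as lists or arrays
    best_candidates.append(cands[cand_ranks.index(min(cand_ranks))])

print(best_candidates[0])  # the top-ranked response for the first input
```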
### Use case 2: Best-of-n sampling (Decoding Enhancement)

**Best-of-n sampling**, also known as rejection sampling, is a strategy for enhancing response quality by selecting the response that the reward model ranks highest (learn more in [OpenAI WebGPT, section 3.2](https://arxiv.org/pdf/2112.09332.pdf) and the [OpenAI Blog](https://openai.com/research/measuring-goodharts-law)). Best-of-n sampling is an easy way to improve your LLM's outputs with just a few lines of code. An example of applying it to Zephyr follows:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
model = AutoModelForCausalLM.from_pretrained("HuggingFaceH4/zephyr-7b-beta", device_map="auto")

inputs = [...]  # your list of inputs
system_message = {
    "role": "system",
    "content": "You are a friendly chatbot who always responds in the style of a pirate",
}
messages = [
    [
        system_message,
        {"role": "user", "content": _input},
    ]
    for _input in inputs
]
prompts = [tokenizer.apply_chat_template(m, tokenize=False, add_generation_prompt=True) for m in messages]
outputs = blender.best_of_n_generate(model, tokenizer, prompts, n=10)

print("### Prompt:")
print(prompts[0])
print("### best-of-n generations:")
print(outputs[0])
```

### Use case 3: RLHF

PairRM has been trained on various high-quality, large-scale datasets with human preference annotations, and it exhibits strong correlation with human preferences despite an extremely small model size (0.4B), approaching the performance of GPT-4. (See the detailed comparison in 🤗 [PairRM](https://huggingface.co/llm-blender/PairRM).) With the `blender.compare()` function, you can easily apply PairRM to popular RLHF toolkits such as [trl](https://huggingface.co/docs/trl/index); a minimal sketch is given at the end of this card.

**🔥 Check more details on our example Jupyter notebook usage: [`blender_usage.ipynb`](https://github.com/yuchenlin/LLM-Blender/blob/main/blender_usage.ipynb)**

Learn more in our LLM-Blender Github [README.md](https://github.com/yuchenlin/LLM-Blender#rank-and-fusion).

## Citation

If you are using PairRM in your research, please cite LLM-Blender:

```bibtex
@inproceedings{llm-blender-2023,
    title = "LLM-Blender: Ensembling Large Language Models with Pairwise Comparison and Generative Fusion",
    author = "Jiang, Dongfu and Ren, Xiang and Lin, Bill Yuchen",
    booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023)",
    year = "2023"
}
```
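As referenced in Use case 3, here is a minimal sketch of how the documented `blender.compare()` output could label preference pairs in the `prompt`/`chosen`/`rejected` format that trl's `DPOTrainer` consumes. Only `blender.compare()` is PairRM API; the variable names and the two-candidates-per-prompt setup are illustrative assumptions, not part of the library.

```python
# Hedged sketch: turn PairRM comparisons into DPO-style preference data.
# Only blender.compare() is PairRM API; the variable names and the
# two-candidates-per-prompt setup below are illustrative assumptions.
prompts = ["prompt1", "prompt2"]               # your prompts
responses_a = ["response A1", "response A2"]   # first sampled response per prompt
responses_b = ["response B1", "response B2"]   # second sampled response per prompt

# comparison_results[i] is True when responses_a[i] beats responses_b[i]
comparison_results = blender.compare(prompts, responses_a, responses_b)

preference_data = [
    {
        "prompt": p,
        "chosen": a if a_wins else b,
        "rejected": b if a_wins else a,
    }
    for p, a, b, a_wins in zip(prompts, responses_a, responses_b, comparison_results)
]
# `preference_data` follows the prompt/chosen/rejected column layout used by
# trl's DPOTrainer (e.g., loaded via datasets.Dataset.from_list).
```

At 0.4B parameters, labeling pairs this way is lightweight compared to annotating preferences with a large judge model.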