---
inference: false
language:
  - en
tags:
  - instruction-finetuning
pretty_name: JudgeLM-100K
task_categories:
  - text-generation
---

# JudgeLM-33B-v1.0 Model Card

## Model Details

JudgeLM is a judge model trained by fine-tuning Vicuna on the JudgeLM-100K dataset.

  • Developed by: HUST, BAAI
  • Model type: An auto-regressive language model based on the transformer architecture.
  • License: Non-commercial license
  • Finetuned from model: Vicuna.

## Model Sources

## Uses

The primary use of JudgeLM is research on evaluating the performance of large language models and chatbots. Its primary intended users are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.

## How to Get Started with the Model
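
The official JudgeLM repository ships its own inference code; the sketch below is only an illustration of the pairwise-judging workflow. The prompt wording, the "two scores on the first line" output convention, and the helper names are assumptions, not the exact template the model was trained on.

```python
# Illustrative sketch only: the prompt template and output format below are
# assumptions; consult the official JudgeLM repository for the real
# inference code and the exact template used during fine-tuning.

def build_judge_prompt(question: str, answer_a: str, answer_b: str) -> str:
    """Assemble a pairwise judging prompt (hypothetical template)."""
    return (
        "You are a helpful and precise assistant for checking the quality "
        "of the answer.\n"
        f"[Question]\n{question}\n\n"
        f"[Assistant 1]\n{answer_a}\n\n"
        f"[Assistant 2]\n{answer_b}\n\n"
        "Rate both answers from 1 to 10. Output the two scores on the "
        "first line, separated by a space, then explain your reasoning."
    )


def parse_scores(generation: str) -> tuple[float, float]:
    """Pull the two scores off the first line of the judge's output,
    assuming the '8 7'-style first line requested above."""
    first_line = generation.strip().splitlines()[0]
    a, b = first_line.split()[:2]
    return float(a), float(b)
```

In use, the prompt string would be fed to the model (for example via `generate` in a standard causal-LM stack) and the decoded text passed to `parse_scores` to recover the two grades.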

## Training Details

JudgeLM v1.0 is fine-tuned from Vicuna-v1.3 with supervised instruction fine-tuning. The training data comprises around 200K judge samples drawn from the JudgeLM-100K dataset. See the "Fine-tuning Settings" section in the appendix of this paper for more details.

## Evaluation

JudgeLM is evaluated on the JudgeLM validation set, with judgements produced by a GPT-4 teacher. See this paper for more details and try it with the code.
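
One common way to score a judge against its teacher (a generic agreement measure, sketched here as an assumption rather than the exact metric reported in the paper) is the fraction of answer pairs where the fine-tuned judge and GPT-4 pick the same winner, counting ties as their own outcome:

```python
# Generic pairwise-agreement sketch: each element of judge_scores and
# teacher_scores is a (score_for_answer_1, score_for_answer_2) pair.
def agreement(judge_scores, teacher_scores):
    """Fraction of pairs where judge and teacher agree on the winner
    (answer 1 wins, answer 2 wins, or tie)."""
    def winner(pair):
        a, b = pair
        return 0 if a > b else (1 if b > a else 2)  # 2 encodes a tie

    matches = sum(
        winner(j) == winner(t) for j, t in zip(judge_scores, teacher_scores)
    )
    return matches / len(judge_scores)
```

For example, judge scores `(8, 6)` and teacher scores `(7, 4)` agree (both prefer answer 1) even though the raw numbers differ; only the induced preference is compared.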

## Additional Information

## Citation Information

```bibtex
@article{zhu2023judgelm,
    title={JudgeLM: Fine-tuned Large Language Models are Scalable Judges},
    author={Lianghui Zhu and Xinggang Wang and Xinlong Wang},
    year={2023},
    eprint={2310.17631},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```