Automated Peer Reviewing in Paper SEA: Standardization, Evaluation, and Analysis

Paper Link: https://arxiv.org/abs/2407.12857

Project Page: https://ecnu-sea.github.io/

πŸ”₯ News

  • πŸ”₯πŸ”₯πŸ”₯ SEA is accepted by EMNLP 2024!
  • πŸ”₯πŸ”₯πŸ”₯ We have made the SEA series models (7B) public!

Model Description

⚠️ This is the SEA-S model for content standardization, and the review model SEA-E can be found here.

The SEA-S model integrates all reviews of a paper into a single review that eliminates redundancy and errors and focuses on the paper's major advantages and disadvantages. Specifically, we first use GPT-4 to merge the multiple reviews of each paper (from ECNU-SEA/SEA_data) into one review written in a unified format, with consistent criteria and constructive content, forming an instruction dataset for SFT. We then fine-tune Mistral-7B-Instruct-v0.2 on this dataset to distill the knowledge of GPT-4. SEA-S thus provides a novel paradigm for integrating peer review data in a unified format across various conferences.
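Below is a minimal inference sketch using the standard `transformers` API. It assumes the model follows the Mistral-7B-Instruct-v0.2 chat template; the instruction text and the example reviews are placeholders, not the exact prompt format used during SFT.

```python
# Minimal inference sketch for ECNU-SEA/SEA-S.
# Assumptions: standard transformers API; the instruction wording below is a
# placeholder, not necessarily the official SFT prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ECNU-SEA/SEA-S"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Multiple raw reviews of the same paper (placeholders).
reviews = [
    "Review 1: The paper proposes ... Strengths: ... Weaknesses: ...",
    "Review 2: The method is interesting, but the experiments ...",
]

# Hypothetical instruction; replace with the prompt used for SFT if available.
prompt = (
    "Integrate the following peer reviews of one paper into a single "
    "standardized review covering its major advantages and disadvantages:\n\n"
    + "\n\n".join(reviews)
)

# SEA-S is fine-tuned from Mistral-7B-Instruct-v0.2, so its chat template should apply.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)

with torch.no_grad():
    outputs = model.generate(inputs, max_new_tokens=1024, do_sample=False)

print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```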

Citation

@inproceedings{yu2024automated,
  title={Automated Peer Reviewing in Paper SEA: Standardization, Evaluation, and Analysis},
  author={Yu, Jianxiang and Ding, Zichen and Tan, Jiaqi and Luo, Kangyang and Weng, Zhenmin and Gong, Chenghua and Zeng, Long and Cui, RenJing and Han, Chengcheng and Sun, Qiushi and others},
  booktitle={Findings of the Association for Computational Linguistics: EMNLP 2024},
  pages={10164--10184},
  year={2024}
}

Dataset used to train ECNU-SEA/SEA-S: ECNU-SEA/SEA_data