
Model Card for devloverumar/chatgpt-content-detector

This model is trained on a mix of full-text answers and split sentences from Hello-SimpleAI/HC3.

For more details, refer to the paper arXiv: 2301.07597 and the GitHub project Hello-SimpleAI/chatgpt-comparison-detection.

The base checkpoint is roberta-base. We train it on all Hello-SimpleAI/HC3 data (with no held-out set) for 1 epoch.

(Training for 1 epoch is consistent with the experiments in our paper.)
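As a sequence-classification checkpoint, the model can be loaded with the Hugging Face transformers text-classification pipeline. The sketch below is an illustration, not part of the original card: the helper `label_text` and the sample input are hypothetical, and label names depend on the checkpoint's config.

```python
# Minimal usage sketch (assumption, not from the model card): querying the
# detector through the transformers text-classification pipeline.
def label_text(text, detector):
    """Return (label, score) for a single input text.

    `detector` is any callable with the transformers pipeline interface:
    it takes a string and returns a list of {"label": ..., "score": ...} dicts.
    """
    out = detector(text)[0]
    return out["label"], out["score"]

if __name__ == "__main__":
    # Requires `pip install transformers torch` and network access to
    # download the checkpoint on first use.
    from transformers import pipeline

    detector = pipeline(
        "text-classification",
        model="devloverumar/chatgpt-content-detector",
    )
    print(label_text("Some answer text to check.", detector))
```

Keeping the pipeline construction behind the `__main__` guard lets the helper be reused (or unit-tested with a stub) without loading the model.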

Citation

Check out the paper arXiv: 2301.07597:

@article{guo-etal-2023-hc3,
    title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection",
    author = "Guo, Biyang  and
      Zhang, Xin  and
      Wang, Ziyuan  and
      Jiang, Minqi  and
      Nie, Jinran  and
      Ding, Yuxuan  and
      Yue, Jianwei  and
      Wu, Yupeng",
    journal = "arXiv preprint arXiv:2301.07597",
    year = "2023",
}
