
Model Details

These are the trained models for LoGiPT from the NAACL'24 paper "Language Models can be Logical Solvers".

  • LoGiPT-[A]-[B]: The specific model version of LoGiPT
    • [A]: The backbone model, which can be 'vicuna-13b-v1.5-16k', 'CodeLlama-13b-hf' or 'CodeLlama-13b-Instruct-hf'.
    • [B]: The training data, which can be 'proofwriter' or 'prontoqa'.

All models are organised in the Vicuna style and were trained with FastChat 0.2.30.
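
Since the checkpoints follow the standard Hugging Face layout, they can be loaded with the usual transformers API. The sketch below is a minimal, illustrative example using one of the released variants; the Vicuna-style prompt template is an assumption based on the FastChat v1.1 conversation format, so adjust it to match your setup.

```python
# Minimal loading sketch (assumes a standard Hugging Face checkpoint layout).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# One of the released variants: [A]=CodeLlama-13b-Instruct-hf, [B]=prontoqa.
model_id = "jzfeng/LoGiPT-CodeLlama-13b-Instruct-hf-prontoqa"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Vicuna-style prompt (assumption: FastChat v1.1 template; verify against your data).
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: <your deductive-reasoning question here> ASSISTANT:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```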

All training examples are organised in JSON format, following the Vicuna conversation style, and are available in jzfeng/LoGiPT-data.
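
For reference, here is a minimal sketch of what a single training example may look like, assuming the standard FastChat "conversations" schema that Vicuna-style training uses; the id and field values below are placeholders, not actual dataset content.

```python
# Hedged sketch of a Vicuna/FastChat-style training example (placeholder values).
import json

example = {
    "id": "prontoqa_0",  # hypothetical identifier
    "conversations": [
        {"from": "human", "value": "Facts and rules ... Question: ..."},
        {"from": "gpt", "value": "Solver-style step-by-step deduction ... Answer: ..."},
    ],
}
print(json.dumps(example, indent=2))
```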

If you find these models helpful, please cite our NAACL'24 paper (arXiv version: https://arxiv.org/abs/2311.06158):

@inproceedings{feng2024language,
  title={Language Models can be Logical Solvers},
  author={Feng, Jiazhan and Xu, Ruochen and Hao, Junheng and Sharma, Hiteshi and Shen, Yelong and Zhao, Dongyan and Chen, Weizhu},
  booktitle={Findings of the Association for Computational Linguistics: NAACL 2024},
  pages={4026--4042},
  year={2024}
}
