Interactive Evolution: A Neural-Symbolic Self-Training Framework for Large Language Models

Paper Link: https://arxiv.org/abs/2406.11736

Code Repo: https://github.com/xufangzhi/ENVISIONS

🔥 News

  • 🔥🔥🔥 We have released the final checkpoints obtained after self-training!

Note

The self-training process is based on the LLaMA2-Chat model series and is powered by ENVISIONS. The work is still under review.

Prompt for Zero-shot Evaluation

```
Generate the logical representation for the given context and question.
The context is: <context>
The question is: <question>
The logical representation is:
```
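Below is a minimal sketch of zero-shot inference with this prompt using Hugging Face `transformers`. The checkpoint path is a placeholder (substitute the released checkpoint you downloaded), and the generation settings (`max_new_tokens`, greedy decoding) are assumptions, not values prescribed by the paper.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path -- replace with the actual released checkpoint.
model_id = "path/to/envisions-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

context = "..."   # fill in the task context
question = "..."  # fill in the question

# Build the zero-shot prompt exactly as specified above.
prompt = (
    "Generate the logical representation for the given context and question.\n"
    f"The context is: {context}\n"
    f"The question is: {question}\n"
    "The logical representation is:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)

# Decode only the newly generated tokens (the logical representation).
generated = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
```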

Citation

If you find this work helpful, please kindly cite the paper:

```bibtex
@misc{xu2024interactive,
      title={Interactive Evolution: A Neural-Symbolic Self-Training Framework For Large Language Models},
      author={Fangzhi Xu and Qiushi Sun and Kanzhi Cheng and Jun Liu and Yu Qiao and Zhiyong Wu},
      year={2024},
      eprint={2406.11736},
      archivePrefix={arXiv},
}
```