IPM-Llama-2-13b
This model is a fine-tuned version of Llama-2-13b-hf, designed to support the inverse prompting process for the ACL 2024 Findings paper "Towards Better Question Generation in QA-Based Event Extraction."
Paper Link: https://arxiv.org/abs/2405.10517
GitHub Repository: https://github.com/Rcrossmeister/RLQG
This Hugging Face repository provides only the LoRA weights. Please merge them with the backbone model before use; a minimal merging sketch is shown below.
Backbone Model: https://huggingface.co/meta-llama/Llama-2-13b-hf
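A minimal sketch of the merge step using `transformers` and `peft`, assuming the adapter repository id `Rcrossmeister/IPM-Llama-2-13b` (substitute the actual id of this repository) and the backbone linked above:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the Llama-2-13b backbone (requires access to the gated meta-llama repo).
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-13b-hf",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-hf")

# Attach the LoRA adapter from this repository (id assumed here),
# then fold the adapter weights into the base model.
model = PeftModel.from_pretrained(base_model, "Rcrossmeister/IPM-Llama-2-13b")
model = model.merge_and_unload()

# Optionally save the merged model as a standalone checkpoint.
model.save_pretrained("IPM-Llama-2-13b-merged")
tokenizer.save_pretrained("IPM-Llama-2-13b-merged")
```

After merging, the model can be loaded directly with `AutoModelForCausalLM.from_pretrained("IPM-Llama-2-13b-merged")` without the `peft` dependency.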