


"""
[**Multi-Modal Hallucination Control by Visual Information Grounding**](https://arxiv.org/abs/2403.14003)

[**OPERA: Alleviating Hallucination in Multi-Modal Large Language Models via Over-Trust Penalty and Retrospection-Allocation**](https://arxiv.org/pdf/2311.17911.pdf)

[**Mitigating Hallucination in Visual-Language Models via Re-Balancing Contrastive Decoding**](https://arxiv.org/abs/2409.06485v1)

[**Enhancing Reliability in Large Language Models: Self-Detection of Hallucinations With Spontaneous Self-Checks**](https://www.techrxiv.org/users/829447/articles/1223513-enhancing-reliability-in-large-language-models-self-detection-of-hallucinations-with-spontaneous-self-checks)

[**Unlocking Anticipatory Text Generation: A Constrained Approach for Large Language Models Decoding**](https://www.semanticscholar.org/paper/Unlocking-Anticipatory-Text-Generation%3A-A-Approach-Tu-Yavuz/daa171a25956b537b222a564c1488b2b6cfbb6bb)

[**Transferable and Efficient Non-Factual Content Detection via Probe Training with Offline Consistency Checking**](https://papers.cool/venue/2024.acl-long.668@ACL)

============

[**Mitigating Entity-Level Hallucination in Large Language Models**](https://arxiv.org/abs/2407.09417)

[**Simple Token-Level Confidence Improves Caption Correctness**](https://openaccess.thecvf.com/content/WACV2024/html/Petryk_Simple_Token-Level_Confidence_Improves_Caption_Correctness_WACV_2024_paper.html)

[**MetaToken: Detecting Hallucination in Image Descriptions by Meta Classification**](https://arxiv.org/abs/2405.19186)

[**INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection**](https://arxiv.org/abs/2402.03744)

[**On-Policy Fine-grained Knowledge Feedback for Hallucination Mitigation**](https://arxiv.org/abs/2406.12221)

[**Exploiting Semantic Reconstruction to Mitigate Hallucinations in Vision-Language Models**](https://arxiv.org/abs/2403.16167)

[**Seeing is Believing: Mitigating Hallucination in Large Vision-Language Models via CLIP-Guided Decoding**](https://arxiv.org/abs/2402.15300)

[**Truth-Aware Context Selection: Mitigating the Hallucinations of Large Language Models Being Misled by Untruthful Contexts**](https://arxiv.org/abs/2403.07556)

[**Fact-checking the output of large language models via token-level uncertainty quantification**](https://arxiv.org/abs/2403.04696)

[**Developing a Reliable, General-Purpose Hallucination Detection and Mitigation Service: Insights and Lessons Learned**](https://arxiv.org/abs/2407.15441)

[**Reefknot: A Comprehensive Benchmark for Relation Hallucination Evaluation, Analysis and Mitigation in Multimodal Large Language Models**](https://arxiv.org/abs/2408.09429)

[**Mitigating Object Hallucination via Data Augmented Contrastive Tuning**](https://arxiv.org/abs/2405.18654)
"""



# Papers to feed through the single-PDF pipeline. "url" points at the PDF;
# "arxiv" is the paper's arXiv ID when it has one.
lis = [
    {
        "arxiv": "2104.08704",
        "name": "A Token-level Reference-free Hallucination Detection Benchmark for Free-form Text Generation",
        "url": "https://arxiv.org/pdf/2104.08704",
    },
    {
        "name": "Unsupervised Token-level Hallucination Detection from Summary Generation By-products",
        "url": "https://aclanthology.org/2022.gem-1.21.pdf",
    },
    {
        "arxiv": "2407.09417",
        "name": "Mitigating Entity-Level Hallucination in Large Language Models",
        "url": "https://arxiv.org/pdf/2407.09417",
    },
    {
        "arxiv": "2305.07021",
        "name": "Simple Token-Level Confidence Improves Caption Correctness",
        "url": "https://arxiv.org/pdf/2305.07021",
    }
]
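
# Note: the second entry above has no "arxiv" ID because it is hosted on the
# ACL Anthology rather than arXiv. The helper below is an illustrative sketch
# (not part of the app.singlepdf API) of how an entry's PDF URL could be
# derived from its arXiv ID when only the ID is known; https://arxiv.org/pdf/<id>
# is arXiv's standard PDF endpoint.
def resolve_pdf_url(entry: dict) -> str:
    """Return a PDF URL for an entry, falling back to its arXiv ID."""
    if entry.get("url"):
        return entry["url"]
    if entry.get("arxiv"):
        # arXiv serves the PDF directly at this path.
        return f"https://arxiv.org/pdf/{entry['arxiv']}"
    raise KeyError(f"entry has neither 'url' nor 'arxiv': {entry.get('name')!r}")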

def test_singlepdf(title: str, url: str):
    import os
    import re

    from app.singlepdf import SinglePdfRun
    from app.singlepdf.pline import title_list, pt_list

    import app.singlepdf

    # The single-PDF pipeline is configured through module-level globals:
    # point it at this paper's PDF and the shared title/prompt lists.
    app.singlepdf.single.GLOBAL_PDF_URL = str(url)
    app.singlepdf.single.GLOBAL_TITLE_LIST = title_list
    app.singlepdf.single.GLOBAL_PROMPT_LIST = pt_list

    # Paper titles may contain characters that are invalid in file names
    # (e.g. ":" or "/"), so sanitize before building the output path.
    safe_title = re.sub(r'[\\/:*?"<>|]', "_", title)
    out_file_path = f"./output/singlepdf_md/{safe_title}.md"
    os.makedirs(os.path.dirname(out_file_path), exist_ok=True)

    history = SinglePdfRun(out_file_path)
    print(history)


for item in lis:
    test_singlepdf(item["name"], item["url"])
    print("==========\n")

