Add Batch 517c841b-f750-4661-b64e-73158365548e data
This view is limited to 50 files because it contains too many changes. See the raw diff for the full change list.
- .gitattributes +64 -0
- 2024/AFLoRA_ Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models/cedf799c-cbb4-4ca7-91c5-53457a9975a6_content_list.json +1277 -0
- 2024/AFLoRA_ Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models/cedf799c-cbb4-4ca7-91c5-53457a9975a6_model.json +1380 -0
- 2024/AFLoRA_ Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models/cedf799c-cbb4-4ca7-91c5-53457a9975a6_origin.pdf +3 -0
- 2024/AFLoRA_ Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models/full.md +234 -0
- 2024/AFLoRA_ Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models/images.zip +3 -0
- 2024/AFLoRA_ Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models/layout.json +0 -0
- 2024/AGR_ Reinforced Causal Agent-Guided Self-explaining Rationalization/19a5d2a6-e0ae-4231-a1c8-5819b81e5755_content_list.json +1814 -0
- 2024/AGR_ Reinforced Causal Agent-Guided Self-explaining Rationalization/19a5d2a6-e0ae-4231-a1c8-5819b81e5755_model.json +2143 -0
- 2024/AGR_ Reinforced Causal Agent-Guided Self-explaining Rationalization/19a5d2a6-e0ae-4231-a1c8-5819b81e5755_origin.pdf +3 -0
- 2024/AGR_ Reinforced Causal Agent-Guided Self-explaining Rationalization/full.md +380 -0
- 2024/AGR_ Reinforced Causal Agent-Guided Self-explaining Rationalization/images.zip +3 -0
- 2024/AGR_ Reinforced Causal Agent-Guided Self-explaining Rationalization/layout.json +0 -0
- 2024/ATLAS_ Improving Lay Summarisation with Attribute-based Control/dabae923-8653-40d5-a808-ef509db0f2cc_content_list.json +1236 -0
- 2024/ATLAS_ Improving Lay Summarisation with Attribute-based Control/dabae923-8653-40d5-a808-ef509db0f2cc_model.json +1395 -0
- 2024/ATLAS_ Improving Lay Summarisation with Attribute-based Control/dabae923-8653-40d5-a808-ef509db0f2cc_origin.pdf +3 -0
- 2024/ATLAS_ Improving Lay Summarisation with Attribute-based Control/full.md +196 -0
- 2024/ATLAS_ Improving Lay Summarisation with Attribute-based Control/images.zip +3 -0
- 2024/ATLAS_ Improving Lay Summarisation with Attribute-based Control/layout.json +0 -0
- 2024/Aligning Large Language Models via Fine-grained Supervision/d2093850-e28f-43df-a51c-a0b07b648dd6_content_list.json +1541 -0
- 2024/Aligning Large Language Models via Fine-grained Supervision/d2093850-e28f-43df-a51c-a0b07b648dd6_model.json +1778 -0
- 2024/Aligning Large Language Models via Fine-grained Supervision/d2093850-e28f-43df-a51c-a0b07b648dd6_origin.pdf +3 -0
- 2024/Aligning Large Language Models via Fine-grained Supervision/full.md +271 -0
- 2024/Aligning Large Language Models via Fine-grained Supervision/images.zip +3 -0
- 2024/Aligning Large Language Models via Fine-grained Supervision/layout.json +0 -0
- 2024/Uni-Dubbing_ Zero-Shot Speech Synthesis from Visual Articulation/fa9f71e0-943b-4156-8d6d-614ac6caf2ba_content_list.json +0 -0
- 2024/Uni-Dubbing_ Zero-Shot Speech Synthesis from Visual Articulation/fa9f71e0-943b-4156-8d6d-614ac6caf2ba_model.json +0 -0
- 2024/Uni-Dubbing_ Zero-Shot Speech Synthesis from Visual Articulation/fa9f71e0-943b-4156-8d6d-614ac6caf2ba_origin.pdf +3 -0
- 2024/Uni-Dubbing_ Zero-Shot Speech Synthesis from Visual Articulation/full.md +399 -0
- 2024/Uni-Dubbing_ Zero-Shot Speech Synthesis from Visual Articulation/images.zip +3 -0
- 2024/Uni-Dubbing_ Zero-Shot Speech Synthesis from Visual Articulation/layout.json +0 -0
- 2024/UniBridge_ A Unified Approach to Cross-Lingual Transfer Learning for Low-Resource Languages/31fceca4-760f-4c7d-9a2a-9207221cfedf_content_list.json +0 -0
- 2024/UniBridge_ A Unified Approach to Cross-Lingual Transfer Learning for Low-Resource Languages/31fceca4-760f-4c7d-9a2a-9207221cfedf_model.json +0 -0
- 2024/UniBridge_ A Unified Approach to Cross-Lingual Transfer Learning for Low-Resource Languages/31fceca4-760f-4c7d-9a2a-9207221cfedf_origin.pdf +3 -0
- 2024/UniBridge_ A Unified Approach to Cross-Lingual Transfer Learning for Low-Resource Languages/full.md +473 -0
- 2024/UniBridge_ A Unified Approach to Cross-Lingual Transfer Learning for Low-Resource Languages/images.zip +3 -0
- 2024/UniBridge_ A Unified Approach to Cross-Lingual Transfer Learning for Low-Resource Languages/layout.json +0 -0
- 2024/UniCoder_ Scaling Code Large Language Model via Universal Code/6625637b-146d-4630-b379-0fc1c52e41b0_content_list.json +1758 -0
- 2024/UniCoder_ Scaling Code Large Language Model via Universal Code/6625637b-146d-4630-b379-0fc1c52e41b0_model.json +0 -0
- 2024/UniCoder_ Scaling Code Large Language Model via Universal Code/6625637b-146d-4630-b379-0fc1c52e41b0_origin.pdf +3 -0
- 2024/UniCoder_ Scaling Code Large Language Model via Universal Code/full.md +356 -0
- 2024/UniCoder_ Scaling Code Large Language Model via Universal Code/images.zip +3 -0
- 2024/UniCoder_ Scaling Code Large Language Model via Universal Code/layout.json +0 -0
- 2024/Unified Hallucination Detection for Multimodal Large Language Models/ce735746-6b05-466e-b5fd-fd1af150fdd1_content_list.json +0 -0
- 2024/Unified Hallucination Detection for Multimodal Large Language Models/ce735746-6b05-466e-b5fd-fd1af150fdd1_model.json +0 -0
- 2024/Unified Hallucination Detection for Multimodal Large Language Models/ce735746-6b05-466e-b5fd-fd1af150fdd1_origin.pdf +3 -0
- 2024/Unified Hallucination Detection for Multimodal Large Language Models/full.md +744 -0
- 2024/Unified Hallucination Detection for Multimodal Large Language Models/images.zip +3 -0
- 2024/Unified Hallucination Detection for Multimodal Large Language Models/layout.json +0 -0
- 2024/Unintended Impacts of LLM Alignment on Global Representation/cd8d179d-1771-4da7-8090-05c4b67830c2_content_list.json +0 -0
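Each paper folder listed above bundles the source PDF (`*_origin.pdf`), a Markdown rendering (`full.md`), packed figures (`images.zip`), and machine-readable extraction outputs (`*_content_list.json`, `*_model.json`, `layout.json`). As a rough sketch of how the content list can be consumed, the snippet below loads one folder's `*_content_list.json` and walks its blocks using the fields visible in the AFLoRA file further down (`type`, `text`, `text_level`, `page_idx`, `image_caption`, `table_caption`); the folder name comes from the listing above, but the block handling is illustrative, not an official reader for this dataset.

```python
import json
from pathlib import Path

# One of the folders from the listing above.
paper_dir = Path(
    "2024/AFLoRA_ Adaptive Freezing of Low Rank Adaptation in "
    "Parameter Efficient Fine-Tuning of Large Models"
)

# Each folder carries exactly one UUID-named *_content_list.json.
blocks = json.loads(next(paper_dir.glob("*_content_list.json")).read_text(encoding="utf-8"))

# Walk the typed blocks: headings are text blocks carrying "text_level",
# figures and tables carry their captions (tables also hold an HTML "table_body").
for block in blocks:
    kind = block["type"]
    if kind == "text":
        prefix = "#" * block["text_level"] + " " if "text_level" in block else ""
        print(f'[p{block["page_idx"]}] {prefix}{block["text"]}')
    elif kind in ("image", "table"):
        caption = " ".join(block.get(f"{kind}_caption", []))
        print(f'[p{block["page_idx"]}] <{kind}> {caption}')
```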
.gitattributes
CHANGED
@@ -4862,3 +4862,67 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
2024/Uncovering[[:space:]]the[[:space:]]Full[[:space:]]Potential[[:space:]]of[[:space:]]Visual[[:space:]]Grounding[[:space:]]Methods[[:space:]]in[[:space:]]VQA/49ac657d-85be-41b1-8b8c-8df6eef82006_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Understanding[[:space:]]Retrieval[[:space:]]Robustness[[:space:]]for[[:space:]]Retrieval-augmented[[:space:]]Image[[:space:]]Captioning/9b7473ff-f22a-4c66-b360-2e80dd7b2e03_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Understanding[[:space:]]and[[:space:]]Addressing[[:space:]]the[[:space:]]Under-Translation[[:space:]]Problem[[:space:]]from[[:space:]]the[[:space:]]Perspective[[:space:]]of[[:space:]]Decoding[[:space:]]Objective/1075b938-2636-4bce-8a2d-1a489fc900a0_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/AFLoRA_[[:space:]]Adaptive[[:space:]]Freezing[[:space:]]of[[:space:]]Low[[:space:]]Rank[[:space:]]Adaptation[[:space:]]in[[:space:]]Parameter[[:space:]]Efficient[[:space:]]Fine-Tuning[[:space:]]of[[:space:]]Large[[:space:]]Models/cedf799c-cbb4-4ca7-91c5-53457a9975a6_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/AGR_[[:space:]]Reinforced[[:space:]]Causal[[:space:]]Agent-Guided[[:space:]]Self-explaining[[:space:]]Rationalization/19a5d2a6-e0ae-4231-a1c8-5819b81e5755_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/ATLAS_[[:space:]]Improving[[:space:]]Lay[[:space:]]Summarisation[[:space:]]with[[:space:]]Attribute-based[[:space:]]Control/dabae923-8653-40d5-a808-ef509db0f2cc_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Aligning[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]via[[:space:]]Fine-grained[[:space:]]Supervision/d2093850-e28f-43df-a51c-a0b07b648dd6_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Uni-Dubbing_[[:space:]]Zero-Shot[[:space:]]Speech[[:space:]]Synthesis[[:space:]]from[[:space:]]Visual[[:space:]]Articulation/fa9f71e0-943b-4156-8d6d-614ac6caf2ba_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/UniBridge_[[:space:]]A[[:space:]]Unified[[:space:]]Approach[[:space:]]to[[:space:]]Cross-Lingual[[:space:]]Transfer[[:space:]]Learning[[:space:]]for[[:space:]]Low-Resource[[:space:]]Languages/31fceca4-760f-4c7d-9a2a-9207221cfedf_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/UniCoder_[[:space:]]Scaling[[:space:]]Code[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]via[[:space:]]Universal[[:space:]]Code/6625637b-146d-4630-b379-0fc1c52e41b0_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Unified[[:space:]]Hallucination[[:space:]]Detection[[:space:]]for[[:space:]]Multimodal[[:space:]]Large[[:space:]]Language[[:space:]]Models/ce735746-6b05-466e-b5fd-fd1af150fdd1_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Unintended[[:space:]]Impacts[[:space:]]of[[:space:]]LLM[[:space:]]Alignment[[:space:]]on[[:space:]]Global[[:space:]]Representation/cd8d179d-1771-4da7-8090-05c4b67830c2_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Unity[[:space:]]in[[:space:]]Diversity_[[:space:]]Collaborative[[:space:]]Pre-training[[:space:]]Across[[:space:]]Multimodal[[:space:]]Medical[[:space:]]Sources/4b557841-ecfa-4af2-a424-6b18b998e4c3_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Unlearning[[:space:]]Traces[[:space:]]the[[:space:]]Influential[[:space:]]Training[[:space:]]Data[[:space:]]of[[:space:]]Language[[:space:]]Models/c6f9d8ae-4c3d-4877-ad15-6f4ad225115a_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Unlocking[[:space:]]Data-free[[:space:]]Low-bit[[:space:]]Quantization[[:space:]]with[[:space:]]Matrix[[:space:]]Decomposition[[:space:]]for[[:space:]]KV[[:space:]]Cache[[:space:]]Compression/02d2b8f6-5d86-4bf5-b61e-66370220bd22_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Unlocking[[:space:]]the[[:space:]]Power[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]for[[:space:]]Entity[[:space:]]Alignment/d99602aa-2333-4adc-aa97-9d5930d827ba_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Unsupervised[[:space:]]Information[[:space:]]Refinement[[:space:]]Training[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]for[[:space:]]Retrieval-Augmented[[:space:]]Generation/5238593e-e349-4b3b-9f5b-91a4694891f4_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Unsupervised[[:space:]]Multimodal[[:space:]]Clustering[[:space:]]for[[:space:]]Semantics[[:space:]]Discovery[[:space:]]in[[:space:]]Multimodal[[:space:]]Utterances/7e3fb09a-87f7-4084-920d-8634347ebfe9_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Unveiling[[:space:]]Linguistic[[:space:]]Regions[[:space:]]in[[:space:]]Large[[:space:]]Language[[:space:]]Models/cbdd20a4-6c14-4824-b688-047f4d70573b_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Using[[:space:]]Natural[[:space:]]Language[[:space:]]Explanations[[:space:]]to[[:space:]]Improve[[:space:]]Robustness[[:space:]]of[[:space:]]In-context[[:space:]]Learning/741c7161-f067-49d4-9999-cb66f4dfd726_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Using[[:space:]]Synchronic[[:space:]]Definitions[[:space:]]and[[:space:]]Semantic[[:space:]]Relations[[:space:]]to[[:space:]]Classify[[:space:]]Semantic[[:space:]]Change[[:space:]]Types/f064da67-015b-4a70-ac57-1baf8cfc3d4b_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/VIEScore_[[:space:]]Towards[[:space:]]Explainable[[:space:]]Metrics[[:space:]]for[[:space:]]Conditional[[:space:]]Image[[:space:]]Synthesis[[:space:]]Evaluation/5b317405-9204-44d7-bad9-4138293f58f9_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/VISTA_[[:space:]]Visualized[[:space:]]Text[[:space:]]Embedding[[:space:]]For[[:space:]]Universal[[:space:]]Multi-Modal[[:space:]]Retrieval/2bd336a1-3a96-41b3-8aca-47dbfdcf0b79_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/ValueBench_[[:space:]]Towards[[:space:]]Comprehensively[[:space:]]Evaluating[[:space:]]Value[[:space:]]Orientations[[:space:]]and[[:space:]]Understanding[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models/83b6e4a7-29cc-40a8-8774-5c9aae0e7ddf_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/VariErr[[:space:]]NLI_[[:space:]]Separating[[:space:]]Annotation[[:space:]]Error[[:space:]]from[[:space:]]Human[[:space:]]Label[[:space:]]Variation/e15b9061-9332-4c88-b8cc-ba24e76b3275_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/VerifiNER_[[:space:]]Verification-augmented[[:space:]]NER[[:space:]]via[[:space:]]Knowledge-grounded[[:space:]]Reasoning[[:space:]]with[[:space:]]Large[[:space:]]Language[[:space:]]Models/0fecaa99-5ba0-4b2d-8f45-0ea5f92f5adc_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/ViSAGe_[[:space:]]A[[:space:]]Global-Scale[[:space:]]Analysis[[:space:]]of[[:space:]]Visual[[:space:]]Stereotypes[[:space:]]in[[:space:]]Text-to-Image[[:space:]]Generation/79c9abf9-bf67-4aad-8f1f-fd3d56ec29f0_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Video-ChatGPT_[[:space:]]Towards[[:space:]]Detailed[[:space:]]Video[[:space:]]Understanding[[:space:]]via[[:space:]]Large[[:space:]]Vision[[:space:]]and[[:space:]]Language[[:space:]]Models/2ceaa057-9200-4195-96f9-e70e68b852b6_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Virtual[[:space:]]Compiler[[:space:]]Is[[:space:]]All[[:space:]]You[[:space:]]Need[[:space:]]For[[:space:]]Assembly[[:space:]]Code[[:space:]]Search/93290b10-6c44-423b-b4e5-dde1c2b3801a_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/VisDiaHalBench_[[:space:]]A[[:space:]]Visual[[:space:]]Dialogue[[:space:]]Benchmark[[:space:]]For[[:space:]]Diagnosing[[:space:]]Hallucination[[:space:]]in[[:space:]]Large[[:space:]]Vision-Language[[:space:]]Models/19db181b-97b8-43ba-b204-071f0d3c4f17_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/VisualWebArena_[[:space:]]Evaluating[[:space:]]Multimodal[[:space:]]Agents[[:space:]]on[[:space:]]Realistic[[:space:]]Visual[[:space:]]Web[[:space:]]Tasks/08912c7e-432e-4a71-b949-181f5307a145_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Visualization[[:space:]]Recommendation[[:space:]]with[[:space:]]Prompt-based[[:space:]]Reprogramming[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models/531b94ed-f372-497e-be5e-9b546585ca20_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/VoiceCraft_[[:space:]]Zero-Shot[[:space:]]Speech[[:space:]]Editing[[:space:]]and[[:space:]]Text-to-Speech[[:space:]]in[[:space:]]the[[:space:]]Wild/7951da9f-b08e-4f26-9da4-4dfc7e282198_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/VulLibGen_[[:space:]]Generating[[:space:]]Names[[:space:]]of[[:space:]]Vulnerability-Affected[[:space:]]Packages[[:space:]]via[[:space:]]a[[:space:]]Large[[:space:]]Language[[:space:]]Model/7524b644-fb0b-4ab9-884e-9061dc3f1403_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/WARDEN_[[:space:]]Multi-Directional[[:space:]]Backdoor[[:space:]]Watermarks[[:space:]]for[[:space:]]Embedding-as-a-Service[[:space:]]Copyright[[:space:]]Protection/ae17bc02-bda3-4863-b9a8-12b550ab8af3_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/WRP_[[:space:]]Weight[[:space:]]Recover[[:space:]]Prune[[:space:]]for[[:space:]]Structured[[:space:]]Sparsity/c1dba89c-88be-4bc7-a598-77f48f7ed329_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/WatME_[[:space:]]Towards[[:space:]]Lossless[[:space:]]Watermarking[[:space:]]Through[[:space:]]Lexical[[:space:]]Redundancy/f56ee9bf-c769-469c-bcb9-1f619848588e_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/WaterBench_[[:space:]]Towards[[:space:]]Holistic[[:space:]]Evaluation[[:space:]]of[[:space:]]Watermarks[[:space:]]for[[:space:]]Large[[:space:]]Language[[:space:]]Models/eea72323-e15c-47cc-8375-2844ae9890c0_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Wav2Gloss_[[:space:]]Generating[[:space:]]Interlinear[[:space:]]Glossed[[:space:]]Text[[:space:]]from[[:space:]]Speech/39b0ea5d-3e2e-445f-8900-a184a0914aad_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/WaveCoder_[[:space:]]Widespread[[:space:]]And[[:space:]]Versatile[[:space:]]Enhancement[[:space:]]For[[:space:]]Code[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]By[[:space:]]Instruction[[:space:]]Tuning/23a7e2d2-264e-4262-8816-a868d9c1bb38_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/WebCiteS_[[:space:]]Attributed[[:space:]]Query-Focused[[:space:]]Summarization[[:space:]]on[[:space:]]Chinese[[:space:]]Web[[:space:]]Search[[:space:]]Results[[:space:]]with[[:space:]]Citations/9ecb915a-3af2-4d8b-b66f-0aa607e9bbda_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/WebVoyager_[[:space:]]Building[[:space:]]an[[:space:]]End-to-End[[:space:]]Web[[:space:]]Agent[[:space:]]with[[:space:]]Large[[:space:]]Multimodal[[:space:]]Models/606920ad-3c90-4af2-a9b9-d8633ee4633d_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/What[[:space:]]Do[[:space:]]Language[[:space:]]Models[[:space:]]Hear_[[:space:]]Probing[[:space:]]for[[:space:]]Auditory[[:space:]]Representations[[:space:]]in[[:space:]]Language[[:space:]]Models/b900d2e5-1c0d-4c40-8234-72e05a523e15_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/What[[:space:]]Do[[:space:]]Language[[:space:]]Models[[:space:]]Learn[[:space:]]in[[:space:]]Context_[[:space:]]The[[:space:]]Structured[[:space:]]Task[[:space:]]Hypothesis./ce6acf0b-8d02-46ef-9378-6b836b2224f9_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/What[[:space:]]Does[[:space:]]the[[:space:]]Bot[[:space:]]Say_[[:space:]]Opportunities[[:space:]]and[[:space:]]Risks[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Models[[:space:]]in[[:space:]]Social[[:space:]]Media[[:space:]]Bot[[:space:]]Detection/18fbb2aa-c68c-4e0e-8bd0-82a7eb8fb840_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/What[[:space:]]Evidence[[:space:]]Do[[:space:]]Language[[:space:]]Models[[:space:]]Find[[:space:]]Convincing_/b2f77042-ae7a-4956-ba3b-dfa75a1be6e2_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/What[[:space:]]Languages[[:space:]]are[[:space:]]Easy[[:space:]]to[[:space:]]Language-Model_[[:space:]]A[[:space:]]Perspective[[:space:]]from[[:space:]]Learning[[:space:]]Probabilistic[[:space:]]Regular[[:space:]]Languages/05cd1274-45a0-4006-9bc0-5d3f2c93a71a_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/What[[:space:]]is[[:space:]]the[[:space:]]Best[[:space:]]Way[[:space:]]for[[:space:]]ChatGPT[[:space:]]to[[:space:]]Translate[[:space:]]Poetry_/da2a545d-e601-492b-ac51-5d7ffa209e29_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/When[[:space:]]Benchmarks[[:space:]]are[[:space:]]Targets_[[:space:]]Revealing[[:space:]]the[[:space:]]Sensitivity[[:space:]]of[[:space:]]Large[[:space:]]Language[[:space:]]Model[[:space:]]Leaderboards/72e2d524-731e-4e6f-bdcc-81a1e8e5094c_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/When[[:space:]]Good[[:space:]]and[[:space:]]Reproducible[[:space:]]Results[[:space:]]are[[:space:]]a[[:space:]]Giant[[:space:]]with[[:space:]]Feet[[:space:]]of[[:space:]]Clay_[[:space:]]The[[:space:]]Importance[[:space:]]of[[:space:]]Software[[:space:]]Quality[[:space:]]in[[:space:]]NLP/e15572d7-f91b-404d-85e2-acf30e0adf2e_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/When[[:space:]]Only[[:space:]]Time[[:space:]]Will[[:space:]]Tell_[[:space:]]Interpreting[[:space:]]How[[:space:]]Transformers[[:space:]]Process[[:space:]]Local[[:space:]]Ambiguities[[:space:]]Through[[:space:]]the[[:space:]]Lens[[:space:]]of[[:space:]]Restart-Incrementality/07603766-a5a2-4f86-b737-d27bc3efadc4_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/When[[:space:]]Phrases[[:space:]]Meet[[:space:]]Probabilities_[[:space:]]Enabling[[:space:]]Open[[:space:]]Relation[[:space:]]Extraction[[:space:]]with[[:space:]]Cooperating[[:space:]]Large[[:space:]]Language[[:space:]]Models/d2bb0609-3502-4a62-a4ad-8b3b4fb72bfb_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/When[[:space:]]is[[:space:]]Tree[[:space:]]Search[[:space:]]Useful[[:space:]]for[[:space:]]LLM[[:space:]]Planning_[[:space:]]It[[:space:]]Depends[[:space:]]on[[:space:]]the[[:space:]]Discriminator/a98e8e6f-f539-4154-b1ed-ade63e749f55_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Where[[:space:]]Do[[:space:]]People[[:space:]]Tell[[:space:]]Stories[[:space:]]Online_[[:space:]]Story[[:space:]]Detection[[:space:]]Across[[:space:]]Online[[:space:]]Communities/6962bf85-8a07-4166-bd4e-634ddc5a8f32_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Who[[:space:]]Wrote[[:space:]]this[[:space:]]Code_[[:space:]]Watermarking[[:space:]]for[[:space:]]Code[[:space:]]Generation/f00a57cf-9b3d-4c97-aac9-54fa869507af_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Whose[[:space:]]Preferences_[[:space:]]Differences[[:space:]]in[[:space:]]Fairness[[:space:]]Preferences[[:space:]]and[[:space:]]Their[[:space:]]Impact[[:space:]]on[[:space:]]the[[:space:]]Fairness[[:space:]]of[[:space:]]AI[[:space:]]Utilizing[[:space:]]Human[[:space:]]Feedback/34846998-f712-45ee-924c-a7ee48f09693_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Why[[:space:]]Don’t[[:space:]]Prompt-Based[[:space:]]Fairness[[:space:]]Metrics[[:space:]]Correlate_/f3d35bdc-85bd-4a2c-80b4-216b6ecbe9f1_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Why[[:space:]]are[[:space:]]Sensitive[[:space:]]Functions[[:space:]]Hard[[:space:]]for[[:space:]]Transformers_/380a68e6-5093-4d48-a8a9-48278be0ef97_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Word[[:space:]]Embeddings[[:space:]]Are[[:space:]]Steers[[:space:]]for[[:space:]]Language[[:space:]]Models/eb9ae937-278e-481f-b95c-75f0dbcdadb4_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Word[[:space:]]Matters_[[:space:]]What[[:space:]]Influences[[:space:]]Domain[[:space:]]Adaptation[[:space:]]in[[:space:]]Summarization_/e80e50fc-e139-4df8-a6c5-bc9af3eaa924_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/XCodeEval_[[:space:]]An[[:space:]]Execution-based[[:space:]]Large[[:space:]]Scale[[:space:]]Multilingual[[:space:]]Multitask[[:space:]]Benchmark[[:space:]]for[[:space:]]Code[[:space:]]Understanding,[[:space:]]Generation,[[:space:]]Translation[[:space:]]and[[:space:]]Retrieval/0b595b28-3668-40aa-ab1b-df883875cb98_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/XFT_[[:space:]]Unlocking[[:space:]]the[[:space:]]Power[[:space:]]of[[:space:]]Code[[:space:]]Instruction[[:space:]]Tuning[[:space:]]by[[:space:]]Simply[[:space:]]Merging[[:space:]]Upcycled[[:space:]]Mixture-of-Experts/b51e45c6-3515-4a47-825f-5a60538cc008_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/XLAVS-R_[[:space:]]Cross-Lingual[[:space:]]Audio-Visual[[:space:]]Speech[[:space:]]Representation[[:space:]]Learning[[:space:]]for[[:space:]]Noise-Robust[[:space:]]Speech[[:space:]]Perception/63d15722-829c-4d23-a3ae-3adfa6089384_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Your[[:space:]]Transformer[[:space:]]is[[:space:]]Secretly[[:space:]]Linear/6f7e7318-e29a-49eb-a56d-c61df3d4097e_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/Zero-Shot[[:space:]]Cross-Domain[[:space:]]Dialogue[[:space:]]State[[:space:]]Tracking[[:space:]]via[[:space:]]Dual[[:space:]]Low-Rank[[:space:]]Adaptation/40ea9bcc-d7a7-418b-83e1-a8726224eb7d_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/mCoT_[[:space:]]Multilingual[[:space:]]Instruction[[:space:]]Tuning[[:space:]]for[[:space:]]Reasoning[[:space:]]Consistency[[:space:]]in[[:space:]]Language[[:space:]]Models/63cd5fa2-f63a-4380-9a54-d27d86002907_origin.pdf filter=lfs diff=lfs merge=lfs -text
2024/∞Bench_[[:space:]]Extending[[:space:]]Long[[:space:]]Context[[:space:]]Evaluation[[:space:]]Beyond[[:space:]]100K[[:space:]]Tokens/b08182fe-073f-497a-ae40-7297c04f8b98_origin.pdf filter=lfs diff=lfs merge=lfs -text
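The hunk above registers every newly added `*_origin.pdf` with Git LFS. Because `.gitattributes` patterns are whitespace-delimited, the spaces in the per-paper directory names are escaped with the character class `[[:space:]]`, which is how `git lfs track`-style tooling typically writes them. The helper below is a small illustrative sketch (not the repository's actual tooling) that reproduces one of the lines added above.

```python
def lfs_attribute_line(repo_path: str) -> str:
    """Build the .gitattributes rule that routes repo_path through Git LFS.

    Patterns in .gitattributes are whitespace-delimited, so literal spaces in
    the paper-title directories are escaped as [[:space:]], matching the
    entries added in this hunk.
    """
    return repo_path.replace(" ", "[[:space:]]") + " filter=lfs diff=lfs merge=lfs -text"

# Reproduces one of the entries from this hunk:
print(lfs_attribute_line(
    "2024/UniCoder_ Scaling Code Large Language Model via Universal Code/"
    "6625637b-146d-4630-b379-0fc1c52e41b0_origin.pdf"
))
```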
2024/AFLoRA_ Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models/cedf799c-cbb4-4ca7-91c5-53457a9975a6_content_list.json
ADDED
@@ -0,0 +1,1277 @@
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
144,
|
| 8 |
+
89,
|
| 9 |
+
852,
|
| 10 |
+
130
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Zeyu Liu $^{1,1}$ Souvik Kundu $^{1,2}$ Anni Li $^{1}$ Junrui Wan $^{1}$ Lianghao Jiang $^{1}$ Peter A. Beerel $^{1}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
119,
|
| 19 |
+
148,
|
| 20 |
+
880,
|
| 21 |
+
165
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "<sup>1</sup> University of Southern California, USA <sup>2</sup> Intel Labs, San Diego, USA",
|
| 28 |
+
"bbox": [
|
| 29 |
+
206,
|
| 30 |
+
166,
|
| 31 |
+
793,
|
| 32 |
+
181
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "{liuzeyu, annili, junruiwa, ljiang40, pabeerel}@usc.edu souvikk.kundu@intel.com \n†Equally contributing authors",
|
| 39 |
+
"bbox": [
|
| 40 |
+
102,
|
| 41 |
+
183,
|
| 42 |
+
900,
|
| 43 |
+
217
|
| 44 |
+
],
|
| 45 |
+
"page_idx": 0
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"text": "Abstract",
|
| 50 |
+
"text_level": 1,
|
| 51 |
+
"bbox": [
|
| 52 |
+
260,
|
| 53 |
+
252,
|
| 54 |
+
339,
|
| 55 |
+
267
|
| 56 |
+
],
|
| 57 |
+
"page_idx": 0
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"type": "text",
|
| 61 |
+
"text": "We present a novel parameter-efficient finetuning (PEFT) method, dubbed as adaptive freezing of low rank adaptation (AFLoRA). Specifically, for each pre-trained frozen weight tensor, we add a parallel path of trainable low-rank matrices, namely a down-projection and an up-projection matrix, each of which is followed by a feature transformation vector. Based on a novel freezing score, we then incrementally freeze these projection matrices during fine-tuning to reduce the computation and alleviate over-fitting. Our experimental results demonstrate that we can achieve state-of-the-art performance with an average improvement of up to $1.09\\%$ as evaluated on the GLUE and GSM8k benchmark while yielding up to $9.5 \\times$ fewer average trainable parameters. While compared in terms of runtime, AFLoRA can yield up to $1.86 \\times$ improvement as opposed to similar PEFT alternatives. Besides the practical utility of our approach, we provide insights on the trainability requirements of LoRA paths at different modules and the freezing schedule for the different projection matrices. Code is released at: https://github.com/ zeyuliu1037/AFLoRA/tree/main.",
|
| 62 |
+
"bbox": [
|
| 63 |
+
139,
|
| 64 |
+
282,
|
| 65 |
+
460,
|
| 66 |
+
652
|
| 67 |
+
],
|
| 68 |
+
"page_idx": 0
|
| 69 |
+
},
|
| 70 |
+
{
|
| 71 |
+
"type": "text",
|
| 72 |
+
"text": "1 Introduction",
|
| 73 |
+
"text_level": 1,
|
| 74 |
+
"bbox": [
|
| 75 |
+
114,
|
| 76 |
+
665,
|
| 77 |
+
260,
|
| 78 |
+
682
|
| 79 |
+
],
|
| 80 |
+
"page_idx": 0
|
| 81 |
+
},
|
| 82 |
+
{
|
| 83 |
+
"type": "text",
|
| 84 |
+
"text": "Pre-trained language models such as BERT (Devlin et al., 2018), GPT-3 (Brown et al., 2020), and LLaMA2 (Touvron et al., 2023) have demonstrated commendable performance on various natural language processing (NLP) tasks (Kang et al., 2024). However, their zero-shot performance on many downstream tasks often falls short of expectations. One possible solution is full fine-tuning (FFT) of the model on the downstream dataset. However, the large model parameter size makes this process prohibitively costly.",
|
| 85 |
+
"bbox": [
|
| 86 |
+
112,
|
| 87 |
+
693,
|
| 88 |
+
489,
|
| 89 |
+
870
|
| 90 |
+
],
|
| 91 |
+
"page_idx": 0
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"type": "text",
|
| 95 |
+
"text": "To address this challenge, various parameter-efficient fine-tuning (PEFT) methods including low",
|
| 96 |
+
"bbox": [
|
| 97 |
+
112,
|
| 98 |
+
887,
|
| 99 |
+
489,
|
| 100 |
+
919
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "image",
|
| 106 |
+
"img_path": "images/a6b776f7442f789b18c2e098076e23f8b779d79eaa729ab1e8c91888b0ec7258.jpg",
|
| 107 |
+
"image_caption": [
|
| 108 |
+
"Figure 1: Schematic comparison of LoRA (Hu et al., 2021), ELoRA (Kopiczko et al., 2024), and AFLoRA and their associated advantages and disadvantages in terms of various metrics. $r_{L}$ and $r_{V}$ , represent the rank of the low-rank path used in LoRA and ELoRA methods, respectively. FT and KU refer to fine-tuned weights and the Kaiming uniform initialization, respectively."
|
| 109 |
+
],
|
| 110 |
+
"image_footnote": [],
|
| 111 |
+
"bbox": [
|
| 112 |
+
510,
|
| 113 |
+
252,
|
| 114 |
+
884,
|
| 115 |
+
380
|
| 116 |
+
],
|
| 117 |
+
"page_idx": 0
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"type": "text",
|
| 121 |
+
"text": "rank adaptation (LoRA) (Hu et al., 2021), adapter tuning (He et al., 2021), and prompt tuning (Lester et al., 2021) are proposed. These methods add parameters to the trained model for fine-tuning, bypassing the need to adjust the weights of the pre-trained model. In particular, LoRA (Hu et al., 2021) and its variants (Zhang et al., 2023) add a trainable low-rank path consisting of down-projection and up-projection matrices to the model, inspired by (Aghajanyan et al., 2020) which showed that such low-rank paths can effectively approximate the trained weight tensors. ELoRA (Kopiczko et al., 2024) extends LoRA by adding trainable feature transformation vectors to the output of each project matrix. They showed that SoTA accuracy can be achieved with the projection matrices frozen after random initialization while keeping the two feature transformation vectors trainable. This approach significantly reduces the number of trainable parameters. However, compared to LoRA, ELoRA incurs higher computation costs due to the higher rank needed for the frozen projection matrices. Fig. 1 illustrates LoRA and ELoRA, contrasting them to our proposed method AFLoRA.",
|
| 122 |
+
"bbox": [
|
| 123 |
+
507,
|
| 124 |
+
517,
|
| 125 |
+
884,
|
| 126 |
+
917
|
| 127 |
+
],
|
| 128 |
+
"page_idx": 0
|
| 129 |
+
},
|
| 130 |
+
{
|
| 131 |
+
"type": "page_number",
|
| 132 |
+
"text": "161",
|
| 133 |
+
"bbox": [
|
| 134 |
+
485,
|
| 135 |
+
928,
|
| 136 |
+
514,
|
| 137 |
+
940
|
| 138 |
+
],
|
| 139 |
+
"page_idx": 0
|
| 140 |
+
},
|
| 141 |
+
{
|
| 142 |
+
"type": "footer",
|
| 143 |
+
"text": "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 161-167 August 11-16, 2024 ©2024 Association for Computational Linguistics",
|
| 144 |
+
"bbox": [
|
| 145 |
+
95,
|
| 146 |
+
945,
|
| 147 |
+
900,
|
| 148 |
+
973
|
| 149 |
+
],
|
| 150 |
+
"page_idx": 0
|
| 151 |
+
},
|
| 152 |
+
{
|
| 153 |
+
"type": "text",
|
| 154 |
+
"text": "Our contributions. To reduce the trainable parameter count and computation costs of fine-tuning, we present Adaptive Freezing of Low Rank Adaptation (AFLoRA). More specifically, we first investigate the rank needed for the frozen LoRA path in ELoRA and observe that reducing the rank of the frozen projection matrices (PM) causes a drop in fine-tuning performance.",
|
| 155 |
+
"bbox": [
|
| 156 |
+
112,
|
| 157 |
+
102,
|
| 158 |
+
489,
|
| 159 |
+
230
|
| 160 |
+
],
|
| 161 |
+
"page_idx": 1
|
| 162 |
+
},
|
| 163 |
+
{
|
| 164 |
+
"type": "text",
|
| 165 |
+
"text": "Based on this insight, we present AFLoRA, which starts with a low-rank trainable path that includes projection matrices and feature transformation vectors and trains the path for some epochs. We then gradually freeze the projection matrices based on a novel freezing score that acts as a proxy for the trainability requirement of a LoRA tensor. In this way, we not only help alleviate the over-fitting issue but also, improve the computation efficiency. To evaluate the benefit of AFLoRA, we perform extensive evaluations on multiple NLP benchmark datasets and compare accuracy, FLOPs, and training time with several existing alternatives. Specifically, compared to ELoRA we yield $1.86 \\times$ and $2.96 \\times$ improvement in runtime and FLOPs, respectively, while remaining comparable as LoRA on these two metrics. Compared to LoRA we require $9.5 \\times$ fewer average trainable parameters to yield similar or improved performance.",
|
| 166 |
+
"bbox": [
|
| 167 |
+
115,
|
| 168 |
+
231,
|
| 169 |
+
489,
|
| 170 |
+
537
|
| 171 |
+
],
|
| 172 |
+
"page_idx": 1
|
| 173 |
+
},
|
| 174 |
+
{
|
| 175 |
+
"type": "text",
|
| 176 |
+
"text": "2 Related Works",
|
| 177 |
+
"text_level": 1,
|
| 178 |
+
"bbox": [
|
| 179 |
+
112,
|
| 180 |
+
552,
|
| 181 |
+
278,
|
| 182 |
+
568
|
| 183 |
+
],
|
| 184 |
+
"page_idx": 1
|
| 185 |
+
},
|
| 186 |
+
{
|
| 187 |
+
"type": "text",
|
| 188 |
+
"text": "PEFT (Hu et al., 2021; Kundu et al., 2024; Sridhar et al., 2023; Yin et al., 2024) refers to a collection of methodologies that focus on allowing a small number of parameters to fine-tune to yield good performance on a downstream task. For example, prefix-tuning (Li and Liang, 2021) adds trainable prefix tokens to a model's input or hidden layers while adapter-tuning (Houlsby et al., 2019) inserts small neural network layers, known as adapters, within each layer of a pre-trained model. LoRA (Hu et al., 2021), on the other hand, adds low-rank tensors in parallel to the frozen pre-trained weights. AdaLoRA (Zhang et al., 2023) allows the rank of the LoRA path to be chosen in an adaptive way. Other variants like SoRA (Ding et al., 2023) and LoSparse (Li et al., 2023) have investigated the impact of sparsity in and alongside the low-rank path, respectively. Recently, efficient low-rank adaptation (ELoRA) (Kopiczko et al., 2024) has proposed to keep the LoRA path frozen, while introducing two trainable feature transformation",
|
| 189 |
+
"bbox": [
|
| 190 |
+
112,
|
| 191 |
+
580,
|
| 192 |
+
489,
|
| 193 |
+
917
|
| 194 |
+
],
|
| 195 |
+
"page_idx": 1
|
| 196 |
+
},
|
| 197 |
+
{
|
| 198 |
+
"type": "image",
|
| 199 |
+
"img_path": "images/bd8f2edccbe1313512cb444401da10456ef73ae16f1ef82677ff3c61a692384b.jpg",
|
| 200 |
+
"image_caption": [
|
| 201 |
+
"Figure 2: Performance of ELoRA with two different ranks of the frozen projection matrices."
|
| 202 |
+
],
|
| 203 |
+
"image_footnote": [],
|
| 204 |
+
"bbox": [
|
| 205 |
+
569,
|
| 206 |
+
80,
|
| 207 |
+
823,
|
| 208 |
+
197
|
| 209 |
+
],
|
| 210 |
+
"page_idx": 1
|
| 211 |
+
},
|
| 212 |
+
{
|
| 213 |
+
"type": "text",
|
| 214 |
+
"text": "vectors. Thus, this work only studies an extreme scenario of keeping the LoRA path frozen, and, to the best of our knowledge, no work has investigated the trainability requirement of the projection matrices.",
|
| 215 |
+
"bbox": [
|
| 216 |
+
507,
|
| 217 |
+
246,
|
| 218 |
+
884,
|
| 219 |
+
326
|
| 220 |
+
],
|
| 221 |
+
"page_idx": 1
|
| 222 |
+
},
|
| 223 |
+
{
|
| 224 |
+
"type": "text",
|
| 225 |
+
"text": "3 Motivational Case Study",
|
| 226 |
+
"text_level": 1,
|
| 227 |
+
"bbox": [
|
| 228 |
+
507,
|
| 229 |
+
353,
|
| 230 |
+
756,
|
| 231 |
+
370
|
| 232 |
+
],
|
| 233 |
+
"page_idx": 1
|
| 234 |
+
},
|
| 235 |
+
{
|
| 236 |
+
"type": "text",
|
| 237 |
+
"text": "To understand the high-rank requirement for the frozen projection matrices in ELoRA, we conduct two sets of fine-tuning on SST-2 and MRPC, with ELoRA having rank $(r)$ of 1024 and 4, respectively. As we can see in Fig. 2, the model with $r = 4$ , yields poorer performance, highlighting the need for high rank for the frozen tensors. This high rank causes ELoRA to potentially be FLOPs inefficient.",
|
| 238 |
+
"bbox": [
|
| 239 |
+
507,
|
| 240 |
+
379,
|
| 241 |
+
884,
|
| 242 |
+
507
|
| 243 |
+
],
|
| 244 |
+
"page_idx": 1
|
| 245 |
+
},
|
| 246 |
+
{
|
| 247 |
+
"type": "text",
|
| 248 |
+
"text": "4 AFLoRA: Methodology",
|
| 249 |
+
"text_level": 1,
|
| 250 |
+
"bbox": [
|
| 251 |
+
507,
|
| 252 |
+
535,
|
| 253 |
+
749,
|
| 254 |
+
552
|
| 255 |
+
],
|
| 256 |
+
"page_idx": 1
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"type": "text",
|
| 260 |
+
"text": "Module Structure. Inspired by the framework proposed by Kopiczko et al. (2024), we design the LoRA module to encompass four components, namely, the down-projection linear layer $(lora_{A})$ , the up-projection linear layer $(lora_{B})$ , and two feature transform vectors $(s_{d},$ and $s_b)$ placed before and after $lora_{B}$ . However, unlike (Kopiczko et al., 2024), we keep both the projection matrices $(lora_{A}$ and $lora_{B})$ and vectors trainable at the beginning and keep the rank very low. The module processes a given input $X$ through these components to produce an output $Y$ . The complete operation for a layer $l$ can be described as follows:",
|
| 261 |
+
"bbox": [
|
| 262 |
+
507,
|
| 263 |
+
560,
|
| 264 |
+
882,
|
| 265 |
+
769
|
| 266 |
+
],
|
| 267 |
+
"page_idx": 1
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"type": "equation",
|
| 271 |
+
"text": "\n$$\nY = W _ {0} ^ {l} X + \\Lambda_ {b} ^ {l} B ^ {l} \\Lambda_ {d} ^ {l} A ^ {l} X \\tag {1}\n$$\n",
|
| 272 |
+
"text_format": "latex",
|
| 273 |
+
"bbox": [
|
| 274 |
+
591,
|
| 275 |
+
777,
|
| 276 |
+
882,
|
| 277 |
+
796
|
| 278 |
+
],
|
| 279 |
+
"page_idx": 1
|
| 280 |
+
},
|
| 281 |
+
{
|
| 282 |
+
"type": "text",
|
| 283 |
+
"text": "Here, $A^l$ and $B^l$ are the trainable LoRA tensors of $lora_A^l$ and $lora_B^l$ , respectively. $\\Lambda_d$ and $\\Lambda_b$ are the vectors of $s_d$ , and $s_b$ , respectively. $W_0^l$ represents the frozen pre-trained weights. We use Kaiming Uniform initialization for $A^l$ and $B^l$ , and follow (Kopiczko et al., 2024) to initialize the vectors.",
|
| 284 |
+
"bbox": [
|
| 285 |
+
507,
|
| 286 |
+
804,
|
| 287 |
+
882,
|
| 288 |
+
903
|
| 289 |
+
],
|
| 290 |
+
"page_idx": 1
|
| 291 |
+
},
|
| 292 |
+
{
|
| 293 |
+
"type": "page_number",
|
| 294 |
+
"text": "162",
|
| 295 |
+
"bbox": [
|
| 296 |
+
485,
|
| 297 |
+
928,
|
| 298 |
+
515,
|
| 299 |
+
940
|
| 300 |
+
],
|
| 301 |
+
"page_idx": 1
|
| 302 |
+
},
|
| 303 |
+
{
|
| 304 |
+
"type": "table",
|
| 305 |
+
"img_path": "images/7fb60c6f94ff78d3711631c3b935a4dccfda947eeb2987a3588527f710df2c7e.jpg",
|
| 306 |
+
"table_caption": [
|
| 307 |
+
"Table 1: Comparison of different LoRA variants with DeBERTaV3 on the GLUE benchmark."
|
| 308 |
+
],
|
| 309 |
+
"table_footnote": [
|
| 310 |
+
"* The original paper has results with the RoBERTa, we generated the results with our implementation on DeBERTaV3 with the rank of 1024.",
|
| 311 |
+
"** As the number of trainable parameters is changed during training, we computed this by averaging over the whole training epochs over all datasets."
|
| 312 |
+
],
|
| 313 |
+
"table_body": "<table><tr><td>Method</td><td>#Params. ↓</td><td>CoLA ↑</td><td>SST-2 ↑</td><td>MRPC ↑</td><td>QNLI ↑</td><td>STS-B ↑</td><td>RTE ↑</td><td>MNLI ↑</td><td>QQP ↑</td><td>Avg. ↑</td></tr><tr><td>FFT</td><td>184M</td><td>69.21</td><td>95.64</td><td>89.22</td><td>93.78</td><td>91.59</td><td>82.49</td><td>89.98/89.95</td><td>92.05/89.31</td><td>87.82</td></tr><tr><td>LoRA (r = 8)</td><td>1.33M</td><td>69.73</td><td>95.57</td><td>89.71</td><td>93.76</td><td>91.86</td><td>85.32</td><td>90.47/90.46</td><td>91.95/89.26</td><td>88.38</td></tr><tr><td>AdaLoRA</td><td>1.27M</td><td>70.86</td><td>95.95</td><td>90.22</td><td>94.28</td><td>91.39</td><td>87.36</td><td>90.27/90.30</td><td>92.13/88.41</td><td>88.83</td></tr><tr><td>SoRA (r = 4)</td><td>0.47M</td><td>71.05</td><td>95.57</td><td>90.20</td><td>93.92</td><td>91.76</td><td>86.04</td><td>90.38/90.43</td><td>92.06/89.44</td><td>88.71</td></tr><tr><td>ELoRA*</td><td>0.16M</td><td>70.74</td><td>95.18</td><td>90.93</td><td>93.58</td><td>91.08</td><td>87.36</td><td>90.11/90.22</td><td>90.69/87.63</td><td>88.53</td></tr><tr><td>AFLoRA (r = 4)</td><td>0.14M**</td><td>72.01</td><td>96.22</td><td>91.91</td><td>94.42</td><td>91.84</td><td>88.09</td><td>89.88/90.17</td><td>90.81/87.77</td><td>89.23</td></tr></table>",
|
| 314 |
+
"bbox": [
|
| 315 |
+
188,
|
| 316 |
+
107,
|
| 317 |
+
811,
|
| 318 |
+
168
|
| 319 |
+
],
|
| 320 |
+
"page_idx": 2
|
| 321 |
+
},
|
| 322 |
+
{
|
| 323 |
+
"type": "text",
|
| 324 |
+
"text": "Adaptive Freezing. In pruning literature (Han et al., 2015; Molchanov et al., 2019; Zhang et al., 2022; Yin et al., 2024; Kundu et al., 2021, 2022), sensitivity is gauged to reflect weight variability, necessitating consideration of both the weights' magnitudes and their gradients. Small weight values suggest minimal impact, while minor gradient values indicate stability. Taking inspiration from this idea, here we introduce the concept of a \"freezing score\". However, unlike pruning where both magnitude and gradient play a critical role in identifying insignificant weight, we leverage only gradient as a proxy to compute the freezing score. This is because, we assume large magnitude weights with negligible change has the same priority to be frozen as that for small magnitude weights. This score quantifies the degree to which weights vary throughout the training process. Consequently, when the expected changes to the weights become negligible, we may consider them to be frozen, thereby saving computational resources and energy. The following equation describes the freezing score evaluation steps for a low-rank tensor $A^l$ .",
|
| 325 |
+
"bbox": [
|
| 326 |
+
110,
|
| 327 |
+
205,
|
| 328 |
+
489,
|
| 329 |
+
575
|
| 330 |
+
],
|
| 331 |
+
"page_idx": 2
|
| 332 |
+
},
|
| 333 |
+
{
|
| 334 |
+
"type": "equation",
|
| 335 |
+
"text": "\n$$\nI _ {A ^ {l}} = \\left| \\nabla \\mathcal {L} (\\boldsymbol {\\theta}) \\right|, \\bar {I} _ {A ^ {l}} ^ {(t)} = \\beta_ {1} \\bar {I} _ {A ^ {l}} ^ {(t - 1)} + (1 - \\beta_ {1}) I _ {A ^ {l}} ^ {(t)} \\tag {2}\n$$\n",
|
| 336 |
+
"text_format": "latex",
|
| 337 |
+
"bbox": [
|
| 338 |
+
146,
|
| 339 |
+
581,
|
| 340 |
+
487,
|
| 341 |
+
602
|
| 342 |
+
],
|
| 343 |
+
"page_idx": 2
|
| 344 |
+
},
|
| 345 |
+
{
|
| 346 |
+
"type": "equation",
|
| 347 |
+
"text": "\n$$\nU _ {A ^ {l}} ^ {(t)} = \\left| I _ {A ^ {l}} ^ {(t)} - \\bar {I} _ {A ^ {l}} ^ {(t)} \\right|, \\bar {U} _ {A ^ {l}} ^ {(t)} = \\beta_ {2} \\bar {U} _ {A ^ {l}} ^ {(t - 1)} + (1 - \\beta_ {2}) U _ {A ^ {l}} ^ {(t)} \\tag {3}\n$$\n",
|
| 348 |
+
"text_format": "latex",
|
| 349 |
+
"bbox": [
|
| 350 |
+
122,
|
| 351 |
+
609,
|
| 352 |
+
487,
|
| 353 |
+
632
|
| 354 |
+
],
|
| 355 |
+
"page_idx": 2
|
| 356 |
+
},
|
| 357 |
+
{
|
| 358 |
+
"type": "equation",
|
| 359 |
+
"text": "\n$$\ns _ {A ^ {l}} ^ {(t)} = \\operatorname {m e a n} \\left(\\bar {I} _ {A ^ {l}} ^ {(t)} \\circ \\bar {U} _ {A ^ {l}} ^ {(t)}\\right) \\tag {4}\n$$\n",
|
| 360 |
+
"text_format": "latex",
|
| 361 |
+
"bbox": [
|
| 362 |
+
218,
|
| 363 |
+
640,
|
| 364 |
+
487,
|
| 365 |
+
659
|
| 366 |
+
],
|
| 367 |
+
"page_idx": 2
|
| 368 |
+
},
|
| 369 |
+
{
|
| 370 |
+
"type": "text",
|
| 371 |
+
"text": "Here, for each projection tensor at iteration $t$ , we compute a smoothed gradient $(\\overline{I}_{A^l}^{(t)})$ and uncertainly tensor $(\\overline{U}_{A^l}^{(t)})$ , as shown in Eq. 2 and 3, respectively. We then evaluate the freezing score $s_{A^l}^{(t)}$ , as the mean of the tensor generated via Hadamard product (○) between $\\overline{I}_{A^l}^{(t)}$ and $\\overline{U}_{A^l}^{(t)}$ .",
|
| 372 |
+
"bbox": [
|
| 373 |
+
112,
|
| 374 |
+
665,
|
| 375 |
+
487,
|
| 376 |
+
775
|
| 377 |
+
],
|
| 378 |
+
"page_idx": 2
|
| 379 |
+
},
|
| 380 |
+
{
|
| 381 |
+
"type": "text",
|
| 382 |
+
"text": "To apply thresholding on the LoRA freezing scores, we use the cubic schedule as (Zhang et al., 2022). In specific, we keep the projection matrices trainable for the initial $t_i$ training steps, and then progressively freeze them by calculating the freezing fraction $r(t)$ as shown in Eq. 5. Finally, all the projection matrices freeze beyond $T - t_f$ steps. Note, at step $t$ , for a computed freezing fraction $k$ , we freeze the lowest $k\\%$ projection matrices.",
|
| 383 |
+
"bbox": [
|
| 384 |
+
110,
|
| 385 |
+
775,
|
| 386 |
+
489,
|
| 387 |
+
919
|
| 388 |
+
],
|
| 389 |
+
"page_idx": 2
|
| 390 |
+
},
|
| 391 |
+
{
|
| 392 |
+
"type": "image",
|
| 393 |
+
"img_path": "images/cbea3fc70ef506dc7929f99fe0bd63b8ebe91b4128d2684dfff6ab220a124f68.jpg",
|
| 394 |
+
"image_caption": [
|
| 395 |
+
"Figure 3: A comparison of various system performances between LoRA, ELoRA, and AFLoRA."
|
| 396 |
+
],
|
| 397 |
+
"image_footnote": [],
|
| 398 |
+
"bbox": [
|
| 399 |
+
505,
|
| 400 |
+
200,
|
| 401 |
+
630,
|
| 402 |
+
284
|
| 403 |
+
],
|
| 404 |
+
"page_idx": 2
|
| 405 |
+
},
|
| 406 |
+
{
|
| 407 |
+
"type": "image",
|
| 408 |
+
"img_path": "images/419769e68fccf33c6d9e14ddf9f5bb0fae0f1a0f361761eb150067d136d8108c.jpg",
|
| 409 |
+
"image_caption": [],
|
| 410 |
+
"image_footnote": [],
|
| 411 |
+
"bbox": [
|
| 412 |
+
633,
|
| 413 |
+
200,
|
| 414 |
+
757,
|
| 415 |
+
282
|
| 416 |
+
],
|
| 417 |
+
"page_idx": 2
|
| 418 |
+
},
|
| 419 |
+
{
|
| 420 |
+
"type": "image",
|
| 421 |
+
"img_path": "images/a873ef0d801d22d7ea4ba40bae8a25900aac8e691a6b99695079e68cd71ed79e.jpg",
|
| 422 |
+
"image_caption": [],
|
| 423 |
+
"image_footnote": [],
|
| 424 |
+
"bbox": [
|
| 425 |
+
763,
|
| 426 |
+
202,
|
| 427 |
+
885,
|
| 428 |
+
282
|
| 429 |
+
],
|
| 430 |
+
"page_idx": 2
|
| 431 |
+
},
|
| 432 |
+
{
|
| 433 |
+
"type": "equation",
|
| 434 |
+
"text": "\n$$\nr (t) = \\left\\{ \\begin{array}{l l} 0 & 0 \\leq t < t _ {i} \\\\ 1 - \\left(1 - \\frac {t - t _ {i}}{T - t _ {i} - t _ {f}}\\right) ^ {3} & t _ {i} \\leq t < T - t _ {f} \\\\ 1 & \\text {o t h e r w i s e} \\end{array} \\right. \\tag {5}\n$$\n",
|
| 435 |
+
"text_format": "latex",
|
| 436 |
+
"bbox": [
|
| 437 |
+
519,
|
| 438 |
+
351,
|
| 439 |
+
882,
|
| 440 |
+
411
|
| 441 |
+
],
|
| 442 |
+
"page_idx": 2
|
| 443 |
+
},
|
| 444 |
+
{
|
| 445 |
+
"type": "text",
|
| 446 |
+
"text": "where $t$ refers to current #step, $T$ is the total number of fine-tuning steps. We set $t_i$ to the steps corresponding to one epoch and set $t_f$ to 70% of the total training steps.",
|
| 447 |
+
"bbox": [
|
| 448 |
+
507,
|
| 449 |
+
426,
|
| 450 |
+
882,
|
| 451 |
+
491
|
| 452 |
+
],
|
| 453 |
+
"page_idx": 2
|
| 454 |
+
},
|
| 455 |
+
{
|
| 456 |
+
"type": "text",
|
| 457 |
+
"text": "5 Experiments",
|
| 458 |
+
"text_level": 1,
|
| 459 |
+
"bbox": [
|
| 460 |
+
507,
|
| 461 |
+
504,
|
| 462 |
+
655,
|
| 463 |
+
521
|
| 464 |
+
],
|
| 465 |
+
"page_idx": 2
|
| 466 |
+
},
|
| 467 |
+
{
|
| 468 |
+
"type": "text",
|
| 469 |
+
"text": "Models & Datasets. We use the PEFT framework of (Mangrulkar et al., 2022) and evaluate the fine-tuning performance of DeBERTaV3-base (He et al., 2020) to fine-tune on our framework on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018). The details of the hyperparameter settings for each dataset are listed in Appendix A.2.",
|
| 470 |
+
"bbox": [
|
| 471 |
+
507,
|
| 472 |
+
531,
|
| 473 |
+
882,
|
| 474 |
+
659
|
| 475 |
+
],
|
| 476 |
+
"page_idx": 2
|
| 477 |
+
},
|
| 478 |
+
{
|
| 479 |
+
"type": "text",
|
| 480 |
+
"text": "Performance Comparison. We benchmark the performance with AFLoRA and present a comparison with LoRA and its variants. For ELoRA, we reproduce the results at our end while the results for other methods are sourced from (Ding et al., 2023). As shown in Table 1, AFLoRA can achieve SoTA performance on the majority of datasets and on average while requiring similar and $9.5 \\times$ fewer average trainable parameters as compared to ELoRA and LoRA, respectively.",
|
| 481 |
+
"bbox": [
|
| 482 |
+
507,
|
| 483 |
+
677,
|
| 484 |
+
882,
|
| 485 |
+
839
|
| 486 |
+
],
|
| 487 |
+
"page_idx": 2
|
| 488 |
+
},
|
| 489 |
+
{
|
| 490 |
+
"type": "text",
|
| 491 |
+
"text": "Runtime & FLOPs Comparison. Fig. 3 shows the comparison of the normalized average training runtime, normalized FLOPs, and normalized trainable parameters. For AFLoRA, we average",
|
| 492 |
+
"bbox": [
|
| 493 |
+
507,
|
| 494 |
+
854,
|
| 495 |
+
882,
|
| 496 |
+
919
|
| 497 |
+
],
|
| 498 |
+
"page_idx": 2
|
| 499 |
+
},
|
| 500 |
+
{
|
| 501 |
+
"type": "page_number",
|
| 502 |
+
"text": "163",
|
| 503 |
+
"bbox": [
|
| 504 |
+
485,
|
| 505 |
+
928,
|
| 506 |
+
515,
|
| 507 |
+
940
|
| 508 |
+
],
|
| 509 |
+
"page_idx": 2
|
| 510 |
+
},
|
| 511 |
+
{
|
| 512 |
+
"type": "table",
|
| 513 |
+
"img_path": "images/b5b653bf0e702596267cd029d0f4ef3f22f902714c85fd289699e3ef30e3dd7d.jpg",
|
| 514 |
+
"table_caption": [
|
| 515 |
+
"Table 2: Results on auto-regressive complex reasoning task using LLM."
|
| 516 |
+
],
|
| 517 |
+
"table_footnote": [],
|
| 518 |
+
"table_body": "<table><tr><td>Method</td><td>Model</td><td>Low-rank val.</td><td># Params.</td><td>GSM8k Acc (%)</td></tr><tr><td>LoRA</td><td>LLaMA-7B</td><td>32</td><td>56.1M</td><td>37.50</td></tr><tr><td>ALFoRA (Ours)</td><td>LLaMA-7B</td><td>32</td><td>17.8M</td><td>38.59</td></tr></table>",
|
| 519 |
+
"bbox": [
|
| 520 |
+
131,
|
| 521 |
+
121,
|
| 522 |
+
467,
|
| 523 |
+
149
|
| 524 |
+
],
|
| 525 |
+
"page_idx": 3
|
| 526 |
+
},
|
| 527 |
+
{
|
| 528 |
+
"type": "table",
|
| 529 |
+
"img_path": "images/5a976d711dddad0c5b4e261b79b0baf63914de54fe95150e12e9b590517ea882.jpg",
|
| 530 |
+
"table_caption": [
|
| 531 |
+
"Table 3: Results on summarizing task using LLM. We use rouge 1 (R1) and rouge 2 (R2) scores to measure the summarization quality."
|
| 532 |
+
],
|
| 533 |
+
"table_footnote": [],
|
| 534 |
+
"table_body": "<table><tr><td>Method</td><td>Model</td><td>Low-rank val.</td><td># Params.</td><td>CNN/DailyMail (R1/R2)</td></tr><tr><td>LoRA</td><td>BART-Large</td><td>16</td><td>8.65M</td><td>43.96/21.06</td></tr><tr><td>ALFloRA (Ours)</td><td>BART-Large</td><td>16</td><td>5.10M</td><td>44.31/21.32</td></tr></table>",
|
| 535 |
+
"bbox": [
|
| 536 |
+
115,
|
| 537 |
+
211,
|
| 538 |
+
487,
|
| 539 |
+
241
|
| 540 |
+
],
|
| 541 |
+
"page_idx": 3
|
| 542 |
+
},
|
| 543 |
+
{
|
| 544 |
+
"type": "text",
|
| 545 |
+
"text": "the training time, FLOPs, and trainable parameters over six GLUE datasets (except the MNLI and QQP datasets). Note, that for LoRA and ELoRA, the trainable parameters and FLOPs remain fixed for all the datasets. We compute their average runtime the same way as ours. Compared to ELoRA we can yield up to $1.86 \\times$ and $2.96 \\times$ runtime and FLOPs improvement while remaining comparable with LoRA in these two metrics. Compared to LoRA we yield $9.5 \\times$ parameter reduction while remaining comparable with ELoRA. These results clearly demonstrate AFLoRA as a PEFT method that can yield similar parameter efficiency as ELoRA while costing no training overhead in FLOPs or time.",
|
| 546 |
+
"bbox": [
|
| 547 |
+
112,
|
| 548 |
+
266,
|
| 549 |
+
487,
|
| 550 |
+
506
|
| 551 |
+
],
|
| 552 |
+
"page_idx": 3
|
| 553 |
+
},
|
| 554 |
+
{
|
| 555 |
+
"type": "text",
|
| 556 |
+
"text": "Results with Large Language Models (LLMs). We now demonstrate the AFLoRA fine-tuning performance with two popular LLM variants, namely, LLaMA-7B (Touvron et al., 2023) and BARTLarge (Lewis et al., 2019) on GSM8k complex reasoning and CNN/Daily mail summarizing task, respectively. As demonstrated in Table 2, on GSM8k, AFLoRA yields improved accuracy of $1.09\\%$ while requiring $3.15 \\times$ fewer trainable parameters as compared to that with LoRA. On the CNN/DailyMail Summarizing task (Table 3), AFLoRA requires $1.69 \\times$ fewer trainable parameters to reach similar or improved rouge score values.",
|
| 557 |
+
"bbox": [
|
| 558 |
+
112,
|
| 559 |
+
524,
|
| 560 |
+
489,
|
| 561 |
+
733
|
| 562 |
+
],
|
| 563 |
+
"page_idx": 3
|
| 564 |
+
},
|
| 565 |
+
{
|
| 566 |
+
"type": "text",
|
| 567 |
+
"text": "6 Ablations and Discussions",
|
| 568 |
+
"text_level": 1,
|
| 569 |
+
"bbox": [
|
| 570 |
+
112,
|
| 571 |
+
747,
|
| 572 |
+
371,
|
| 573 |
+
762
|
| 574 |
+
],
|
| 575 |
+
"page_idx": 3
|
| 576 |
+
},
|
| 577 |
+
{
|
| 578 |
+
"type": "text",
|
| 579 |
+
"text": "We conducted our ablation studies on six GLUE benchmark datasets, omitting QQP and MNLI, the two most computationally demanding datasets.",
|
| 580 |
+
"bbox": [
|
| 581 |
+
112,
|
| 582 |
+
774,
|
| 583 |
+
487,
|
| 584 |
+
822
|
| 585 |
+
],
|
| 586 |
+
"page_idx": 3
|
| 587 |
+
},
|
| 588 |
+
{
|
| 589 |
+
"type": "text",
|
| 590 |
+
"text": "Do we really need adaptive freezing? We conducted experiments with all the LoRA PMs frozen (same as ELoRA), all the LoRA PMs trainable, and with our adaptive training of LoRA PMs. We use, $r = 4$ for the LoRA path,",
|
| 591 |
+
"bbox": [
|
| 592 |
+
112,
|
| 593 |
+
839,
|
| 594 |
+
489,
|
| 595 |
+
917
|
| 596 |
+
],
|
| 597 |
+
"page_idx": 3
|
| 598 |
+
},
|
| 599 |
+
{
|
| 600 |
+
"type": "table",
|
| 601 |
+
"img_path": "images/05fd56bedae331efed6cfeb5ecd064421a5d565c1fc675c65e585bd55e47b69b.jpg",
|
| 602 |
+
"table_caption": [
|
| 603 |
+
"Table 4: Ablation study on the trainability impact of the projection matrices (PM) of the AFLoRA module. We keep the vectors trainable throughout for all."
|
| 604 |
+
],
|
| 605 |
+
"table_footnote": [],
|
| 606 |
+
"table_body": "<table><tr><td>PM</td><td>#Params.</td><td>CoLA</td><td>SST-2</td><td>MRPC</td><td>QNLI</td><td>STS-B</td><td>RTE</td><td>Avg.</td></tr><tr><td>Trainable</td><td>0.45M</td><td>70.15</td><td>95.99</td><td>92.4</td><td>94.16</td><td>89.90</td><td>88.45</td><td>88.51</td></tr><tr><td>Frozen</td><td>0.08M</td><td>70.36</td><td>94.95</td><td>89.22</td><td>93.61</td><td>91.17</td><td>85.92</td><td>87.54</td></tr><tr><td>AFLoRA (Ours)</td><td>0.14M</td><td>72.01</td><td>96.22</td><td>91.91</td><td>94.42</td><td>91.84</td><td>88.09</td><td>89.23</td></tr></table>",
|
| 607 |
+
"bbox": [
|
| 608 |
+
512,
|
| 609 |
+
130,
|
| 610 |
+
885,
|
| 611 |
+
168
|
| 612 |
+
],
|
| 613 |
+
"page_idx": 3
|
| 614 |
+
},
|
| 615 |
+
{
|
| 616 |
+
"type": "image",
|
| 617 |
+
"img_path": "images/3243282b3874c92cbead3be0c719e14a829950bc5f79f3fd268469eb5355ee66.jpg",
|
| 618 |
+
"image_caption": [
|
| 619 |
+
"Figure 4: A comparison of performance outcomes utilizing three distinct freezing score methodologies."
|
| 620 |
+
],
|
| 621 |
+
"image_footnote": [],
|
| 622 |
+
"bbox": [
|
| 623 |
+
552,
|
| 624 |
+
179,
|
| 625 |
+
842,
|
| 626 |
+
288
|
| 627 |
+
],
|
| 628 |
+
"page_idx": 3
|
| 629 |
+
},
|
| 630 |
+
{
|
| 631 |
+
"type": "table",
|
| 632 |
+
"img_path": "images/b03da0bda8c4922cb06082fab91fb41a95a51b7fca3a6ccbe6f71de4a6e08cae.jpg",
|
| 633 |
+
"table_caption": [
|
| 634 |
+
"Table 5: Ablation study on making the PMs for different layer-types trainable."
|
| 635 |
+
],
|
| 636 |
+
"table_footnote": [],
|
| 637 |
+
"table_body": "<table><tr><td>FFN</td><td>Attn</td><td>CoLA</td><td>SST-2</td><td>MRPC</td><td>QNLI</td><td>STS-B</td><td>RTE</td><td>Avg.</td></tr><tr><td>✓</td><td>✓</td><td>70.33</td><td>95.76</td><td>90.93</td><td>94.36</td><td>91.44</td><td>87.37</td><td>88.48</td></tr><tr><td></td><td></td><td>0.15M</td><td>0.19M</td><td>0.18M</td><td>0.19M</td><td>0.16M</td><td>0.17M</td><td>0.17M</td></tr><tr><td>X</td><td>✓</td><td>71.118</td><td>95.986</td><td>89.951</td><td>94.12</td><td>91.39</td><td>86.28</td><td>88.14</td></tr><tr><td></td><td></td><td>0.11M</td><td>0.13M</td><td>0.12M</td><td>0.13M</td><td>0.12M</td><td>0.12M</td><td>0.12M</td></tr><tr><td>✓</td><td>X</td><td>72.01</td><td>96.22</td><td>91.91</td><td>94.42</td><td>91.84</td><td>88.09</td><td>89.02</td></tr><tr><td></td><td></td><td>0.13M</td><td>0.18M</td><td>0.13M</td><td>0.13M</td><td>0.13M</td><td>0.13M</td><td>0.14M</td></tr></table>",
|
| 638 |
+
"bbox": [
|
| 639 |
+
510,
|
| 640 |
+
380,
|
| 641 |
+
878,
|
| 642 |
+
444
|
| 643 |
+
],
|
| 644 |
+
"page_idx": 3
|
| 645 |
+
},
|
| 646 |
+
{
|
| 647 |
+
"type": "text",
|
| 648 |
+
"text": "for all. As we can see in Table 4, keeping the projection matrices trainable yields better average performance compared to keeping them frozen throughout. However, AFLoRA with adaptive freezing yields even better performance than keeping them trainable throughout, potentially highlighting its ability to regularize the fine-tuning against overfitting.",
|
| 649 |
+
"bbox": [
|
| 650 |
+
507,
|
| 651 |
+
468,
|
| 652 |
+
882,
|
| 653 |
+
596
|
| 654 |
+
],
|
| 655 |
+
"page_idx": 3
|
| 656 |
+
},
|
| 657 |
+
{
|
| 658 |
+
"type": "text",
|
| 659 |
+
"text": "Do we need to keep the PMs trainable for all layer types? There are two major layer types, FFN and the attention layers. We place the PMs in both along with the feature transformation vectors. We then study the necessity of keeping the PMs trainable in these two layer types. Note, here, we keep the vectors trainable for all throughout. As shown in Table 5, keeping the PMs trainable (and then adaptive freezing) in the FFN yields better performance compared to the alternatives. Note we keep the PMs in the attention layers frozen to random values. Interestingly, allowing all PMs to initially train and then adaptively freeze yields poorer performance than allowing them only in MLP. This may hint at the FFN weights to play a more important role in fine-tuning performance.",
|
| 660 |
+
"bbox": [
|
| 661 |
+
507,
|
| 662 |
+
613,
|
| 663 |
+
882,
|
| 664 |
+
870
|
| 665 |
+
],
|
| 666 |
+
"page_idx": 3
|
| 667 |
+
},
|
| 668 |
+
{
|
| 669 |
+
"type": "text",
|
| 670 |
+
"text": "Ablation with sensitivity choices. Fig. 4 presents ablation with three sensitivity scores based",
|
| 671 |
+
"bbox": [
|
| 672 |
+
507,
|
| 673 |
+
887,
|
| 674 |
+
880,
|
| 675 |
+
917
|
| 676 |
+
],
|
| 677 |
+
"page_idx": 3
|
| 678 |
+
},
|
| 679 |
+
{
|
| 680 |
+
"type": "page_number",
|
| 681 |
+
"text": "164",
|
| 682 |
+
"bbox": [
|
| 683 |
+
485,
|
| 684 |
+
928,
|
| 685 |
+
515,
|
| 686 |
+
940
|
| 687 |
+
],
|
| 688 |
+
"page_idx": 3
|
| 689 |
+
},
|
| 690 |
+
{
|
| 691 |
+
"type": "text",
|
| 692 |
+
"text": "on three different sensitivity choices, namely, $|grad(p)|$ (adopted in AFLoRA), $|p * grad(p)|$ and $|grad(p)/p|$ . On average, the freezing score adopted in AFLoRA, consistently yields better accuracy over the other two.",
|
| 693 |
+
"bbox": [
|
| 694 |
+
112,
|
| 695 |
+
84,
|
| 696 |
+
489,
|
| 697 |
+
165
|
| 698 |
+
],
|
| 699 |
+
"page_idx": 4
|
| 700 |
+
},
|
| 701 |
+
{
|
| 702 |
+
"type": "text",
|
| 703 |
+
"text": "Discussion on Freezing Trend. We use the RTE dataset as a case study, to understand the freezing trend of the PMs across different layers. Specifically, we illustrate the specific number of iterations required before freezing each component in Fig. 5. Interestingly, as can be seen from the figure, analysis reveals that the down-projection matrix parallel to the intermediate linear layer requires longer training duration prior to being frozen, as compared to the other PMs. This may potentially hint at the low approximation ability of the intermediate layer as compared to the second MLP in the FFN.",
|
| 704 |
+
"bbox": [
|
| 705 |
+
112,
|
| 706 |
+
181,
|
| 707 |
+
489,
|
| 708 |
+
388
|
| 709 |
+
],
|
| 710 |
+
"page_idx": 4
|
| 711 |
+
},
|
| 712 |
+
{
|
| 713 |
+
"type": "text",
|
| 714 |
+
"text": "7 Conclusions",
|
| 715 |
+
"text_level": 1,
|
| 716 |
+
"bbox": [
|
| 717 |
+
112,
|
| 718 |
+
404,
|
| 719 |
+
253,
|
| 720 |
+
420
|
| 721 |
+
],
|
| 722 |
+
"page_idx": 4
|
| 723 |
+
},
|
| 724 |
+
{
|
| 725 |
+
"type": "text",
|
| 726 |
+
"text": "In this paper, we presented AFLoRA, adaptive freezing of LoRA adapters that allow near-optimal trainability of the LoRA projection matrices and freezes them driven by a \"freezing score\" after certain fine-tuning steps. Compared to LoRA, AFLoRA can reduce the trainable parameters by up to $9.5\\times$ while yielding $0.85\\%$ average improved performance as evaluated on the GLUE benchmark.",
|
| 727 |
+
"bbox": [
|
| 728 |
+
112,
|
| 729 |
+
432,
|
| 730 |
+
489,
|
| 731 |
+
561
|
| 732 |
+
],
|
| 733 |
+
"page_idx": 4
|
| 734 |
+
},
|
| 735 |
+
{
|
| 736 |
+
"type": "text",
|
| 737 |
+
"text": "8 Limitation",
|
| 738 |
+
"text_level": 1,
|
| 739 |
+
"bbox": [
|
| 740 |
+
112,
|
| 741 |
+
574,
|
| 742 |
+
240,
|
| 743 |
+
589
|
| 744 |
+
],
|
| 745 |
+
"page_idx": 4
|
| 746 |
+
},
|
| 747 |
+
{
|
| 748 |
+
"type": "text",
|
| 749 |
+
"text": "In the ablation study with various freezing score metrics, we discovered that alternative scoring methods outperform ours on certain datasets, suggesting possible room for research in refining the freezing scores. This can further improve performance with AFLoRA. Additionally, the integration of AFLoRA in the adaptive rank evaluation framework can potentially open a new direction for PEFT that we consider as future research.",
|
| 750 |
+
"bbox": [
|
| 751 |
+
112,
|
| 752 |
+
601,
|
| 753 |
+
489,
|
| 754 |
+
745
|
| 755 |
+
],
|
| 756 |
+
"page_idx": 4
|
| 757 |
+
},
|
| 758 |
+
{
|
| 759 |
+
"type": "text",
|
| 760 |
+
"text": "References",
|
| 761 |
+
"text_level": 1,
|
| 762 |
+
"bbox": [
|
| 763 |
+
114,
|
| 764 |
+
775,
|
| 765 |
+
213,
|
| 766 |
+
790
|
| 767 |
+
],
|
| 768 |
+
"page_idx": 4
|
| 769 |
+
},
|
| 770 |
+
{
|
| 771 |
+
"type": "text",
|
| 772 |
+
"text": "Armen Aghajanyan, Luke Zettlemoyer, and Sonal Gupta. 2020. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. arXiv preprint arXiv:2012.13255.",
|
| 773 |
+
"bbox": [
|
| 774 |
+
112,
|
| 775 |
+
799,
|
| 776 |
+
487,
|
| 777 |
+
853
|
| 778 |
+
],
|
| 779 |
+
"page_idx": 4
|
| 780 |
+
},
|
| 781 |
+
{
|
| 782 |
+
"type": "text",
|
| 783 |
+
"text": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot",
|
| 784 |
+
"bbox": [
|
| 785 |
+
112,
|
| 786 |
+
865,
|
| 787 |
+
487,
|
| 788 |
+
917
|
| 789 |
+
],
|
| 790 |
+
"page_idx": 4
|
| 791 |
+
},
|
| 792 |
+
{
|
| 793 |
+
"type": "image",
|
| 794 |
+
"img_path": "images/443fbe5f03e3208bee5e01efe9f37900ea92116145a10702a501d8a05eeec175.jpg",
|
| 795 |
+
"image_caption": [
|
| 796 |
+
"Figure 5: Visualization of freezing iterations for each layer. 'out' and 'inter' refer to the second and the first MLP layer of the FFN, respectively. 'A' and 'B' represent the down-projection and up-projection matrix, respectively. The darker the color, the more iterations the matrix has to go through before freezing."
|
| 797 |
+
],
|
| 798 |
+
"image_footnote": [],
|
| 799 |
+
"bbox": [
|
| 800 |
+
542,
|
| 801 |
+
80,
|
| 802 |
+
853,
|
| 803 |
+
168
|
| 804 |
+
],
|
| 805 |
+
"page_idx": 4
|
| 806 |
+
},
|
| 807 |
+
{
|
| 808 |
+
"type": "text",
|
| 809 |
+
"text": "learners. Advances in neural information processing systems, 33:1877-1901.",
|
| 810 |
+
"bbox": [
|
| 811 |
+
507,
|
| 812 |
+
279,
|
| 813 |
+
882,
|
| 814 |
+
306
|
| 815 |
+
],
|
| 816 |
+
"page_idx": 4
|
| 817 |
+
},
|
| 818 |
+
{
|
| 819 |
+
"type": "text",
|
| 820 |
+
"text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.",
|
| 821 |
+
"bbox": [
|
| 822 |
+
507,
|
| 823 |
+
315,
|
| 824 |
+
882,
|
| 825 |
+
369
|
| 826 |
+
],
|
| 827 |
+
"page_idx": 4
|
| 828 |
+
},
|
| 829 |
+
{
|
| 830 |
+
"type": "text",
|
| 831 |
+
"text": "Ning Ding, Xingtai Lv, Qiaosen Wang, Yulin Chen, Bowen Zhou, Zhiyuan Liu, and Maosong Sun. 2023. Sparse low-rank adaptation of pre-trained language models. arXiv preprint arXiv:2311.11696.",
|
| 832 |
+
"bbox": [
|
| 833 |
+
507,
|
| 834 |
+
376,
|
| 835 |
+
882,
|
| 836 |
+
431
|
| 837 |
+
],
|
| 838 |
+
"page_idx": 4
|
| 839 |
+
},
|
| 840 |
+
{
|
| 841 |
+
"type": "text",
|
| 842 |
+
"text": "Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for efficient neural network. Advances in neural information processing systems, 28.",
|
| 843 |
+
"bbox": [
|
| 844 |
+
507,
|
| 845 |
+
439,
|
| 846 |
+
882,
|
| 847 |
+
494
|
| 848 |
+
],
|
| 849 |
+
"page_idx": 4
|
| 850 |
+
},
|
| 851 |
+
{
|
| 852 |
+
"type": "text",
|
| 853 |
+
"text": "Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654.",
|
| 854 |
+
"bbox": [
|
| 855 |
+
507,
|
| 856 |
+
502,
|
| 857 |
+
882,
|
| 858 |
+
555
|
| 859 |
+
],
|
| 860 |
+
"page_idx": 4
|
| 861 |
+
},
|
| 862 |
+
{
|
| 863 |
+
"type": "text",
|
| 864 |
+
"text": "Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, and Luo Si. 2021. On the effectiveness of adapter-based tuning for pretrained language model adaptation. arXiv preprint arXiv:2106.03164.",
|
| 865 |
+
"bbox": [
|
| 866 |
+
507,
|
| 867 |
+
564,
|
| 868 |
+
882,
|
| 869 |
+
631
|
| 870 |
+
],
|
| 871 |
+
"page_idx": 4
|
| 872 |
+
},
|
| 873 |
+
{
|
| 874 |
+
"type": "text",
|
| 875 |
+
"text": "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790-2799. PMLR.",
|
| 876 |
+
"bbox": [
|
| 877 |
+
507,
|
| 878 |
+
639,
|
| 879 |
+
882,
|
| 880 |
+
717
|
| 881 |
+
],
|
| 882 |
+
"page_idx": 4
|
| 883 |
+
},
|
| 884 |
+
{
|
| 885 |
+
"type": "text",
|
| 886 |
+
"text": "Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.",
|
| 887 |
+
"bbox": [
|
| 888 |
+
507,
|
| 889 |
+
727,
|
| 890 |
+
882,
|
| 891 |
+
781
|
| 892 |
+
],
|
| 893 |
+
"page_idx": 4
|
| 894 |
+
},
|
| 895 |
+
{
|
| 896 |
+
"type": "text",
|
| 897 |
+
"text": "Hao Kang, Qingru Zhang, Souvik Kundu, Geonhwa Jeong, Zaoxing Liu, Tushar Krishna, and Tuo Zhao. 2024. Gear: An efficient kv cache compression recipe for near-lossless generative inference of llm. arXiv preprint arXiv:2403.05527.",
|
| 898 |
+
"bbox": [
|
| 899 |
+
507,
|
| 900 |
+
789,
|
| 901 |
+
882,
|
| 902 |
+
856
|
| 903 |
+
],
|
| 904 |
+
"page_idx": 4
|
| 905 |
+
},
|
| 906 |
+
{
|
| 907 |
+
"type": "text",
|
| 908 |
+
"text": "Dawid Jan Kopiczko, Tijmen Blankevoort, and Yuki M Asano. 2024. ELoRA: Efficient low-rank adaptation with random matrices. In The Twelfth International Conference on Learning Representations.",
|
| 909 |
+
"bbox": [
|
| 910 |
+
507,
|
| 911 |
+
865,
|
| 912 |
+
882,
|
| 913 |
+
917
|
| 914 |
+
],
|
| 915 |
+
"page_idx": 4
|
| 916 |
+
},
|
| 917 |
+
{
|
| 918 |
+
"type": "page_number",
|
| 919 |
+
"text": "165",
|
| 920 |
+
"bbox": [
|
| 921 |
+
485,
|
| 922 |
+
928,
|
| 923 |
+
515,
|
| 924 |
+
940
|
| 925 |
+
],
|
| 926 |
+
"page_idx": 4
|
| 927 |
+
},
|
| 928 |
+
{
|
| 929 |
+
"type": "text",
|
| 930 |
+
"text": "Souvik Kundu, Mahdi Nazemi, Peter A Beerel, and Massoud Pedram. 2021. Dnr: A tunable robust pruning framework through dynamic network rewiring of dnns. In Proceedings of the 26th Asia and South Pacific Design Automation Conference, pages 344-350.",
|
| 931 |
+
"bbox": [
|
| 932 |
+
112,
|
| 933 |
+
85,
|
| 934 |
+
487,
|
| 935 |
+
151
|
| 936 |
+
],
|
| 937 |
+
"page_idx": 5
|
| 938 |
+
},
|
| 939 |
+
{
|
| 940 |
+
"type": "text",
|
| 941 |
+
"text": "Souvik Kundu, Sharath Sridhar Nittur, Maciej Szankin, and Sairam Sundaresan. 2024. Sensi-bert: Towards sensitivity driven fine-tuning for parameter-efficient bert. ICASSP.",
|
| 942 |
+
"bbox": [
|
| 943 |
+
112,
|
| 944 |
+
161,
|
| 945 |
+
489,
|
| 946 |
+
212
|
| 947 |
+
],
|
| 948 |
+
"page_idx": 5
|
| 949 |
+
},
|
| 950 |
+
{
|
| 951 |
+
"type": "text",
|
| 952 |
+
"text": "Souvik Kundu, Shikai Wang, Qirui Sun, Peter A Beerel, and Massoud Pedram. 2022. Bmpq: bit-gradient sensitivity-driven mixed-precision quantization of dnns from scratch. In 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 588-591. IEEE.",
|
| 953 |
+
"bbox": [
|
| 954 |
+
112,
|
| 955 |
+
223,
|
| 956 |
+
487,
|
| 957 |
+
300
|
| 958 |
+
],
|
| 959 |
+
"page_idx": 5
|
| 960 |
+
},
|
| 961 |
+
{
|
| 962 |
+
"type": "text",
|
| 963 |
+
"text": "Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691.",
|
| 964 |
+
"bbox": [
|
| 965 |
+
112,
|
| 966 |
+
311,
|
| 967 |
+
487,
|
| 968 |
+
351
|
| 969 |
+
],
|
| 970 |
+
"page_idx": 5
|
| 971 |
+
},
|
| 972 |
+
{
|
| 973 |
+
"type": "text",
|
| 974 |
+
"text": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.",
|
| 975 |
+
"bbox": [
|
| 976 |
+
112,
|
| 977 |
+
361,
|
| 978 |
+
487,
|
| 979 |
+
439
|
| 980 |
+
],
|
| 981 |
+
"page_idx": 5
|
| 982 |
+
},
|
| 983 |
+
{
|
| 984 |
+
"type": "text",
|
| 985 |
+
"text": "Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190.",
|
| 986 |
+
"bbox": [
|
| 987 |
+
112,
|
| 988 |
+
449,
|
| 989 |
+
487,
|
| 990 |
+
488
|
| 991 |
+
],
|
| 992 |
+
"page_idx": 5
|
| 993 |
+
},
|
| 994 |
+
{
|
| 995 |
+
"type": "text",
|
| 996 |
+
"text": "Yixiao Li, Yifan Yu, Qingru Zhang, Chen Liang, Pengcheng He, Weizhu Chen, and Tuo Zhao. 2023. Losparse: Structured compression of large language models based on low-rank and sparse approximation. arXiv preprint arXiv:2306.11222.",
|
| 997 |
+
"bbox": [
|
| 998 |
+
112,
|
| 999 |
+
499,
|
| 1000 |
+
487,
|
| 1001 |
+
563
|
| 1002 |
+
],
|
| 1003 |
+
"page_idx": 5
|
| 1004 |
+
},
|
| 1005 |
+
{
|
| 1006 |
+
"type": "text",
|
| 1007 |
+
"text": "Sourab Mangrulkar, Sylvain Gugger, Lysandre Debut, Younes Belkada, Sayak Paul, and Benjamin Bossan. 2022. Peft: State-of-the-art parameter-efficient fine-tuning methods. https://github.com/huggingface/peft.",
|
| 1008 |
+
"bbox": [
|
| 1009 |
+
112,
|
| 1010 |
+
574,
|
| 1011 |
+
487,
|
| 1012 |
+
639
|
| 1013 |
+
],
|
| 1014 |
+
"page_idx": 5
|
| 1015 |
+
},
|
| 1016 |
+
{
|
| 1017 |
+
"type": "text",
|
| 1018 |
+
"text": "Pavlo Molchanov, Arun Mallya, Stephen Tyree, Iuri Frosio, and Jan Kautz. 2019. Importance estimation for neural network pruning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11264-11272.",
|
| 1019 |
+
"bbox": [
|
| 1020 |
+
112,
|
| 1021 |
+
649,
|
| 1022 |
+
487,
|
| 1023 |
+
715
|
| 1024 |
+
],
|
| 1025 |
+
"page_idx": 5
|
| 1026 |
+
},
|
| 1027 |
+
{
|
| 1028 |
+
"type": "text",
|
| 1029 |
+
"text": "Sharath Nittur Sridhar, Souvik Kundu, Sairam Sundaresan, Maciej Szankin, and Anthony Sarah. 2023. Instatute: Instantaneous neural architecture search during fine-tuning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1523-1527.",
|
| 1030 |
+
"bbox": [
|
| 1031 |
+
112,
|
| 1032 |
+
724,
|
| 1033 |
+
487,
|
| 1034 |
+
802
|
| 1035 |
+
],
|
| 1036 |
+
"page_idx": 5
|
| 1037 |
+
},
|
| 1038 |
+
{
|
| 1039 |
+
"type": "text",
|
| 1040 |
+
"text": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan,",
|
| 1041 |
+
"bbox": [
|
| 1042 |
+
112,
|
| 1043 |
+
813,
|
| 1044 |
+
487,
|
| 1045 |
+
917
|
| 1046 |
+
],
|
| 1047 |
+
"page_idx": 5
|
| 1048 |
+
},
|
| 1049 |
+
{
|
| 1050 |
+
"type": "text",
|
| 1051 |
+
"text": "Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models.",
|
| 1052 |
+
"bbox": [
|
| 1053 |
+
507,
|
| 1054 |
+
85,
|
| 1055 |
+
882,
|
| 1056 |
+
267
|
| 1057 |
+
],
|
| 1058 |
+
"page_idx": 5
|
| 1059 |
+
},
|
| 1060 |
+
{
|
| 1061 |
+
"type": "text",
|
| 1062 |
+
"text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.",
|
| 1063 |
+
"bbox": [
|
| 1064 |
+
507,
|
| 1065 |
+
278,
|
| 1066 |
+
882,
|
| 1067 |
+
342
|
| 1068 |
+
],
|
| 1069 |
+
"page_idx": 5
|
| 1070 |
+
},
|
| 1071 |
+
{
|
| 1072 |
+
"type": "text",
|
| 1073 |
+
"text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.",
|
| 1074 |
+
"bbox": [
|
| 1075 |
+
507,
|
| 1076 |
+
354,
|
| 1077 |
+
882,
|
| 1078 |
+
497
|
| 1079 |
+
],
|
| 1080 |
+
"page_idx": 5
|
| 1081 |
+
},
|
| 1082 |
+
{
|
| 1083 |
+
"type": "text",
|
| 1084 |
+
"text": "Lu Yin, Ajay Jaiswal, Shiwei Liu, Souvik Kundu, and Zhangyang Wang. 2024. Pruning small pre-trained weights irreversibly and monotonically impairs \"difficult\" downstream tasks in llms.",
|
| 1085 |
+
"bbox": [
|
| 1086 |
+
507,
|
| 1087 |
+
508,
|
| 1088 |
+
882,
|
| 1089 |
+
558
|
| 1090 |
+
],
|
| 1091 |
+
"page_idx": 5
|
| 1092 |
+
},
|
| 1093 |
+
{
|
| 1094 |
+
"type": "text",
|
| 1095 |
+
"text": "Qingru Zhang, Minshuo Chen, Alexander Bukharin, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. 2023. Adaptive budget allocation for parameter-efficient fine-tuning. In The Eleventh International Conference on Learning Representations.",
|
| 1096 |
+
"bbox": [
|
| 1097 |
+
507,
|
| 1098 |
+
571,
|
| 1099 |
+
882,
|
| 1100 |
+
636
|
| 1101 |
+
],
|
| 1102 |
+
"page_idx": 5
|
| 1103 |
+
},
|
| 1104 |
+
{
|
| 1105 |
+
"type": "text",
|
| 1106 |
+
"text": "Qingru Zhang, Simiao Zuo, Chen Liang, Alexander Bukharin, Pengcheng He, Weizhu Chen, and Tuo Zhao. 2022. Platon: Pruning large transformer models with upper confidence bound of weight importance. In International Conference on Machine Learning, pages 26809-26823. PMLR.",
|
| 1107 |
+
"bbox": [
|
| 1108 |
+
507,
|
| 1109 |
+
646,
|
| 1110 |
+
882,
|
| 1111 |
+
724
|
| 1112 |
+
],
|
| 1113 |
+
"page_idx": 5
|
| 1114 |
+
},
|
| 1115 |
+
{
|
| 1116 |
+
"type": "page_number",
|
| 1117 |
+
"text": "166",
|
| 1118 |
+
"bbox": [
|
| 1119 |
+
487,
|
| 1120 |
+
928,
|
| 1121 |
+
515,
|
| 1122 |
+
939
|
| 1123 |
+
],
|
| 1124 |
+
"page_idx": 5
|
| 1125 |
+
},
|
| 1126 |
+
{
|
| 1127 |
+
"type": "text",
|
| 1128 |
+
"text": "A Appendix",
|
| 1129 |
+
"text_level": 1,
|
| 1130 |
+
"bbox": [
|
| 1131 |
+
114,
|
| 1132 |
+
84,
|
| 1133 |
+
238,
|
| 1134 |
+
99
|
| 1135 |
+
],
|
| 1136 |
+
"page_idx": 6
|
| 1137 |
+
},
|
| 1138 |
+
{
|
| 1139 |
+
"type": "text",
|
| 1140 |
+
"text": "A.1 Dataset",
|
| 1141 |
+
"text_level": 1,
|
| 1142 |
+
"bbox": [
|
| 1143 |
+
114,
|
| 1144 |
+
109,
|
| 1145 |
+
225,
|
| 1146 |
+
122
|
| 1147 |
+
],
|
| 1148 |
+
"page_idx": 6
|
| 1149 |
+
},
|
| 1150 |
+
{
|
| 1151 |
+
"type": "text",
|
| 1152 |
+
"text": "The details of train/test/dev splits and the evaluation metric of the GLUE (Wang et al., 2018) dataset are reported in Table 6. We use the Huggingface Transformers library (Wolf et al., 2020) to source all the datasets.",
|
| 1153 |
+
"bbox": [
|
| 1154 |
+
112,
|
| 1155 |
+
130,
|
| 1156 |
+
487,
|
| 1157 |
+
209
|
| 1158 |
+
],
|
| 1159 |
+
"page_idx": 6
|
| 1160 |
+
},
|
| 1161 |
+
{
|
| 1162 |
+
"type": "text",
|
| 1163 |
+
"text": "Table 6: Statistics of the GLUE benchmark datasets.\"Mcc\", \"Acc\", \"F1\" and \"Pear\" represent Matthews correlation coefficient, accuracy, the F1 score and the Pearson correlation coefficient respectively. And \"Acc\" for the MNLI dataset contains the accuracy for the matched and mismatched subset of the datasets.",
|
| 1164 |
+
"bbox": [
|
| 1165 |
+
112,
|
| 1166 |
+
219,
|
| 1167 |
+
485,
|
| 1168 |
+
303
|
| 1169 |
+
],
|
| 1170 |
+
"page_idx": 6
|
| 1171 |
+
},
|
| 1172 |
+
{
|
| 1173 |
+
"type": "table",
|
| 1174 |
+
"img_path": "images/17c8d8405e46adc6a4c8dc5ca944c9944b72d0862bb88f7ae69f205274bafbb0.jpg",
|
| 1175 |
+
"table_caption": [],
|
| 1176 |
+
"table_footnote": [],
|
| 1177 |
+
"table_body": "<table><tr><td>Dataset</td><td>#Train</td><td>#Valid</td><td>#Test</td><td>Metric</td></tr><tr><td>CoLA</td><td>8.5k</td><td>1,043</td><td>1,063</td><td>Mcc</td></tr><tr><td>SST-2</td><td>67k</td><td>872</td><td>1.8k</td><td>Acc</td></tr><tr><td>MRPC</td><td>3.7k</td><td>408</td><td>1.7k</td><td>Acc</td></tr><tr><td>QQP</td><td>364k</td><td>40.4k</td><td>391k</td><td>Acc/F1</td></tr><tr><td>STS-B</td><td>5.7k</td><td>1.5k</td><td>1.4k</td><td>Pear</td></tr><tr><td>MNLI</td><td>393k</td><td>9.8k/9.8k</td><td>9.8k/9.8k</td><td>Acc</td></tr><tr><td>QNLI</td><td>105k</td><td>5.5k</td><td>5.5k</td><td>Acc</td></tr><tr><td>RTE</td><td>2.5k</td><td>277</td><td>3k</td><td>Acc</td></tr></table>",
|
| 1178 |
+
"bbox": [
|
| 1179 |
+
188,
|
| 1180 |
+
305,
|
| 1181 |
+
411,
|
| 1182 |
+
385
|
| 1183 |
+
],
|
| 1184 |
+
"page_idx": 6
|
| 1185 |
+
},
|
| 1186 |
+
{
|
| 1187 |
+
"type": "text",
|
| 1188 |
+
"text": "A.2 Hyperparameter configuration",
|
| 1189 |
+
"text_level": 1,
|
| 1190 |
+
"bbox": [
|
| 1191 |
+
114,
|
| 1192 |
+
405,
|
| 1193 |
+
406,
|
| 1194 |
+
420
|
| 1195 |
+
],
|
| 1196 |
+
"page_idx": 6
|
| 1197 |
+
},
|
| 1198 |
+
{
|
| 1199 |
+
"type": "text",
|
| 1200 |
+
"text": "Table 7 shows the main hyper-parameter set up in this paper. Besides them, we use the same optimizer, warmup Ratio, and LR schedule as Hu et al. (2021). We use NVIDIA RTX A6000 (maximum GPU memory=49140MB) to measure the training runtime. For all experiments, we run 5 times using different random seeds and report the average results.",
|
| 1201 |
+
"bbox": [
|
| 1202 |
+
112,
|
| 1203 |
+
426,
|
| 1204 |
+
487,
|
| 1205 |
+
552
|
| 1206 |
+
],
|
| 1207 |
+
"page_idx": 6
|
| 1208 |
+
},
|
| 1209 |
+
{
|
| 1210 |
+
"type": "table",
|
| 1211 |
+
"img_path": "images/3a3f0a646c2ad05521f85808b58fd7a5818b8f37f568d876b61e75968ec0a3ff.jpg",
|
| 1212 |
+
"table_caption": [
|
| 1213 |
+
"Table 7: Hyperparameter setup for all eight datasets in GLUE benchmark"
|
| 1214 |
+
],
|
| 1215 |
+
"table_footnote": [
|
| 1216 |
+
"* \"Clf. Lr.* means the learning rate for the classification head."
|
| 1217 |
+
],
|
| 1218 |
+
"table_body": "<table><tr><td>Hyperparameter</td><td>CoLA</td><td>SST-2</td><td>MRPC</td><td>QNLI</td><td>STS-B</td><td>RTE</td><td>MNLI</td><td>QQP</td></tr><tr><td># epochs</td><td>20</td><td>10</td><td>20</td><td>10</td><td>20</td><td>20</td><td>10</td><td>10</td></tr><tr><td>Batch size</td><td></td><td></td><td></td><td>64</td><td></td><td></td><td></td><td></td></tr><tr><td>Max Seq. Len.</td><td></td><td></td><td></td><td>256</td><td></td><td></td><td></td><td></td></tr><tr><td>Clf. Lr.*</td><td>4E-2</td><td>4E-3</td><td>8E-2</td><td>4E-3</td><td>2E-2</td><td>4E-2</td><td>4E-3</td><td>4E-3</td></tr><tr><td>Learning rate</td><td>1E-2</td><td>4E-3</td><td>1E-2</td><td>1E-3</td><td>2E-3</td><td>1E-3</td><td>1E-3</td><td>4E-3</td></tr><tr><td>ti(epoch)</td><td></td><td></td><td></td><td>1</td><td></td><td></td><td></td><td></td></tr><tr><td>tf(epoch)</td><td>14</td><td>7</td><td>14</td><td>7</td><td>14</td><td>14</td><td>7</td><td>7</td></tr><tr><td>β1</td><td></td><td></td><td></td><td>0.85</td><td></td><td></td><td></td><td></td></tr><tr><td>β2</td><td></td><td></td><td></td><td>0.95</td><td></td><td></td><td></td><td></td></tr></table>",
|
| 1219 |
+
"bbox": [
|
| 1220 |
+
121,
|
| 1221 |
+
598,
|
| 1222 |
+
475,
|
| 1223 |
+
687
|
| 1224 |
+
],
|
| 1225 |
+
"page_idx": 6
|
| 1226 |
+
},
|
| 1227 |
+
{
|
| 1228 |
+
"type": "text",
|
| 1229 |
+
"text": "A.3 Ablation study on if freezing the two projection matrices in the same layer simultaneously",
|
| 1230 |
+
"text_level": 1,
|
| 1231 |
+
"bbox": [
|
| 1232 |
+
114,
|
| 1233 |
+
728,
|
| 1234 |
+
453,
|
| 1235 |
+
775
|
| 1236 |
+
],
|
| 1237 |
+
"page_idx": 6
|
| 1238 |
+
},
|
| 1239 |
+
{
|
| 1240 |
+
"type": "text",
|
| 1241 |
+
"text": "We study the value of freezing both projection matrices in the same layer simultaneously. The results, depicted in Table 8, demonstrate that freezing the projection matrices separately yields consistently superior performance compared to freezing them simultaneously.",
|
| 1242 |
+
"bbox": [
|
| 1243 |
+
112,
|
| 1244 |
+
781,
|
| 1245 |
+
487,
|
| 1246 |
+
876
|
| 1247 |
+
],
|
| 1248 |
+
"page_idx": 6
|
| 1249 |
+
},
|
| 1250 |
+
{
|
| 1251 |
+
"type": "table",
|
| 1252 |
+
"img_path": "images/6245669ecdece63901485b4ffddfa62df9a3887b41c73fcbe7affeb312972a9b.jpg",
|
| 1253 |
+
"table_caption": [
|
| 1254 |
+
"Table 8: Ablation study on whether freezing the two projection matrices in the same layer simultaneously or independently."
|
| 1255 |
+
],
|
| 1256 |
+
"table_footnote": [],
|
| 1257 |
+
"table_body": "<table><tr><td></td><td>Simultaneously</td><td>Independently</td></tr><tr><td>CoLA</td><td>67.90</td><td>72.01</td></tr><tr><td>SST-2</td><td>95.87</td><td>96.22</td></tr><tr><td>MRPC</td><td>91.67</td><td>91.91</td></tr><tr><td>STS-B</td><td>91.64</td><td>91.84</td></tr><tr><td>QNLI</td><td>94.20</td><td>94.42</td></tr><tr><td>RTE</td><td>87.00</td><td>88.09</td></tr><tr><td>Avg.</td><td>88.05</td><td>89.02</td></tr><tr><td>#Params</td><td>0.146M</td><td>0.138M</td></tr></table>",
|
| 1258 |
+
"bbox": [
|
| 1259 |
+
559,
|
| 1260 |
+
464,
|
| 1261 |
+
828,
|
| 1262 |
+
576
|
| 1263 |
+
],
|
| 1264 |
+
"page_idx": 6
|
| 1265 |
+
},
|
| 1266 |
+
{
|
| 1267 |
+
"type": "page_number",
|
| 1268 |
+
"text": "167",
|
| 1269 |
+
"bbox": [
|
| 1270 |
+
487,
|
| 1271 |
+
928,
|
| 1272 |
+
515,
|
| 1273 |
+
940
|
| 1274 |
+
],
|
| 1275 |
+
"page_idx": 6
|
| 1276 |
+
}
|
| 1277 |
+
]
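
The file above is a flat JSON array of layout blocks (text, image, table, equation, page_number), each carrying a "bbox", a "page_idx", and, for figures and tables, an "img_path" plus caption lists. Below is a minimal illustrative sketch of how such a content_list file could be loaded and its figure/table entries collected; it is only an example, and the local filename is an assumption rather than something defined by this commit.

```python
import json

# Hypothetical local copy of the content_list file shown above.
path = "cedf799c-cbb4-4ca7-91c5-53457a9975a6_content_list.json"

with open(path, encoding="utf-8") as f:
    blocks = json.load(f)  # flat list of block dicts

# Print every figure/table block with its page index, image path, and caption.
for block in blocks:
    if block.get("type") in ("image", "table"):
        caption = " ".join(block.get("image_caption") or block.get("table_caption") or [])
        print(block["page_idx"], block.get("img_path"), caption)
```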
|
2024/AFLoRA_ Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models/cedf799c-cbb4-4ca7-91c5-53457a9975a6_model.json
ADDED
|
@@ -0,0 +1,1380 @@
| 1 |
+
[
|
| 2 |
+
[
|
| 3 |
+
{
|
| 4 |
+
"type": "title",
|
| 5 |
+
"bbox": [
|
| 6 |
+
0.146,
|
| 7 |
+
0.09,
|
| 8 |
+
0.853,
|
| 9 |
+
0.131
|
| 10 |
+
],
|
| 11 |
+
"angle": 0,
|
| 12 |
+
"content": "AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models"
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"bbox": [
|
| 17 |
+
0.121,
|
| 18 |
+
0.149,
|
| 19 |
+
0.881,
|
| 20 |
+
0.166
|
| 21 |
+
],
|
| 22 |
+
"angle": 0,
|
| 23 |
+
"content": "Zeyu Liu\\(^{1,1}\\) Souvik Kundu\\(^{1,2}\\) Anni Li\\(^{1}\\) Junrui Wan\\(^{1}\\) Lianghao Jiang\\(^{1}\\) Peter A. Beerel\\(^{1}\\)"
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"bbox": [
|
| 28 |
+
0.208,
|
| 29 |
+
0.167,
|
| 30 |
+
0.794,
|
| 31 |
+
0.183
|
| 32 |
+
],
|
| 33 |
+
"angle": 0,
|
| 34 |
+
"content": "<sup>1</sup> University of Southern California, USA <sup>2</sup> Intel Labs, San Diego, USA"
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"bbox": [
|
| 39 |
+
0.103,
|
| 40 |
+
0.184,
|
| 41 |
+
0.901,
|
| 42 |
+
0.218
|
| 43 |
+
],
|
| 44 |
+
"angle": 0,
|
| 45 |
+
"content": "{liuzeyu, annili, junruiwa, ljiang40, pabeerel}@usc.edu souvikk.kundu@intel.com \n†Equally contributing authors"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "title",
|
| 49 |
+
"bbox": [
|
| 50 |
+
0.261,
|
| 51 |
+
0.253,
|
| 52 |
+
0.341,
|
| 53 |
+
0.268
|
| 54 |
+
],
|
| 55 |
+
"angle": 0,
|
| 56 |
+
"content": "Abstract"
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "text",
|
| 60 |
+
"bbox": [
|
| 61 |
+
0.141,
|
| 62 |
+
0.283,
|
| 63 |
+
0.461,
|
| 64 |
+
0.653
|
| 65 |
+
],
|
| 66 |
+
"angle": 0,
|
| 67 |
+
"content": "We present a novel parameter-efficient finetuning (PEFT) method, dubbed as adaptive freezing of low rank adaptation (AFLoRA). Specifically, for each pre-trained frozen weight tensor, we add a parallel path of trainable low-rank matrices, namely a down-projection and an up-projection matrix, each of which is followed by a feature transformation vector. Based on a novel freezing score, we then incrementally freeze these projection matrices during fine-tuning to reduce the computation and alleviate over-fitting. Our experimental results demonstrate that we can achieve state-of-the-art performance with an average improvement of up to \\(1.09\\%\\) as evaluated on the GLUE and GSM8k benchmark while yielding up to \\(9.5 \\times\\) fewer average trainable parameters. While compared in terms of runtime, AFLoRA can yield up to \\(1.86 \\times\\) improvement as opposed to similar PEFT alternatives. Besides the practical utility of our approach, we provide insights on the trainability requirements of LoRA paths at different modules and the freezing schedule for the different projection matrices. Code is released at: https://github.com/ zeyuliu1037/AFLoRA/tree/main."
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "title",
|
| 71 |
+
"bbox": [
|
| 72 |
+
0.115,
|
| 73 |
+
0.667,
|
| 74 |
+
0.262,
|
| 75 |
+
0.683
|
| 76 |
+
],
|
| 77 |
+
"angle": 0,
|
| 78 |
+
"content": "1 Introduction"
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"bbox": [
|
| 83 |
+
0.113,
|
| 84 |
+
0.694,
|
| 85 |
+
0.49,
|
| 86 |
+
0.871
|
| 87 |
+
],
|
| 88 |
+
"angle": 0,
|
| 89 |
+
"content": "Pre-trained language models such as BERT (Devlin et al., 2018), GPT-3 (Brown et al., 2020), and LLaMA2 (Touvron et al., 2023) have demonstrated commendable performance on various natural language processing (NLP) tasks (Kang et al., 2024). However, their zero-shot performance on many downstream tasks often falls short of expectations. One possible solution is full fine-tuning (FFT) of the model on the downstream dataset. However, the large model parameter size makes this process prohibitively costly."
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "text",
|
| 93 |
+
"bbox": [
|
| 94 |
+
0.113,
|
| 95 |
+
0.888,
|
| 96 |
+
0.49,
|
| 97 |
+
0.92
|
| 98 |
+
],
|
| 99 |
+
"angle": 0,
|
| 100 |
+
"content": "To address this challenge, various parameter-efficient fine-tuning (PEFT) methods including low"
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"type": "image",
|
| 104 |
+
"bbox": [
|
| 105 |
+
0.512,
|
| 106 |
+
0.253,
|
| 107 |
+
0.885,
|
| 108 |
+
0.381
|
| 109 |
+
],
|
| 110 |
+
"angle": 0,
|
| 111 |
+
"content": null
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"type": "image_caption",
|
| 115 |
+
"bbox": [
|
| 116 |
+
0.509,
|
| 117 |
+
0.39,
|
| 118 |
+
0.886,
|
| 119 |
+
0.492
|
| 120 |
+
],
|
| 121 |
+
"angle": 0,
|
| 122 |
+
"content": "Figure 1: Schematic comparison of LoRA (Hu et al., 2021), ELoRA (Kopiczko et al., 2024), and AFLoRA and their associated advantages and disadvantages in terms of various metrics. \\( r_{L} \\) and \\( r_{V} \\), represent the rank of the low-rank path used in LoRA and ELoRA methods, respectively. FT and KU refer to fine-tuned weights and the Kaiming uniform initialization, respectively."
|
| 123 |
+
},
|
| 124 |
+
{
|
| 125 |
+
"type": "text",
|
| 126 |
+
"bbox": [
|
| 127 |
+
0.508,
|
| 128 |
+
0.518,
|
| 129 |
+
0.885,
|
| 130 |
+
0.919
|
| 131 |
+
],
|
| 132 |
+
"angle": 0,
|
| 133 |
+
"content": "rank adaptation (LoRA) (Hu et al., 2021), adapter tuning (He et al., 2021), and prompt tuning (Lester et al., 2021) are proposed. These methods add parameters to the trained model for fine-tuning, bypassing the need to adjust the weights of the pre-trained model. In particular, LoRA (Hu et al., 2021) and its variants (Zhang et al., 2023) add a trainable low-rank path consisting of down-projection and up-projection matrices to the model, inspired by (Aghajanyan et al., 2020) which showed that such low-rank paths can effectively approximate the trained weight tensors. ELoRA (Kopiczko et al., 2024) extends LoRA by adding trainable feature transformation vectors to the output of each project matrix. They showed that SoTA accuracy can be achieved with the projection matrices frozen after random initialization while keeping the two feature transformation vectors trainable. This approach significantly reduces the number of trainable parameters. However, compared to LoRA, ELoRA incurs higher computation costs due to the higher rank needed for the frozen projection matrices. Fig. 1 illustrates LoRA and ELoRA, contrasting them to our proposed method AFLoRA."
|
| 134 |
+
},
|
| 135 |
+
{
|
| 136 |
+
"type": "page_number",
|
| 137 |
+
"bbox": [
|
| 138 |
+
0.487,
|
| 139 |
+
0.929,
|
| 140 |
+
0.515,
|
| 141 |
+
0.941
|
| 142 |
+
],
|
| 143 |
+
"angle": 0,
|
| 144 |
+
"content": "161"
|
| 145 |
+
},
|
| 146 |
+
{
|
| 147 |
+
"type": "footer",
|
| 148 |
+
"bbox": [
|
| 149 |
+
0.097,
|
| 150 |
+
0.946,
|
| 151 |
+
0.901,
|
| 152 |
+
0.974
|
| 153 |
+
],
|
| 154 |
+
"angle": 0,
|
| 155 |
+
"content": "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 161-167 August 11-16, 2024 ©2024 Association for Computational Linguistics"
|
| 156 |
+
}
|
| 157 |
+
],
|
| 158 |
+
[
|
| 159 |
+
{
|
| 160 |
+
"type": "text",
|
| 161 |
+
"bbox": [
|
| 162 |
+
0.113,
|
| 163 |
+
0.103,
|
| 164 |
+
0.49,
|
| 165 |
+
0.231
|
| 166 |
+
],
|
| 167 |
+
"angle": 0,
|
| 168 |
+
"content": "Our contributions. To reduce the trainable parameter count and computation costs of fine-tuning, we present Adaptive Freezing of Low Rank Adaptation (AFLoRA). More specifically, we first investigate the rank needed for the frozen LoRA path in ELoRA and observe that reducing the rank of the frozen projection matrices (PM) causes a drop in fine-tuning performance."
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"type": "text",
|
| 172 |
+
"bbox": [
|
| 173 |
+
0.117,
|
| 174 |
+
0.232,
|
| 175 |
+
0.49,
|
| 176 |
+
0.538
|
| 177 |
+
],
|
| 178 |
+
"angle": 0,
|
| 179 |
+
"content": "Based on this insight, we present AFLoRA, which starts with a low-rank trainable path that includes projection matrices and feature transformation vectors and trains the path for some epochs. We then gradually freeze the projection matrices based on a novel freezing score that acts as a proxy for the trainability requirement of a LoRA tensor. In this way, we not only help alleviate the over-fitting issue but also, improve the computation efficiency. To evaluate the benefit of AFLoRA, we perform extensive evaluations on multiple NLP benchmark datasets and compare accuracy, FLOPs, and training time with several existing alternatives. Specifically, compared to ELoRA we yield \\(1.86 \\times\\) and \\(2.96 \\times\\) improvement in runtime and FLOPs, respectively, while remaining comparable as LoRA on these two metrics. Compared to LoRA we require \\(9.5 \\times\\) fewer average trainable parameters to yield similar or improved performance."
|
| 180 |
+
},
|
| 181 |
+
{
|
| 182 |
+
"type": "title",
|
| 183 |
+
"bbox": [
|
| 184 |
+
0.114,
|
| 185 |
+
0.553,
|
| 186 |
+
0.279,
|
| 187 |
+
0.569
|
| 188 |
+
],
|
| 189 |
+
"angle": 0,
|
| 190 |
+
"content": "2 Related Works"
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"type": "text",
|
| 194 |
+
"bbox": [
|
| 195 |
+
0.113,
|
| 196 |
+
0.581,
|
| 197 |
+
0.49,
|
| 198 |
+
0.919
|
| 199 |
+
],
|
| 200 |
+
"angle": 0,
|
| 201 |
+
"content": "PEFT (Hu et al., 2021; Kundu et al., 2024; Sridhar et al., 2023; Yin et al., 2024) refers to a collection of methodologies that focus on allowing a small number of parameters to fine-tune to yield good performance on a downstream task. For example, prefix-tuning (Li and Liang, 2021) adds trainable prefix tokens to a model's input or hidden layers while adapter-tuning (Houlsby et al., 2019) inserts small neural network layers, known as adapters, within each layer of a pre-trained model. LoRA (Hu et al., 2021), on the other hand, adds low-rank tensors in parallel to the frozen pre-trained weights. AdaLoRA (Zhang et al., 2023) allows the rank of the LoRA path to be chosen in an adaptive way. Other variants like SoRA (Ding et al., 2023) and LoSparse (Li et al., 2023) have investigated the impact of sparsity in and alongside the low-rank path, respectively. Recently, efficient low-rank adaptation (ELoRA) (Kopiczko et al., 2024) has proposed to keep the LoRA path frozen, while introducing two trainable feature transformation"
|
| 202 |
+
},
|
| 203 |
+
{
|
| 204 |
+
"type": "image",
|
| 205 |
+
"bbox": [
|
| 206 |
+
0.57,
|
| 207 |
+
0.082,
|
| 208 |
+
0.825,
|
| 209 |
+
0.198
|
| 210 |
+
],
|
| 211 |
+
"angle": 0,
|
| 212 |
+
"content": null
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"type": "image_caption",
|
| 216 |
+
"bbox": [
|
| 217 |
+
0.508,
|
| 218 |
+
0.207,
|
| 219 |
+
0.883,
|
| 220 |
+
0.237
|
| 221 |
+
],
|
| 222 |
+
"angle": 0,
|
| 223 |
+
"content": "Figure 2: Performance of ELoRA with two different ranks of the frozen projection matrices."
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"type": "text",
|
| 227 |
+
"bbox": [
|
| 228 |
+
0.508,
|
| 229 |
+
0.247,
|
| 230 |
+
0.885,
|
| 231 |
+
0.327
|
| 232 |
+
],
|
| 233 |
+
"angle": 0,
|
| 234 |
+
"content": "vectors. Thus, this work only studies an extreme scenario of keeping the LoRA path frozen, and, to the best of our knowledge, no work has investigated the trainability requirement of the projection matrices."
|
| 235 |
+
},
|
| 236 |
+
{
|
| 237 |
+
"type": "title",
|
| 238 |
+
"bbox": [
|
| 239 |
+
0.509,
|
| 240 |
+
0.354,
|
| 241 |
+
0.757,
|
| 242 |
+
0.371
|
| 243 |
+
],
|
| 244 |
+
"angle": 0,
|
| 245 |
+
"content": "3 Motivational Case Study"
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"type": "text",
|
| 249 |
+
"bbox": [
|
| 250 |
+
0.508,
|
| 251 |
+
0.38,
|
| 252 |
+
0.885,
|
| 253 |
+
0.508
|
| 254 |
+
],
|
| 255 |
+
"angle": 0,
|
| 256 |
+
"content": "To understand the high-rank requirement for the frozen projection matrices in ELoRA, we conduct two sets of fine-tuning on SST-2 and MRPC, with ELoRA having rank \\((r)\\) of 1024 and 4, respectively. As we can see in Fig. 2, the model with \\(r = 4\\), yields poorer performance, highlighting the need for high rank for the frozen tensors. This high rank causes ELoRA to potentially be FLOPs inefficient."
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"type": "title",
|
| 260 |
+
"bbox": [
|
| 261 |
+
0.509,
|
| 262 |
+
0.536,
|
| 263 |
+
0.75,
|
| 264 |
+
0.554
|
| 265 |
+
],
|
| 266 |
+
"angle": 0,
|
| 267 |
+
"content": "4 AFLoRA: Methodology"
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"type": "text",
|
| 271 |
+
"bbox": [
|
| 272 |
+
0.508,
|
| 273 |
+
0.561,
|
| 274 |
+
0.884,
|
| 275 |
+
0.77
|
| 276 |
+
],
|
| 277 |
+
"angle": 0,
|
| 278 |
+
"content": "Module Structure. Inspired by the framework proposed by Kopiczko et al. (2024), we design the LoRA module to encompass four components, namely, the down-projection linear layer \\((lora_{A})\\), the up-projection linear layer \\((lora_{B})\\), and two feature transform vectors \\((s_{d},\\) and \\(s_b)\\) placed before and after \\(lora_{B}\\). However, unlike (Kopiczko et al., 2024), we keep both the projection matrices \\((lora_{A}\\) and \\(lora_{B})\\) and vectors trainable at the beginning and keep the rank very low. The module processes a given input \\(X\\) through these components to produce an output \\(Y\\). The complete operation for a layer \\(l\\) can be described as follows:"
|
| 279 |
+
},
|
| 280 |
+
{
|
| 281 |
+
"type": "equation",
|
| 282 |
+
"bbox": [
|
| 283 |
+
0.592,
|
| 284 |
+
0.778,
|
| 285 |
+
0.883,
|
| 286 |
+
0.797
|
| 287 |
+
],
|
| 288 |
+
"angle": 0,
|
| 289 |
+
"content": "\\[\nY = W _ {0} ^ {l} X + \\Lambda_ {b} ^ {l} B ^ {l} \\Lambda_ {d} ^ {l} A ^ {l} X \\tag {1}\n\\]"
|
| 290 |
+
},
|
| 291 |
+
{
|
| 292 |
+
"type": "text",
|
| 293 |
+
"bbox": [
|
| 294 |
+
0.508,
|
| 295 |
+
0.806,
|
| 296 |
+
0.884,
|
| 297 |
+
0.904
|
| 298 |
+
],
|
| 299 |
+
"angle": 0,
|
| 300 |
+
"content": "Here, \\(A^l\\) and \\(B^l\\) are the trainable LoRA tensors of \\(lora_A^l\\) and \\(lora_B^l\\), respectively. \\(\\Lambda_d\\) and \\(\\Lambda_b\\) are the vectors of \\(s_d\\), and \\(s_b\\), respectively. \\(W_0^l\\) represents the frozen pre-trained weights. We use Kaiming Uniform initialization for \\(A^l\\) and \\(B^l\\), and follow (Kopiczko et al., 2024) to initialize the vectors."
|
| 301 |
+
},
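To make the layer operation in Eq. 1 concrete, here is a minimal PyTorch sketch of the module described above. It is illustrative only: the class name `AFLoRALinear`, the default rank `r=4`, and the zero-initialized placeholder for the frozen pre-trained weight `W0` are assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class AFLoRALinear(nn.Module):
    """Sketch of Eq. 1: Y = W0 X + Lambda_b (B (Lambda_d (A X)))."""
    def __init__(self, in_features, out_features, r=4):
        super().__init__()
        # Frozen pre-trained weight W0 (in practice copied from the base model).
        self.weight = nn.Parameter(torch.zeros(out_features, in_features), requires_grad=False)
        # Trainable low-rank projections A (down) and B (up), Kaiming-uniform initialized.
        self.lora_A = nn.Parameter(torch.empty(r, in_features))
        self.lora_B = nn.Parameter(torch.empty(out_features, r))
        nn.init.kaiming_uniform_(self.lora_A)
        nn.init.kaiming_uniform_(self.lora_B)
        # Trainable feature transformation vectors s_d (Lambda_d) and s_b (Lambda_b).
        self.s_d = nn.Parameter(torch.ones(r))
        self.s_b = nn.Parameter(torch.ones(out_features))

    def forward(self, x):
        frozen_out = x @ self.weight.t()
        lora_out = ((x @ self.lora_A.t()) * self.s_d) @ self.lora_B.t() * self.s_b
        return frozen_out + lora_out
```

In this sketch, `lora_A` and `lora_B` start trainable and would later be frozen by the adaptive schedule described next, while `s_d` and `s_b` stay trainable throughout.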
|
| 302 |
+
{
|
| 303 |
+
"type": "page_number",
|
| 304 |
+
"bbox": [
|
| 305 |
+
0.487,
|
| 306 |
+
0.929,
|
| 307 |
+
0.517,
|
| 308 |
+
0.941
|
| 309 |
+
],
|
| 310 |
+
"angle": 0,
|
| 311 |
+
"content": "162"
|
| 312 |
+
}
|
| 313 |
+
],
|
| 314 |
+
[
|
| 315 |
+
{
|
| 316 |
+
"type": "table_caption",
|
| 317 |
+
"bbox": [
|
| 318 |
+
0.182,
|
| 319 |
+
0.083,
|
| 320 |
+
0.816,
|
| 321 |
+
0.098
|
| 322 |
+
],
|
| 323 |
+
"angle": 0,
|
| 324 |
+
"content": "Table 1: Comparison of different LoRA variants with DeBERTaV3 on the GLUE benchmark."
|
| 325 |
+
},
|
| 326 |
+
{
|
| 327 |
+
"type": "table",
|
| 328 |
+
"bbox": [
|
| 329 |
+
0.189,
|
| 330 |
+
0.108,
|
| 331 |
+
0.812,
|
| 332 |
+
0.169
|
| 333 |
+
],
|
| 334 |
+
"angle": 0,
|
| 335 |
+
"content": "<table><tr><td>Method</td><td>#Params. ↓</td><td>CoLA ↑</td><td>SST-2 ↑</td><td>MRPC ↑</td><td>QNLI ↑</td><td>STS-B ↑</td><td>RTE ↑</td><td>MNLI ↑</td><td>QQP ↑</td><td>Avg. ↑</td></tr><tr><td>FFT</td><td>184M</td><td>69.21</td><td>95.64</td><td>89.22</td><td>93.78</td><td>91.59</td><td>82.49</td><td>89.98/89.95</td><td>92.05/89.31</td><td>87.82</td></tr><tr><td>LoRA (r = 8)</td><td>1.33M</td><td>69.73</td><td>95.57</td><td>89.71</td><td>93.76</td><td>91.86</td><td>85.32</td><td>90.47/90.46</td><td>91.95/89.26</td><td>88.38</td></tr><tr><td>AdaLoRA</td><td>1.27M</td><td>70.86</td><td>95.95</td><td>90.22</td><td>94.28</td><td>91.39</td><td>87.36</td><td>90.27/90.30</td><td>92.13/88.41</td><td>88.83</td></tr><tr><td>SoRA (r = 4)</td><td>0.47M</td><td>71.05</td><td>95.57</td><td>90.20</td><td>93.92</td><td>91.76</td><td>86.04</td><td>90.38/90.43</td><td>92.06/89.44</td><td>88.71</td></tr><tr><td>ELoRA*</td><td>0.16M</td><td>70.74</td><td>95.18</td><td>90.93</td><td>93.58</td><td>91.08</td><td>87.36</td><td>90.11/90.22</td><td>90.69/87.63</td><td>88.53</td></tr><tr><td>AFLoRA (r = 4)</td><td>0.14M**</td><td>72.01</td><td>96.22</td><td>91.91</td><td>94.42</td><td>91.84</td><td>88.09</td><td>89.88/90.17</td><td>90.81/87.77</td><td>89.23</td></tr></table>"
|
| 336 |
+
},
|
| 337 |
+
{
|
| 338 |
+
"type": "table_footnote",
|
| 339 |
+
"bbox": [
|
| 340 |
+
0.13,
|
| 341 |
+
0.171,
|
| 342 |
+
0.7,
|
| 343 |
+
0.179
|
| 344 |
+
],
|
| 345 |
+
"angle": 0,
|
| 346 |
+
"content": "* The original paper has results with the RoBERTa, we generated the results with our implementation on DeBERTaV3 with the rank of 1024."
|
| 347 |
+
},
|
| 348 |
+
{
|
| 349 |
+
"type": "table_footnote",
|
| 350 |
+
"bbox": [
|
| 351 |
+
0.13,
|
| 352 |
+
0.179,
|
| 353 |
+
0.734,
|
| 354 |
+
0.188
|
| 355 |
+
],
|
| 356 |
+
"angle": 0,
|
| 357 |
+
"content": "** As the number of trainable parameters is changed during training, we computed this by averaging over the whole training epochs over all datasets."
|
| 358 |
+
},
|
| 359 |
+
{
|
| 360 |
+
"type": "list",
|
| 361 |
+
"bbox": [
|
| 362 |
+
0.13,
|
| 363 |
+
0.171,
|
| 364 |
+
0.734,
|
| 365 |
+
0.188
|
| 366 |
+
],
|
| 367 |
+
"angle": 0,
|
| 368 |
+
"content": null
|
| 369 |
+
},
|
| 370 |
+
{
|
| 371 |
+
"type": "text",
|
| 372 |
+
"bbox": [
|
| 373 |
+
0.112,
|
| 374 |
+
0.206,
|
| 375 |
+
0.49,
|
| 376 |
+
0.576
|
| 377 |
+
],
|
| 378 |
+
"angle": 0,
|
| 379 |
+
"content": "Adaptive Freezing. In pruning literature (Han et al., 2015; Molchanov et al., 2019; Zhang et al., 2022; Yin et al., 2024; Kundu et al., 2021, 2022), sensitivity is gauged to reflect weight variability, necessitating consideration of both the weights' magnitudes and their gradients. Small weight values suggest minimal impact, while minor gradient values indicate stability. Taking inspiration from this idea, here we introduce the concept of a \"freezing score\". However, unlike pruning where both magnitude and gradient play a critical role in identifying insignificant weight, we leverage only gradient as a proxy to compute the freezing score. This is because, we assume large magnitude weights with negligible change has the same priority to be frozen as that for small magnitude weights. This score quantifies the degree to which weights vary throughout the training process. Consequently, when the expected changes to the weights become negligible, we may consider them to be frozen, thereby saving computational resources and energy. The following equation describes the freezing score evaluation steps for a low-rank tensor \\( A^l \\)."
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"type": "equation",
|
| 383 |
+
"bbox": [
|
| 384 |
+
0.147,
|
| 385 |
+
0.582,
|
| 386 |
+
0.488,
|
| 387 |
+
0.603
|
| 388 |
+
],
|
| 389 |
+
"angle": 0,
|
| 390 |
+
"content": "\\[\nI _ {A ^ {l}} = \\left| \\nabla \\mathcal {L} (\\boldsymbol {\\theta}) \\right|, \\bar {I} _ {A ^ {l}} ^ {(t)} = \\beta_ {1} \\bar {I} _ {A ^ {l}} ^ {(t - 1)} + (1 - \\beta_ {1}) I _ {A ^ {l}} ^ {(t)} \\tag {2}\n\\]"
|
| 391 |
+
},
|
| 392 |
+
{
|
| 393 |
+
"type": "equation",
|
| 394 |
+
"bbox": [
|
| 395 |
+
0.123,
|
| 396 |
+
0.611,
|
| 397 |
+
0.489,
|
| 398 |
+
0.633
|
| 399 |
+
],
|
| 400 |
+
"angle": 0,
|
| 401 |
+
"content": "\\[\nU _ {A ^ {l}} ^ {(t)} = \\left| I _ {A ^ {l}} ^ {(t)} - \\bar {I} _ {A ^ {l}} ^ {(t)} \\right|, \\bar {U} _ {A ^ {l}} ^ {(t)} = \\beta_ {2} \\bar {U} _ {A ^ {l}} ^ {(t - 1)} + (1 - \\beta_ {2}) U _ {A ^ {l}} ^ {(t)} \\tag {3}\n\\]"
|
| 402 |
+
},
|
| 403 |
+
{
|
| 404 |
+
"type": "equation",
|
| 405 |
+
"bbox": [
|
| 406 |
+
0.219,
|
| 407 |
+
0.641,
|
| 408 |
+
0.488,
|
| 409 |
+
0.661
|
| 410 |
+
],
|
| 411 |
+
"angle": 0,
|
| 412 |
+
"content": "\\[\ns _ {A ^ {l}} ^ {(t)} = \\operatorname {m e a n} \\left(\\bar {I} _ {A ^ {l}} ^ {(t)} \\circ \\bar {U} _ {A ^ {l}} ^ {(t)}\\right) \\tag {4}\n\\]"
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"type": "text",
|
| 416 |
+
"bbox": [
|
| 417 |
+
0.113,
|
| 418 |
+
0.666,
|
| 419 |
+
0.489,
|
| 420 |
+
0.776
|
| 421 |
+
],
|
| 422 |
+
"angle": 0,
|
| 423 |
+
"content": "Here, for each projection tensor at iteration \\(t\\), we compute a smoothed gradient \\((\\overline{I}_{A^l}^{(t)})\\) and uncertainly tensor \\((\\overline{U}_{A^l}^{(t)})\\), as shown in Eq. 2 and 3, respectively. We then evaluate the freezing score \\(s_{A^l}^{(t)}\\), as the mean of the tensor generated via Hadamard product (○) between \\(\\overline{I}_{A^l}^{(t)}\\) and \\(\\overline{U}_{A^l}^{(t)}\\)."
|
| 424 |
+
},
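A minimal sketch of how Eqs. 2-4 could be tracked per projection tensor is shown below. The function name, the `state` dictionary, and the default smoothing factors (taken from the beta values reported in the appendix) are assumptions for illustration, not the authors' released code.

```python
import torch

def update_freezing_score(param, state, beta1=0.85, beta2=0.95):
    """Track Eqs. 2-4 for one low-rank tensor A^l and return its freezing score s^(t)."""
    grad_mag = param.grad.detach().abs()                          # I = |grad L(theta)|      (Eq. 2)
    i_bar = beta1 * state.get("i_bar", torch.zeros_like(grad_mag)) + (1 - beta1) * grad_mag
    uncertainty = (grad_mag - i_bar).abs()                        # U = |I - I_bar|          (Eq. 3)
    u_bar = beta2 * state.get("u_bar", torch.zeros_like(uncertainty)) + (1 - beta2) * uncertainty
    state["i_bar"], state["u_bar"] = i_bar, u_bar                 # persist the running averages
    return (i_bar * u_bar).mean().item()                          # s = mean(I_bar o U_bar)  (Eq. 4)
```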
|
| 425 |
+
{
|
| 426 |
+
"type": "text",
|
| 427 |
+
"bbox": [
|
| 428 |
+
0.112,
|
| 429 |
+
0.776,
|
| 430 |
+
0.49,
|
| 431 |
+
0.92
|
| 432 |
+
],
|
| 433 |
+
"angle": 0,
|
| 434 |
+
"content": "To apply thresholding on the LoRA freezing scores, we use the cubic schedule as (Zhang et al., 2022). In specific, we keep the projection matrices trainable for the initial \\( t_i \\) training steps, and then progressively freeze them by calculating the freezing fraction \\( r(t) \\) as shown in Eq. 5. Finally, all the projection matrices freeze beyond \\( T - t_f \\) steps. Note, at step \\( t \\), for a computed freezing fraction \\( k \\), we freeze the lowest \\( k\\% \\) projection matrices."
|
| 435 |
+
},
|
| 436 |
+
{
|
| 437 |
+
"type": "image",
|
| 438 |
+
"bbox": [
|
| 439 |
+
0.506,
|
| 440 |
+
0.202,
|
| 441 |
+
0.631,
|
| 442 |
+
0.285
|
| 443 |
+
],
|
| 444 |
+
"angle": 0,
|
| 445 |
+
"content": null
|
| 446 |
+
},
|
| 447 |
+
{
|
| 448 |
+
"type": "image",
|
| 449 |
+
"bbox": [
|
| 450 |
+
0.635,
|
| 451 |
+
0.202,
|
| 452 |
+
0.759,
|
| 453 |
+
0.284
|
| 454 |
+
],
|
| 455 |
+
"angle": 0,
|
| 456 |
+
"content": null
|
| 457 |
+
},
|
| 458 |
+
{
|
| 459 |
+
"type": "image",
|
| 460 |
+
"bbox": [
|
| 461 |
+
0.764,
|
| 462 |
+
0.203,
|
| 463 |
+
0.886,
|
| 464 |
+
0.284
|
| 465 |
+
],
|
| 466 |
+
"angle": 0,
|
| 467 |
+
"content": null
|
| 468 |
+
},
|
| 469 |
+
{
|
| 470 |
+
"type": "image_caption",
|
| 471 |
+
"bbox": [
|
| 472 |
+
0.509,
|
| 473 |
+
0.295,
|
| 474 |
+
0.882,
|
| 475 |
+
0.324
|
| 476 |
+
],
|
| 477 |
+
"angle": 0,
|
| 478 |
+
"content": "Figure 3: A comparison of various system performances between LoRA, ELoRA, and AFLoRA."
|
| 479 |
+
},
|
| 480 |
+
{
|
| 481 |
+
"type": "equation",
|
| 482 |
+
"bbox": [
|
| 483 |
+
0.52,
|
| 484 |
+
0.352,
|
| 485 |
+
0.883,
|
| 486 |
+
0.412
|
| 487 |
+
],
|
| 488 |
+
"angle": 0,
|
| 489 |
+
"content": "\\[\nr (t) = \\left\\{ \\begin{array}{l l} 0 & 0 \\leq t < t _ {i} \\\\ 1 - \\left(1 - \\frac {t - t _ {i}}{T - t _ {i} - t _ {f}}\\right) ^ {3} & t _ {i} \\leq t < T - t _ {f} \\\\ 1 & \\text {o t h e r w i s e} \\end{array} \\right. \\tag {5}\n\\]"
|
| 490 |
+
},
|
| 491 |
+
{
|
| 492 |
+
"type": "text",
|
| 493 |
+
"bbox": [
|
| 494 |
+
0.508,
|
| 495 |
+
0.427,
|
| 496 |
+
0.884,
|
| 497 |
+
0.492
|
| 498 |
+
],
|
| 499 |
+
"angle": 0,
|
| 500 |
+
"content": "where \\( t \\) refers to current #step, \\( T \\) is the total number of fine-tuning steps. We set \\( t_i \\) to the steps corresponding to one epoch and set \\( t_f \\) to 70% of the total training steps."
|
| 501 |
+
},
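The cubic schedule of Eq. 5 and the freezing of the lowest-scoring fraction of projection matrices can be sketched as follows; `freezing_fraction` and `freeze_lowest_fraction` are illustrative helper names under the assumptions above, not part of the released code.

```python
def freezing_fraction(t, T, t_i, t_f):
    """Eq. 5: fraction r(t) of projection matrices to freeze at training step t."""
    if t < t_i:
        return 0.0
    if t >= T - t_f:
        return 1.0
    return 1.0 - (1.0 - (t - t_i) / (T - t_i - t_f)) ** 3

def freeze_lowest_fraction(scored_params, fraction):
    """scored_params: list of (freezing_score, parameter); freeze the lowest-scoring fraction."""
    k = int(len(scored_params) * fraction)
    for score, param in sorted(scored_params, key=lambda sp: sp[0])[:k]:
        param.requires_grad_(False)  # frozen matrices stop receiving gradient updates
```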
|
| 502 |
+
{
|
| 503 |
+
"type": "title",
|
| 504 |
+
"bbox": [
|
| 505 |
+
0.509,
|
| 506 |
+
0.505,
|
| 507 |
+
0.656,
|
| 508 |
+
0.523
|
| 509 |
+
],
|
| 510 |
+
"angle": 0,
|
| 511 |
+
"content": "5 Experiments"
|
| 512 |
+
},
|
| 513 |
+
{
|
| 514 |
+
"type": "text",
|
| 515 |
+
"bbox": [
|
| 516 |
+
0.508,
|
| 517 |
+
0.532,
|
| 518 |
+
0.884,
|
| 519 |
+
0.661
|
| 520 |
+
],
|
| 521 |
+
"angle": 0,
|
| 522 |
+
"content": "Models & Datasets. We use the PEFT framework of (Mangrulkar et al., 2022) and evaluate the fine-tuning performance of DeBERTaV3-base (He et al., 2020) to fine-tune on our framework on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018). The details of the hyperparameter settings for each dataset are listed in Appendix A.2."
|
| 523 |
+
},
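As a rough illustration of such a setup (not the authors' exact configuration), a plain LoRA baseline on DeBERTaV3-base can be built with the Hugging Face PEFT library as sketched below; the target module names and hyperparameters are assumptions, and AFLoRA's adaptive freezing would be layered on top of a setup like this.

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

# LoRA baseline on DeBERTaV3-base; AFLoRA additionally freezes the projection
# matrices adaptively during fine-tuning (see Section 4).
base_model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base", num_labels=2
)
lora_config = LoraConfig(
    r=4,                                          # low rank, matching AFLoRA's setting
    lora_alpha=8,                                 # assumed scaling factor
    target_modules=["query_proj", "value_proj"],  # assumed attention projections
    task_type="SEQ_CLS",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```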
|
| 524 |
+
{
|
| 525 |
+
"type": "text",
|
| 526 |
+
"bbox": [
|
| 527 |
+
0.508,
|
| 528 |
+
0.678,
|
| 529 |
+
0.884,
|
| 530 |
+
0.84
|
| 531 |
+
],
|
| 532 |
+
"angle": 0,
|
| 533 |
+
"content": "Performance Comparison. We benchmark the performance with AFLoRA and present a comparison with LoRA and its variants. For ELoRA, we reproduce the results at our end while the results for other methods are sourced from (Ding et al., 2023). As shown in Table 1, AFLoRA can achieve SoTA performance on the majority of datasets and on average while requiring similar and \\(9.5 \\times\\) fewer average trainable parameters as compared to ELoRA and LoRA, respectively."
|
| 534 |
+
},
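As a back-of-the-envelope check, the quoted parameter ratio follows directly from the average trainable parameter counts reported in Table 1:

$$
\frac{1.33\,\text{M (LoRA, } r=8\text{)}}{0.14\,\text{M (AFLoRA, } r=4\text{)}} \approx 9.5\times
$$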
|
| 535 |
+
{
|
| 536 |
+
"type": "text",
|
| 537 |
+
"bbox": [
|
| 538 |
+
0.508,
|
| 539 |
+
0.855,
|
| 540 |
+
0.884,
|
| 541 |
+
0.92
|
| 542 |
+
],
|
| 543 |
+
"angle": 0,
|
| 544 |
+
"content": "Runtime & FLOPs Comparison. Fig. 3 shows the comparison of the normalized average training runtime, normalized FLOPs, and normalized trainable parameters. For AFLoRA, we average"
|
| 545 |
+
},
|
| 546 |
+
{
|
| 547 |
+
"type": "page_number",
|
| 548 |
+
"bbox": [
|
| 549 |
+
0.487,
|
| 550 |
+
0.929,
|
| 551 |
+
0.516,
|
| 552 |
+
0.941
|
| 553 |
+
],
|
| 554 |
+
"angle": 0,
|
| 555 |
+
"content": "163"
|
| 556 |
+
}
|
| 557 |
+
],
|
| 558 |
+
[
|
| 559 |
+
{
|
| 560 |
+
"type": "table_caption",
|
| 561 |
+
"bbox": [
|
| 562 |
+
0.114,
|
| 563 |
+
0.083,
|
| 564 |
+
0.486,
|
| 565 |
+
0.111
|
| 566 |
+
],
|
| 567 |
+
"angle": 0,
|
| 568 |
+
"content": "Table 2: Results on auto-regressive complex reasoning task using LLM."
|
| 569 |
+
},
|
| 570 |
+
{
|
| 571 |
+
"type": "table",
|
| 572 |
+
"bbox": [
|
| 573 |
+
0.132,
|
| 574 |
+
0.122,
|
| 575 |
+
0.468,
|
| 576 |
+
0.151
|
| 577 |
+
],
|
| 578 |
+
"angle": 0,
|
| 579 |
+
"content": "<table><tr><td>Method</td><td>Model</td><td>Low-rank val.</td><td># Params.</td><td>GSM8k Acc (%)</td></tr><tr><td>LoRA</td><td>LLaMA-7B</td><td>32</td><td>56.1M</td><td>37.50</td></tr><tr><td>ALFoRA (Ours)</td><td>LLaMA-7B</td><td>32</td><td>17.8M</td><td>38.59</td></tr></table>"
|
| 580 |
+
},
|
| 581 |
+
{
|
| 582 |
+
"type": "table_caption",
|
| 583 |
+
"bbox": [
|
| 584 |
+
0.114,
|
| 585 |
+
0.159,
|
| 586 |
+
0.489,
|
| 587 |
+
0.202
|
| 588 |
+
],
|
| 589 |
+
"angle": 0,
|
| 590 |
+
"content": "Table 3: Results on summarizing task using LLM. We use rouge 1 (R1) and rouge 2 (R2) scores to measure the summarization quality."
|
| 591 |
+
},
|
| 592 |
+
{
|
| 593 |
+
"type": "table",
|
| 594 |
+
"bbox": [
|
| 595 |
+
0.116,
|
| 596 |
+
0.212,
|
| 597 |
+
0.489,
|
| 598 |
+
0.242
|
| 599 |
+
],
|
| 600 |
+
"angle": 0,
|
| 601 |
+
"content": "<table><tr><td>Method</td><td>Model</td><td>Low-rank val.</td><td># Params.</td><td>CNN/DailyMail (R1/R2)</td></tr><tr><td>LoRA</td><td>BART-Large</td><td>16</td><td>8.65M</td><td>43.96/21.06</td></tr><tr><td>ALFloRA (Ours)</td><td>BART-Large</td><td>16</td><td>5.10M</td><td>44.31/21.32</td></tr></table>"
|
| 602 |
+
},
|
| 603 |
+
{
|
| 604 |
+
"type": "text",
|
| 605 |
+
"bbox": [
|
| 606 |
+
0.113,
|
| 607 |
+
0.267,
|
| 608 |
+
0.489,
|
| 609 |
+
0.507
|
| 610 |
+
],
|
| 611 |
+
"angle": 0,
|
| 612 |
+
"content": "the training time, FLOPs, and trainable parameters over six GLUE datasets (except the MNLI and QQP datasets). Note, that for LoRA and ELoRA, the trainable parameters and FLOPs remain fixed for all the datasets. We compute their average runtime the same way as ours. Compared to ELoRA we can yield up to \\(1.86 \\times\\) and \\(2.96 \\times\\) runtime and FLOPs improvement while remaining comparable with LoRA in these two metrics. Compared to LoRA we yield \\(9.5 \\times\\) parameter reduction while remaining comparable with ELoRA. These results clearly demonstrate AFLoRA as a PEFT method that can yield similar parameter efficiency as ELoRA while costing no training overhead in FLOPs or time."
|
| 613 |
+
},
|
| 614 |
+
{
|
| 615 |
+
"type": "text",
|
| 616 |
+
"bbox": [
|
| 617 |
+
0.113,
|
| 618 |
+
0.525,
|
| 619 |
+
0.49,
|
| 620 |
+
0.734
|
| 621 |
+
],
|
| 622 |
+
"angle": 0,
|
| 623 |
+
"content": "Results with Large Language Models (LLMs). We now demonstrate the AFLoRA fine-tuning performance with two popular LLM variants, namely, LLaMA-7B (Touvron et al., 2023) and BARTLarge (Lewis et al., 2019) on GSM8k complex reasoning and CNN/Daily mail summarizing task, respectively. As demonstrated in Table 2, on GSM8k, AFLoRA yields improved accuracy of \\(1.09\\%\\) while requiring \\(3.15 \\times\\) fewer trainable parameters as compared to that with LoRA. On the CNN/DailyMail Summarizing task (Table 3), AFLoRA requires \\(1.69 \\times\\) fewer trainable parameters to reach similar or improved rouge score values."
|
| 624 |
+
},
|
| 625 |
+
{
|
| 626 |
+
"type": "title",
|
| 627 |
+
"bbox": [
|
| 628 |
+
0.114,
|
| 629 |
+
0.748,
|
| 630 |
+
0.373,
|
| 631 |
+
0.763
|
| 632 |
+
],
|
| 633 |
+
"angle": 0,
|
| 634 |
+
"content": "6 Ablations and Discussions"
|
| 635 |
+
},
|
| 636 |
+
{
|
| 637 |
+
"type": "text",
|
| 638 |
+
"bbox": [
|
| 639 |
+
0.113,
|
| 640 |
+
0.775,
|
| 641 |
+
0.488,
|
| 642 |
+
0.823
|
| 643 |
+
],
|
| 644 |
+
"angle": 0,
|
| 645 |
+
"content": "We conducted our ablation studies on six GLUE benchmark datasets, omitting QQP and MNLI, the two most computationally demanding datasets."
|
| 646 |
+
},
|
| 647 |
+
{
|
| 648 |
+
"type": "text",
|
| 649 |
+
"bbox": [
|
| 650 |
+
0.113,
|
| 651 |
+
0.84,
|
| 652 |
+
0.49,
|
| 653 |
+
0.919
|
| 654 |
+
],
|
| 655 |
+
"angle": 0,
|
| 656 |
+
"content": "Do we really need adaptive freezing? We conducted experiments with all the LoRA PMs frozen (same as ELoRA), all the LoRA PMs trainable, and with our adaptive training of LoRA PMs. We use, \\( r = 4 \\) for the LoRA path,"
|
| 657 |
+
},
|
| 658 |
+
{
|
| 659 |
+
"type": "table_caption",
|
| 660 |
+
"bbox": [
|
| 661 |
+
0.509,
|
| 662 |
+
0.078,
|
| 663 |
+
0.885,
|
| 664 |
+
0.121
|
| 665 |
+
],
|
| 666 |
+
"angle": 0,
|
| 667 |
+
"content": "Table 4: Ablation study on the trainability impact of the projection matrices (PM) of the AFLoRA module. We keep the vectors trainable throughout for all."
|
| 668 |
+
},
|
| 669 |
+
{
|
| 670 |
+
"type": "table",
|
| 671 |
+
"bbox": [
|
| 672 |
+
0.513,
|
| 673 |
+
0.131,
|
| 674 |
+
0.886,
|
| 675 |
+
0.169
|
| 676 |
+
],
|
| 677 |
+
"angle": 0,
|
| 678 |
+
"content": "<table><tr><td>PM</td><td>#Params.</td><td>CoLA</td><td>SST-2</td><td>MRPC</td><td>QNLI</td><td>STS-B</td><td>RTE</td><td>Avg.</td></tr><tr><td>Trainable</td><td>0.45M</td><td>70.15</td><td>95.99</td><td>92.4</td><td>94.16</td><td>89.90</td><td>88.45</td><td>88.51</td></tr><tr><td>Frozen</td><td>0.08M</td><td>70.36</td><td>94.95</td><td>89.22</td><td>93.61</td><td>91.17</td><td>85.92</td><td>87.54</td></tr><tr><td>AFLoRA (Ours)</td><td>0.14M</td><td>72.01</td><td>96.22</td><td>91.91</td><td>94.42</td><td>91.84</td><td>88.09</td><td>89.23</td></tr></table>"
|
| 679 |
+
},
|
| 680 |
+
{
|
| 681 |
+
"type": "image",
|
| 682 |
+
"bbox": [
|
| 683 |
+
0.554,
|
| 684 |
+
0.18,
|
| 685 |
+
0.843,
|
| 686 |
+
0.29
|
| 687 |
+
],
|
| 688 |
+
"angle": 0,
|
| 689 |
+
"content": null
|
| 690 |
+
},
|
| 691 |
+
{
|
| 692 |
+
"type": "image_caption",
|
| 693 |
+
"bbox": [
|
| 694 |
+
0.509,
|
| 695 |
+
0.301,
|
| 696 |
+
0.884,
|
| 697 |
+
0.33
|
| 698 |
+
],
|
| 699 |
+
"angle": 0,
|
| 700 |
+
"content": "Figure 4: A comparison of performance outcomes utilizing three distinct freezing score methodologies."
|
| 701 |
+
},
|
| 702 |
+
{
|
| 703 |
+
"type": "table_caption",
|
| 704 |
+
"bbox": [
|
| 705 |
+
0.509,
|
| 706 |
+
0.336,
|
| 707 |
+
0.882,
|
| 708 |
+
0.365
|
| 709 |
+
],
|
| 710 |
+
"angle": 0,
|
| 711 |
+
"content": "Table 5: Ablation study on making the PMs for different layer-types trainable."
|
| 712 |
+
},
|
| 713 |
+
{
|
| 714 |
+
"type": "table",
|
| 715 |
+
"bbox": [
|
| 716 |
+
0.512,
|
| 717 |
+
0.381,
|
| 718 |
+
0.88,
|
| 719 |
+
0.445
|
| 720 |
+
],
|
| 721 |
+
"angle": 0,
|
| 722 |
+
"content": "<table><tr><td>FFN</td><td>Attn</td><td>CoLA</td><td>SST-2</td><td>MRPC</td><td>QNLI</td><td>STS-B</td><td>RTE</td><td>Avg.</td></tr><tr><td>✓</td><td>✓</td><td>70.33</td><td>95.76</td><td>90.93</td><td>94.36</td><td>91.44</td><td>87.37</td><td>88.48</td></tr><tr><td></td><td></td><td>0.15M</td><td>0.19M</td><td>0.18M</td><td>0.19M</td><td>0.16M</td><td>0.17M</td><td>0.17M</td></tr><tr><td>X</td><td>✓</td><td>71.118</td><td>95.986</td><td>89.951</td><td>94.12</td><td>91.39</td><td>86.28</td><td>88.14</td></tr><tr><td></td><td></td><td>0.11M</td><td>0.13M</td><td>0.12M</td><td>0.13M</td><td>0.12M</td><td>0.12M</td><td>0.12M</td></tr><tr><td>✓</td><td>X</td><td>72.01</td><td>96.22</td><td>91.91</td><td>94.42</td><td>91.84</td><td>88.09</td><td>89.02</td></tr><tr><td></td><td></td><td>0.13M</td><td>0.18M</td><td>0.13M</td><td>0.13M</td><td>0.13M</td><td>0.13M</td><td>0.14M</td></tr></table>"
|
| 723 |
+
},
|
| 724 |
+
{
|
| 725 |
+
"type": "text",
|
| 726 |
+
"bbox": [
|
| 727 |
+
0.508,
|
| 728 |
+
0.469,
|
| 729 |
+
0.884,
|
| 730 |
+
0.598
|
| 731 |
+
],
|
| 732 |
+
"angle": 0,
|
| 733 |
+
"content": "for all. As we can see in Table 4, keeping the projection matrices trainable yields better average performance compared to keeping them frozen throughout. However, AFLoRA with adaptive freezing yields even better performance than keeping them trainable throughout, potentially highlighting its ability to regularize the fine-tuning against overfitting."
|
| 734 |
+
},
|
| 735 |
+
{
|
| 736 |
+
"type": "text",
|
| 737 |
+
"bbox": [
|
| 738 |
+
0.508,
|
| 739 |
+
0.614,
|
| 740 |
+
0.884,
|
| 741 |
+
0.871
|
| 742 |
+
],
|
| 743 |
+
"angle": 0,
|
| 744 |
+
"content": "Do we need to keep the PMs trainable for all layer types? There are two major layer types, FFN and the attention layers. We place the PMs in both along with the feature transformation vectors. We then study the necessity of keeping the PMs trainable in these two layer types. Note, here, we keep the vectors trainable for all throughout. As shown in Table 5, keeping the PMs trainable (and then adaptive freezing) in the FFN yields better performance compared to the alternatives. Note we keep the PMs in the attention layers frozen to random values. Interestingly, allowing all PMs to initially train and then adaptively freeze yields poorer performance than allowing them only in MLP. This may hint at the FFN weights to play a more important role in fine-tuning performance."
|
| 745 |
+
},
|
| 746 |
+
{
|
| 747 |
+
"type": "text",
|
| 748 |
+
"bbox": [
|
| 749 |
+
0.509,
|
| 750 |
+
0.888,
|
| 751 |
+
0.882,
|
| 752 |
+
0.919
|
| 753 |
+
],
|
| 754 |
+
"angle": 0,
|
| 755 |
+
"content": "Ablation with sensitivity choices. Fig. 4 presents ablation with three sensitivity scores based"
|
| 756 |
+
},
|
| 757 |
+
{
|
| 758 |
+
"type": "page_number",
|
| 759 |
+
"bbox": [
|
| 760 |
+
0.487,
|
| 761 |
+
0.929,
|
| 762 |
+
0.516,
|
| 763 |
+
0.941
|
| 764 |
+
],
|
| 765 |
+
"angle": 0,
|
| 766 |
+
"content": "164"
|
| 767 |
+
}
|
| 768 |
+
],
|
| 769 |
+
[
|
| 770 |
+
{
|
| 771 |
+
"type": "text",
|
| 772 |
+
"bbox": [
|
| 773 |
+
0.114,
|
| 774 |
+
0.085,
|
| 775 |
+
0.49,
|
| 776 |
+
0.166
|
| 777 |
+
],
|
| 778 |
+
"angle": 0,
|
| 779 |
+
"content": "on three different sensitivity choices, namely, \\( |grad(p)| \\) (adopted in AFLoRA), \\( |p * grad(p)| \\) and \\( |grad(p)/p| \\). On average, the freezing score adopted in AFLoRA, consistently yields better accuracy over the other two."
|
| 780 |
+
},
|
| 781 |
+
{
|
| 782 |
+
"type": "text",
|
| 783 |
+
"bbox": [
|
| 784 |
+
0.113,
|
| 785 |
+
0.182,
|
| 786 |
+
0.49,
|
| 787 |
+
0.39
|
| 788 |
+
],
|
| 789 |
+
"angle": 0,
|
| 790 |
+
"content": "Discussion on Freezing Trend. We use the RTE dataset as a case study, to understand the freezing trend of the PMs across different layers. Specifically, we illustrate the specific number of iterations required before freezing each component in Fig. 5. Interestingly, as can be seen from the figure, analysis reveals that the down-projection matrix parallel to the intermediate linear layer requires longer training duration prior to being frozen, as compared to the other PMs. This may potentially hint at the low approximation ability of the intermediate layer as compared to the second MLP in the FFN."
|
| 791 |
+
},
|
| 792 |
+
{
|
| 793 |
+
"type": "title",
|
| 794 |
+
"bbox": [
|
| 795 |
+
0.114,
|
| 796 |
+
0.405,
|
| 797 |
+
0.255,
|
| 798 |
+
0.421
|
| 799 |
+
],
|
| 800 |
+
"angle": 0,
|
| 801 |
+
"content": "7 Conclusions"
|
| 802 |
+
},
|
| 803 |
+
{
|
| 804 |
+
"type": "text",
|
| 805 |
+
"bbox": [
|
| 806 |
+
0.113,
|
| 807 |
+
0.433,
|
| 808 |
+
0.49,
|
| 809 |
+
0.562
|
| 810 |
+
],
|
| 811 |
+
"angle": 0,
|
| 812 |
+
"content": "In this paper, we presented AFLoRA, adaptive freezing of LoRA adapters that allow near-optimal trainability of the LoRA projection matrices and freezes them driven by a \"freezing score\" after certain fine-tuning steps. Compared to LoRA, AFLoRA can reduce the trainable parameters by up to \\(9.5\\times\\) while yielding \\(0.85\\%\\) average improved performance as evaluated on the GLUE benchmark."
|
| 813 |
+
},
|
| 814 |
+
{
|
| 815 |
+
"type": "title",
|
| 816 |
+
"bbox": [
|
| 817 |
+
0.114,
|
| 818 |
+
0.575,
|
| 819 |
+
0.242,
|
| 820 |
+
0.59
|
| 821 |
+
],
|
| 822 |
+
"angle": 0,
|
| 823 |
+
"content": "8 Limitation"
|
| 824 |
+
},
|
| 825 |
+
{
|
| 826 |
+
"type": "text",
|
| 827 |
+
"bbox": [
|
| 828 |
+
0.113,
|
| 829 |
+
0.602,
|
| 830 |
+
0.49,
|
| 831 |
+
0.746
|
| 832 |
+
],
|
| 833 |
+
"angle": 0,
|
| 834 |
+
"content": "In the ablation study with various freezing score metrics, we discovered that alternative scoring methods outperform ours on certain datasets, suggesting possible room for research in refining the freezing scores. This can further improve performance with AFLoRA. Additionally, the integration of AFLoRA in the adaptive rank evaluation framework can potentially open a new direction for PEFT that we consider as future research."
|
| 835 |
+
},
|
| 836 |
+
{
|
| 837 |
+
"type": "title",
|
| 838 |
+
"bbox": [
|
| 839 |
+
0.115,
|
| 840 |
+
0.776,
|
| 841 |
+
0.214,
|
| 842 |
+
0.791
|
| 843 |
+
],
|
| 844 |
+
"angle": 0,
|
| 845 |
+
"content": "References"
|
| 846 |
+
},
|
| 847 |
+
{
|
| 848 |
+
"type": "text",
|
| 849 |
+
"bbox": [
|
| 850 |
+
0.113,
|
| 851 |
+
0.8,
|
| 852 |
+
0.489,
|
| 853 |
+
0.854
|
| 854 |
+
],
|
| 855 |
+
"angle": 0,
|
| 856 |
+
"content": "Armen Aghajanyan, Luke Zettlemoyer, and Sonal Gupta. 2020. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. arXiv preprint arXiv:2012.13255."
|
| 857 |
+
},
|
| 858 |
+
{
|
| 859 |
+
"type": "text",
|
| 860 |
+
"bbox": [
|
| 861 |
+
0.113,
|
| 862 |
+
0.866,
|
| 863 |
+
0.489,
|
| 864 |
+
0.919
|
| 865 |
+
],
|
| 866 |
+
"angle": 0,
|
| 867 |
+
"content": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot"
|
| 868 |
+
},
|
| 869 |
+
{
|
| 870 |
+
"type": "image",
|
| 871 |
+
"bbox": [
|
| 872 |
+
0.543,
|
| 873 |
+
0.081,
|
| 874 |
+
0.855,
|
| 875 |
+
0.17
|
| 876 |
+
],
|
| 877 |
+
"angle": 0,
|
| 878 |
+
"content": null
|
| 879 |
+
},
|
| 880 |
+
{
|
| 881 |
+
"type": "image_caption",
|
| 882 |
+
"bbox": [
|
| 883 |
+
0.508,
|
| 884 |
+
0.179,
|
| 885 |
+
0.885,
|
| 886 |
+
0.266
|
| 887 |
+
],
|
| 888 |
+
"angle": 0,
|
| 889 |
+
"content": "Figure 5: Visualization of freezing iterations for each layer. 'out' and 'inter' refer to the second and the first MLP layer of the FFN, respectively. 'A' and 'B' represent the down-projection and up-projection matrix, respectively. The darker the color, the more iterations the matrix has to go through before freezing."
|
| 890 |
+
},
|
| 891 |
+
{
|
| 892 |
+
"type": "text",
|
| 893 |
+
"bbox": [
|
| 894 |
+
0.508,
|
| 895 |
+
0.28,
|
| 896 |
+
0.883,
|
| 897 |
+
0.307
|
| 898 |
+
],
|
| 899 |
+
"angle": 0,
|
| 900 |
+
"content": "learners. Advances in neural information processing systems, 33:1877-1901."
|
| 901 |
+
},
|
| 902 |
+
{
|
| 903 |
+
"type": "text",
|
| 904 |
+
"bbox": [
|
| 905 |
+
0.508,
|
| 906 |
+
0.316,
|
| 907 |
+
0.884,
|
| 908 |
+
0.37
|
| 909 |
+
],
|
| 910 |
+
"angle": 0,
|
| 911 |
+
"content": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805."
|
| 912 |
+
},
|
| 913 |
+
{
|
| 914 |
+
"type": "text",
|
| 915 |
+
"bbox": [
|
| 916 |
+
0.508,
|
| 917 |
+
0.378,
|
| 918 |
+
0.884,
|
| 919 |
+
0.432
|
| 920 |
+
],
|
| 921 |
+
"angle": 0,
|
| 922 |
+
"content": "Ning Ding, Xingtai Lv, Qiaosen Wang, Yulin Chen, Bowen Zhou, Zhiyuan Liu, and Maosong Sun. 2023. Sparse low-rank adaptation of pre-trained language models. arXiv preprint arXiv:2311.11696."
|
| 923 |
+
},
|
| 924 |
+
{
|
| 925 |
+
"type": "text",
|
| 926 |
+
"bbox": [
|
| 927 |
+
0.508,
|
| 928 |
+
0.44,
|
| 929 |
+
0.884,
|
| 930 |
+
0.495
|
| 931 |
+
],
|
| 932 |
+
"angle": 0,
|
| 933 |
+
"content": "Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for efficient neural network. Advances in neural information processing systems, 28."
|
| 934 |
+
},
|
| 935 |
+
{
|
| 936 |
+
"type": "text",
|
| 937 |
+
"bbox": [
|
| 938 |
+
0.508,
|
| 939 |
+
0.503,
|
| 940 |
+
0.883,
|
| 941 |
+
0.556
|
| 942 |
+
],
|
| 943 |
+
"angle": 0,
|
| 944 |
+
"content": "Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654."
|
| 945 |
+
},
|
| 946 |
+
{
|
| 947 |
+
"type": "text",
|
| 948 |
+
"bbox": [
|
| 949 |
+
0.508,
|
| 950 |
+
0.565,
|
| 951 |
+
0.883,
|
| 952 |
+
0.632
|
| 953 |
+
],
|
| 954 |
+
"angle": 0,
|
| 955 |
+
"content": "Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, and Luo Si. 2021. On the effectiveness of adapter-based tuning for pretrained language model adaptation. arXiv preprint arXiv:2106.03164."
|
| 956 |
+
},
|
| 957 |
+
{
|
| 958 |
+
"type": "text",
|
| 959 |
+
"bbox": [
|
| 960 |
+
0.508,
|
| 961 |
+
0.64,
|
| 962 |
+
0.884,
|
| 963 |
+
0.718
|
| 964 |
+
],
|
| 965 |
+
"angle": 0,
|
| 966 |
+
"content": "Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790-2799. PMLR."
|
| 967 |
+
},
|
| 968 |
+
{
|
| 969 |
+
"type": "text",
|
| 970 |
+
"bbox": [
|
| 971 |
+
0.508,
|
| 972 |
+
0.728,
|
| 973 |
+
0.883,
|
| 974 |
+
0.782
|
| 975 |
+
],
|
| 976 |
+
"angle": 0,
|
| 977 |
+
"content": "Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685."
|
| 978 |
+
},
|
| 979 |
+
{
|
| 980 |
+
"type": "text",
|
| 981 |
+
"bbox": [
|
| 982 |
+
0.508,
|
| 983 |
+
0.79,
|
| 984 |
+
0.884,
|
| 985 |
+
0.857
|
| 986 |
+
],
|
| 987 |
+
"angle": 0,
|
| 988 |
+
"content": "Hao Kang, Qingru Zhang, Souvik Kundu, Geonhwa Jeong, Zaoxing Liu, Tushar Krishna, and Tuo Zhao. 2024. Gear: An efficient kv cache compression recipe for near-lossless generative inference of llm. arXiv preprint arXiv:2403.05527."
|
| 989 |
+
},
|
| 990 |
+
{
|
| 991 |
+
"type": "text",
|
| 992 |
+
"bbox": [
|
| 993 |
+
0.508,
|
| 994 |
+
0.866,
|
| 995 |
+
0.884,
|
| 996 |
+
0.919
|
| 997 |
+
],
|
| 998 |
+
"angle": 0,
|
| 999 |
+
"content": "Dawid Jan Kopiczko, Tijmen Blankevoort, and Yuki M Asano. 2024. ELoRA: Efficient low-rank adaptation with random matrices. In The Twelfth International Conference on Learning Representations."
|
| 1000 |
+
},
|
| 1001 |
+
{
|
| 1002 |
+
"type": "page_number",
|
| 1003 |
+
"bbox": [
|
| 1004 |
+
0.487,
|
| 1005 |
+
0.929,
|
| 1006 |
+
0.516,
|
| 1007 |
+
0.941
|
| 1008 |
+
],
|
| 1009 |
+
"angle": 0,
|
| 1010 |
+
"content": "165"
|
| 1011 |
+
}
|
| 1012 |
+
],
|
| 1013 |
+
[
|
| 1014 |
+
{
|
| 1015 |
+
"type": "text",
|
| 1016 |
+
"bbox": [
|
| 1017 |
+
0.114,
|
| 1018 |
+
0.086,
|
| 1019 |
+
0.488,
|
| 1020 |
+
0.152
|
| 1021 |
+
],
|
| 1022 |
+
"angle": 0,
|
| 1023 |
+
"content": "Souvik Kundu, Mahdi Nazemi, Peter A Beerel, and Massoud Pedram. 2021. Dnr: A tunable robust pruning framework through dynamic network rewiring of dnns. In Proceedings of the 26th Asia and South Pacific Design Automation Conference, pages 344-350."
|
| 1024 |
+
},
|
| 1025 |
+
{
|
| 1026 |
+
"type": "text",
|
| 1027 |
+
"bbox": [
|
| 1028 |
+
0.114,
|
| 1029 |
+
0.162,
|
| 1030 |
+
0.49,
|
| 1031 |
+
0.214
|
| 1032 |
+
],
|
| 1033 |
+
"angle": 0,
|
| 1034 |
+
"content": "Souvik Kundu, Sharath Sridhar Nittur, Maciej Szankin, and Sairam Sundaresan. 2024. Sensi-bert: Towards sensitivity driven fine-tuning for parameter-efficient bert. ICASSP."
|
| 1035 |
+
},
|
| 1036 |
+
{
|
| 1037 |
+
"type": "text",
|
| 1038 |
+
"bbox": [
|
| 1039 |
+
0.114,
|
| 1040 |
+
0.224,
|
| 1041 |
+
0.488,
|
| 1042 |
+
0.302
|
| 1043 |
+
],
|
| 1044 |
+
"angle": 0,
|
| 1045 |
+
"content": "Souvik Kundu, Shikai Wang, Qirui Sun, Peter A Beerel, and Massoud Pedram. 2022. Bmpq: bit-gradient sensitivity-driven mixed-precision quantization of dnns from scratch. In 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 588-591. IEEE."
|
| 1046 |
+
},
|
| 1047 |
+
{
|
| 1048 |
+
"type": "text",
|
| 1049 |
+
"bbox": [
|
| 1050 |
+
0.114,
|
| 1051 |
+
0.312,
|
| 1052 |
+
0.488,
|
| 1053 |
+
0.352
|
| 1054 |
+
],
|
| 1055 |
+
"angle": 0,
|
| 1056 |
+
"content": "Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691."
|
| 1057 |
+
},
|
| 1058 |
+
{
|
| 1059 |
+
"type": "text",
|
| 1060 |
+
"bbox": [
|
| 1061 |
+
0.114,
|
| 1062 |
+
0.362,
|
| 1063 |
+
0.488,
|
| 1064 |
+
0.44
|
| 1065 |
+
],
|
| 1066 |
+
"angle": 0,
|
| 1067 |
+
"content": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461."
|
| 1068 |
+
},
|
| 1069 |
+
{
|
| 1070 |
+
"type": "text",
|
| 1071 |
+
"bbox": [
|
| 1072 |
+
0.114,
|
| 1073 |
+
0.45,
|
| 1074 |
+
0.488,
|
| 1075 |
+
0.489
|
| 1076 |
+
],
|
| 1077 |
+
"angle": 0,
|
| 1078 |
+
"content": "Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190."
|
| 1079 |
+
},
|
| 1080 |
+
{
|
| 1081 |
+
"type": "text",
|
| 1082 |
+
"bbox": [
|
| 1083 |
+
0.114,
|
| 1084 |
+
0.5,
|
| 1085 |
+
0.488,
|
| 1086 |
+
0.564
|
| 1087 |
+
],
|
| 1088 |
+
"angle": 0,
|
| 1089 |
+
"content": "Yixiao Li, Yifan Yu, Qingru Zhang, Chen Liang, Pengcheng He, Weizhu Chen, and Tuo Zhao. 2023. Losparse: Structured compression of large language models based on low-rank and sparse approximation. arXiv preprint arXiv:2306.11222."
|
| 1090 |
+
},
|
| 1091 |
+
{
|
| 1092 |
+
"type": "text",
|
| 1093 |
+
"bbox": [
|
| 1094 |
+
0.114,
|
| 1095 |
+
0.575,
|
| 1096 |
+
0.488,
|
| 1097 |
+
0.64
|
| 1098 |
+
],
|
| 1099 |
+
"angle": 0,
|
| 1100 |
+
"content": "Sourab Mangrulkar, Sylvain Gugger, Lysandre Debut, Younes Belkada, Sayak Paul, and Benjamin Bossan. 2022. Peft: State-of-the-art parameter-efficient fine-tuning methods. https://github.com/huggingface/peft."
|
| 1101 |
+
},
|
| 1102 |
+
{
|
| 1103 |
+
"type": "text",
|
| 1104 |
+
"bbox": [
|
| 1105 |
+
0.114,
|
| 1106 |
+
0.65,
|
| 1107 |
+
0.488,
|
| 1108 |
+
0.716
|
| 1109 |
+
],
|
| 1110 |
+
"angle": 0,
|
| 1111 |
+
"content": "Pavlo Molchanov, Arun Mallya, Stephen Tyree, Iuri Frosio, and Jan Kautz. 2019. Importance estimation for neural network pruning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11264-11272."
|
| 1112 |
+
},
|
| 1113 |
+
{
|
| 1114 |
+
"type": "text",
|
| 1115 |
+
"bbox": [
|
| 1116 |
+
0.114,
|
| 1117 |
+
0.725,
|
| 1118 |
+
0.488,
|
| 1119 |
+
0.803
|
| 1120 |
+
],
|
| 1121 |
+
"angle": 0,
|
| 1122 |
+
"content": "Sharath Nittur Sridhar, Souvik Kundu, Sairam Sundaresan, Maciej Szankin, and Anthony Sarah. 2023. Instatute: Instantaneous neural architecture search during fine-tuning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1523-1527."
|
| 1123 |
+
},
|
| 1124 |
+
{
|
| 1125 |
+
"type": "text",
|
| 1126 |
+
"bbox": [
|
| 1127 |
+
0.114,
|
| 1128 |
+
0.814,
|
| 1129 |
+
0.488,
|
| 1130 |
+
0.918
|
| 1131 |
+
],
|
| 1132 |
+
"angle": 0,
|
| 1133 |
+
"content": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan,"
|
| 1134 |
+
},
|
| 1135 |
+
{
|
| 1136 |
+
"type": "text",
|
| 1137 |
+
"bbox": [
|
| 1138 |
+
0.508,
|
| 1139 |
+
0.086,
|
| 1140 |
+
0.883,
|
| 1141 |
+
0.268
|
| 1142 |
+
],
|
| 1143 |
+
"angle": 0,
|
| 1144 |
+
"content": "Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models."
|
| 1145 |
+
},
|
| 1146 |
+
{
|
| 1147 |
+
"type": "text",
|
| 1148 |
+
"bbox": [
|
| 1149 |
+
0.508,
|
| 1150 |
+
0.279,
|
| 1151 |
+
0.883,
|
| 1152 |
+
0.343
|
| 1153 |
+
],
|
| 1154 |
+
"angle": 0,
|
| 1155 |
+
"content": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461."
|
| 1156 |
+
},
|
| 1157 |
+
{
|
| 1158 |
+
"type": "text",
|
| 1159 |
+
"bbox": [
|
| 1160 |
+
0.508,
|
| 1161 |
+
0.355,
|
| 1162 |
+
0.883,
|
| 1163 |
+
0.498
|
| 1164 |
+
],
|
| 1165 |
+
"angle": 0,
|
| 1166 |
+
"content": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics."
|
| 1167 |
+
},
|
| 1168 |
+
{
|
| 1169 |
+
"type": "text",
|
| 1170 |
+
"bbox": [
|
| 1171 |
+
0.508,
|
| 1172 |
+
0.509,
|
| 1173 |
+
0.883,
|
| 1174 |
+
0.56
|
| 1175 |
+
],
|
| 1176 |
+
"angle": 0,
|
| 1177 |
+
"content": "Lu Yin, Ajay Jaiswal, Shiwei Liu, Souvik Kundu, and Zhangyang Wang. 2024. Pruning small pre-trained weights irreversibly and monotonically impairs \"difficult\" downstream tasks in llms."
|
| 1178 |
+
},
|
| 1179 |
+
{
|
| 1180 |
+
"type": "text",
|
| 1181 |
+
"bbox": [
|
| 1182 |
+
0.508,
|
| 1183 |
+
0.572,
|
| 1184 |
+
0.883,
|
| 1185 |
+
0.637
|
| 1186 |
+
],
|
| 1187 |
+
"angle": 0,
|
| 1188 |
+
"content": "Qingru Zhang, Minshuo Chen, Alexander Bukharin, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. 2023. Adaptive budget allocation for parameter-efficient fine-tuning. In The Eleventh International Conference on Learning Representations."
|
| 1189 |
+
},
|
| 1190 |
+
{
|
| 1191 |
+
"type": "text",
|
| 1192 |
+
"bbox": [
|
| 1193 |
+
0.508,
|
| 1194 |
+
0.647,
|
| 1195 |
+
0.883,
|
| 1196 |
+
0.725
|
| 1197 |
+
],
|
| 1198 |
+
"angle": 0,
|
| 1199 |
+
"content": "Qingru Zhang, Simiao Zuo, Chen Liang, Alexander Bukharin, Pengcheng He, Weizhu Chen, and Tuo Zhao. 2022. Platon: Pruning large transformer models with upper confidence bound of weight importance. In International Conference on Machine Learning, pages 26809-26823. PMLR."
|
| 1200 |
+
},
|
| 1201 |
+
{
|
| 1202 |
+
"type": "page_number",
|
| 1203 |
+
"bbox": [
|
| 1204 |
+
0.488,
|
| 1205 |
+
0.929,
|
| 1206 |
+
0.516,
|
| 1207 |
+
0.94
|
| 1208 |
+
],
|
| 1209 |
+
"angle": 0,
|
| 1210 |
+
"content": "166"
|
| 1211 |
+
}
|
| 1212 |
+
],
|
| 1213 |
+
[
|
| 1214 |
+
{
|
| 1215 |
+
"type": "title",
|
| 1216 |
+
"bbox": [
|
| 1217 |
+
0.115,
|
| 1218 |
+
0.085,
|
| 1219 |
+
0.239,
|
| 1220 |
+
0.101
|
| 1221 |
+
],
|
| 1222 |
+
"angle": 0,
|
| 1223 |
+
"content": "A Appendix"
|
| 1224 |
+
},
|
| 1225 |
+
{
|
| 1226 |
+
"type": "title",
|
| 1227 |
+
"bbox": [
|
| 1228 |
+
0.115,
|
| 1229 |
+
0.11,
|
| 1230 |
+
0.226,
|
| 1231 |
+
0.123
|
| 1232 |
+
],
|
| 1233 |
+
"angle": 0,
|
| 1234 |
+
"content": "A.1 Dataset"
|
| 1235 |
+
},
|
| 1236 |
+
{
|
| 1237 |
+
"type": "text",
|
| 1238 |
+
"bbox": [
|
| 1239 |
+
0.114,
|
| 1240 |
+
0.131,
|
| 1241 |
+
0.488,
|
| 1242 |
+
0.21
|
| 1243 |
+
],
|
| 1244 |
+
"angle": 0,
|
| 1245 |
+
"content": "The details of train/test/dev splits and the evaluation metric of the GLUE (Wang et al., 2018) dataset are reported in Table 6. We use the Huggingface Transformers library (Wolf et al., 2020) to source all the datasets."
|
| 1246 |
+
},
|
| 1247 |
+
{
|
| 1248 |
+
"type": "text",
|
| 1249 |
+
"bbox": [
|
| 1250 |
+
0.114,
|
| 1251 |
+
0.22,
|
| 1252 |
+
0.487,
|
| 1253 |
+
0.304
|
| 1254 |
+
],
|
| 1255 |
+
"angle": 0,
|
| 1256 |
+
"content": "Table 6: Statistics of the GLUE benchmark datasets.\"Mcc\", \"Acc\", \"F1\" and \"Pear\" represent Matthews correlation coefficient, accuracy, the F1 score and the Pearson correlation coefficient respectively. And \"Acc\" for the MNLI dataset contains the accuracy for the matched and mismatched subset of the datasets."
|
| 1257 |
+
},
|
| 1258 |
+
{
|
| 1259 |
+
"type": "table",
|
| 1260 |
+
"bbox": [
|
| 1261 |
+
0.189,
|
| 1262 |
+
0.306,
|
| 1263 |
+
0.413,
|
| 1264 |
+
0.386
|
| 1265 |
+
],
|
| 1266 |
+
"angle": 0,
|
| 1267 |
+
"content": "<table><tr><td>Dataset</td><td>#Train</td><td>#Valid</td><td>#Test</td><td>Metric</td></tr><tr><td>CoLA</td><td>8.5k</td><td>1,043</td><td>1,063</td><td>Mcc</td></tr><tr><td>SST-2</td><td>67k</td><td>872</td><td>1.8k</td><td>Acc</td></tr><tr><td>MRPC</td><td>3.7k</td><td>408</td><td>1.7k</td><td>Acc</td></tr><tr><td>QQP</td><td>364k</td><td>40.4k</td><td>391k</td><td>Acc/F1</td></tr><tr><td>STS-B</td><td>5.7k</td><td>1.5k</td><td>1.4k</td><td>Pear</td></tr><tr><td>MNLI</td><td>393k</td><td>9.8k/9.8k</td><td>9.8k/9.8k</td><td>Acc</td></tr><tr><td>QNLI</td><td>105k</td><td>5.5k</td><td>5.5k</td><td>Acc</td></tr><tr><td>RTE</td><td>2.5k</td><td>277</td><td>3k</td><td>Acc</td></tr></table>"
|
| 1268 |
+
},
|
| 1269 |
+
{
|
| 1270 |
+
"type": "title",
|
| 1271 |
+
"bbox": [
|
| 1272 |
+
0.115,
|
| 1273 |
+
0.406,
|
| 1274 |
+
0.407,
|
| 1275 |
+
0.422
|
| 1276 |
+
],
|
| 1277 |
+
"angle": 0,
|
| 1278 |
+
"content": "A.2 Hyperparameter configuration"
|
| 1279 |
+
},
|
| 1280 |
+
{
|
| 1281 |
+
"type": "text",
|
| 1282 |
+
"bbox": [
|
| 1283 |
+
0.113,
|
| 1284 |
+
0.427,
|
| 1285 |
+
0.489,
|
| 1286 |
+
0.553
|
| 1287 |
+
],
|
| 1288 |
+
"angle": 0,
|
| 1289 |
+
"content": "Table 7 shows the main hyper-parameter set up in this paper. Besides them, we use the same optimizer, warmup Ratio, and LR schedule as Hu et al. (2021). We use NVIDIA RTX A6000 (maximum GPU memory=49140MB) to measure the training runtime. For all experiments, we run 5 times using different random seeds and report the average results."
|
| 1290 |
+
},
|
| 1291 |
+
{
|
| 1292 |
+
"type": "table_caption",
|
| 1293 |
+
"bbox": [
|
| 1294 |
+
0.114,
|
| 1295 |
+
0.565,
|
| 1296 |
+
0.487,
|
| 1297 |
+
0.592
|
| 1298 |
+
],
|
| 1299 |
+
"angle": 0,
|
| 1300 |
+
"content": "Table 7: Hyperparameter setup for all eight datasets in GLUE benchmark"
|
| 1301 |
+
},
|
| 1302 |
+
{
|
| 1303 |
+
"type": "table",
|
| 1304 |
+
"bbox": [
|
| 1305 |
+
0.122,
|
| 1306 |
+
0.599,
|
| 1307 |
+
0.477,
|
| 1308 |
+
0.688
|
| 1309 |
+
],
|
| 1310 |
+
"angle": 0,
|
| 1311 |
+
"content": "<table><tr><td>Hyperparameter</td><td>CoLA</td><td>SST-2</td><td>MRPC</td><td>QNLI</td><td>STS-B</td><td>RTE</td><td>MNLI</td><td>QQP</td></tr><tr><td># epochs</td><td>20</td><td>10</td><td>20</td><td>10</td><td>20</td><td>20</td><td>10</td><td>10</td></tr><tr><td>Batch size</td><td></td><td></td><td></td><td>64</td><td></td><td></td><td></td><td></td></tr><tr><td>Max Seq. Len.</td><td></td><td></td><td></td><td>256</td><td></td><td></td><td></td><td></td></tr><tr><td>Clf. Lr.*</td><td>4E-2</td><td>4E-3</td><td>8E-2</td><td>4E-3</td><td>2E-2</td><td>4E-2</td><td>4E-3</td><td>4E-3</td></tr><tr><td>Learning rate</td><td>1E-2</td><td>4E-3</td><td>1E-2</td><td>1E-3</td><td>2E-3</td><td>1E-3</td><td>1E-3</td><td>4E-3</td></tr><tr><td>ti(epoch)</td><td></td><td></td><td></td><td>1</td><td></td><td></td><td></td><td></td></tr><tr><td>tf(epoch)</td><td>14</td><td>7</td><td>14</td><td>7</td><td>14</td><td>14</td><td>7</td><td>7</td></tr><tr><td>β1</td><td></td><td></td><td></td><td>0.85</td><td></td><td></td><td></td><td></td></tr><tr><td>β2</td><td></td><td></td><td></td><td>0.95</td><td></td><td></td><td></td><td></td></tr></table>"
|
| 1312 |
+
},
|
| 1313 |
+
{
|
| 1314 |
+
"type": "table_footnote",
|
| 1315 |
+
"bbox": [
|
| 1316 |
+
0.134,
|
| 1317 |
+
0.689,
|
| 1318 |
+
0.477,
|
| 1319 |
+
0.713
|
| 1320 |
+
],
|
| 1321 |
+
"angle": 0,
|
| 1322 |
+
"content": "* \"Clf. Lr.* means the learning rate for the classification head."
|
| 1323 |
+
},
|
| 1324 |
+
{
|
| 1325 |
+
"type": "title",
|
| 1326 |
+
"bbox": [
|
| 1327 |
+
0.115,
|
| 1328 |
+
0.729,
|
| 1329 |
+
0.455,
|
| 1330 |
+
0.776
|
| 1331 |
+
],
|
| 1332 |
+
"angle": 0,
|
| 1333 |
+
"content": "A.3 Ablation study on if freezing the two projection matrices in the same layer simultaneously"
|
| 1334 |
+
},
|
| 1335 |
+
{
|
| 1336 |
+
"type": "text",
|
| 1337 |
+
"bbox": [
|
| 1338 |
+
0.113,
|
| 1339 |
+
0.782,
|
| 1340 |
+
0.489,
|
| 1341 |
+
0.877
|
| 1342 |
+
],
|
| 1343 |
+
"angle": 0,
|
| 1344 |
+
"content": "We study the value of freezing both projection matrices in the same layer simultaneously. The results, depicted in Table 8, demonstrate that freezing the projection matrices separately yields consistently superior performance compared to freezing them simultaneously."
|
| 1345 |
+
},
|
| 1346 |
+
{
|
| 1347 |
+
"type": "table_caption",
|
| 1348 |
+
"bbox": [
|
| 1349 |
+
0.508,
|
| 1350 |
+
0.423,
|
| 1351 |
+
0.882,
|
| 1352 |
+
0.464
|
| 1353 |
+
],
|
| 1354 |
+
"angle": 0,
|
| 1355 |
+
"content": "Table 8: Ablation study on whether freezing the two projection matrices in the same layer simultaneously or independently."
|
| 1356 |
+
},
|
| 1357 |
+
{
|
| 1358 |
+
"type": "table",
|
| 1359 |
+
"bbox": [
|
| 1360 |
+
0.56,
|
| 1361 |
+
0.466,
|
| 1362 |
+
0.83,
|
| 1363 |
+
0.577
|
| 1364 |
+
],
|
| 1365 |
+
"angle": 0,
|
| 1366 |
+
"content": "<table><tr><td></td><td>Simultaneously</td><td>Independently</td></tr><tr><td>CoLA</td><td>67.90</td><td>72.01</td></tr><tr><td>SST-2</td><td>95.87</td><td>96.22</td></tr><tr><td>MRPC</td><td>91.67</td><td>91.91</td></tr><tr><td>STS-B</td><td>91.64</td><td>91.84</td></tr><tr><td>QNLI</td><td>94.20</td><td>94.42</td></tr><tr><td>RTE</td><td>87.00</td><td>88.09</td></tr><tr><td>Avg.</td><td>88.05</td><td>89.02</td></tr><tr><td>#Params</td><td>0.146M</td><td>0.138M</td></tr></table>"
|
| 1367 |
+
},
|
| 1368 |
+
{
|
| 1369 |
+
"type": "page_number",
|
| 1370 |
+
"bbox": [
|
| 1371 |
+
0.488,
|
| 1372 |
+
0.929,
|
| 1373 |
+
0.516,
|
| 1374 |
+
0.941
|
| 1375 |
+
],
|
| 1376 |
+
"angle": 0,
|
| 1377 |
+
"content": "167"
|
| 1378 |
+
}
|
| 1379 |
+
]
|
| 1380 |
+
]
|
2024/AFLoRA_ Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models/cedf799c-cbb4-4ca7-91c5-53457a9975a6_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ee2ba2550b4ae87ab4d3367d36b506ca03ef1a4c258da7c58d9996bb0116db0d
|
| 3 |
+
size 778865
|
2024/AFLoRA_ Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models/full.md
ADDED
|
@@ -0,0 +1,234 @@
|
| 1 |
+
# AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models
|
| 2 |
+
|
| 3 |
+
Zeyu Liu $^{†,1}$ Souvik Kundu $^{†,2}$ Anni Li $^{1}$ Junrui Wan $^{1}$ Lianghao Jiang $^{1}$ Peter A. Beerel $^{1}$
|
| 4 |
+
|
| 5 |
+
<sup>1</sup> University of Southern California, USA <sup>2</sup> Intel Labs, San Diego, USA
|
| 6 |
+
|
| 7 |
+
{liuzeyu, annili, junruiwa, ljiang40, pabeerel}@usc.edu souvikk.kundu@intel.com
|
| 8 |
+
†Equally contributing authors
|
| 9 |
+
|
| 10 |
+
# Abstract
|
| 11 |
+
|
| 12 |
+
We present a novel parameter-efficient finetuning (PEFT) method, dubbed adaptive freezing of low rank adaptation (AFLoRA). Specifically, for each pre-trained frozen weight tensor, we add a parallel path of trainable low-rank matrices, namely a down-projection and an up-projection matrix, each of which is followed by a feature transformation vector. Based on a novel freezing score, we then incrementally freeze these projection matrices during fine-tuning to reduce the computation and alleviate over-fitting. Our experimental results demonstrate that we can achieve state-of-the-art performance with an average improvement of up to $1.09\%$ as evaluated on the GLUE and GSM8k benchmarks while yielding up to $9.5 \times$ fewer average trainable parameters. When compared in terms of runtime, AFLoRA can yield up to $1.86 \times$ improvement as opposed to similar PEFT alternatives. Besides the practical utility of our approach, we provide insights on the trainability requirements of LoRA paths at different modules and the freezing schedule for the different projection matrices. Code is released at: https://github.com/zeyuliu1037/AFLoRA/tree/main.
|
| 13 |
+
|
| 14 |
+
# 1 Introduction
|
| 15 |
+
|
| 16 |
+
Pre-trained language models such as BERT (Devlin et al., 2018), GPT-3 (Brown et al., 2020), and LLaMA2 (Touvron et al., 2023) have demonstrated commendable performance on various natural language processing (NLP) tasks (Kang et al., 2024). However, their zero-shot performance on many downstream tasks often falls short of expectations. One possible solution is full fine-tuning (FFT) of the model on the downstream dataset. However, the large model parameter size makes this process prohibitively costly.
|
| 17 |
+
|
| 18 |
+
To address this challenge, various parameter-efficient fine-tuning (PEFT) methods including low
|
| 19 |
+
|
| 20 |
+

|
| 21 |
+
Figure 1: Schematic comparison of LoRA (Hu et al., 2021), ELoRA (Kopiczko et al., 2024), and AFLoRA and their associated advantages and disadvantages in terms of various metrics. $r_{L}$ and $r_{V}$ represent the rank of the low-rank path used in the LoRA and ELoRA methods, respectively. FT and KU refer to fine-tuned weights and the Kaiming uniform initialization, respectively.
|
| 22 |
+
|
| 23 |
+
rank adaptation (LoRA) (Hu et al., 2021), adapter tuning (He et al., 2021), and prompt tuning (Lester et al., 2021) have been proposed. These methods add parameters to the trained model for fine-tuning, bypassing the need to adjust the weights of the pre-trained model. In particular, LoRA (Hu et al., 2021) and its variants (Zhang et al., 2023) add a trainable low-rank path consisting of down-projection and up-projection matrices to the model, inspired by (Aghajanyan et al., 2020), which showed that such low-rank paths can effectively approximate the trained weight tensors. ELoRA (Kopiczko et al., 2024) extends LoRA by adding trainable feature transformation vectors to the output of each projection matrix. They showed that SoTA accuracy can be achieved with the projection matrices frozen after random initialization while keeping the two feature transformation vectors trainable. This approach significantly reduces the number of trainable parameters. However, compared to LoRA, ELoRA incurs higher computation costs due to the higher rank needed for the frozen projection matrices. Fig. 1 illustrates LoRA and ELoRA, contrasting them with our proposed method AFLoRA.
|
| 24 |
+
|
| 25 |
+
Our contributions. To reduce the trainable parameter count and computation costs of fine-tuning, we present Adaptive Freezing of Low Rank Adaptation (AFLoRA). More specifically, we first investigate the rank needed for the frozen LoRA path in ELoRA and observe that reducing the rank of the frozen projection matrices (PM) causes a drop in fine-tuning performance.
|
| 26 |
+
|
| 27 |
+
Based on this insight, we present AFLoRA, which starts with a low-rank trainable path that includes projection matrices and feature transformation vectors and trains the path for some epochs. We then gradually freeze the projection matrices based on a novel freezing score that acts as a proxy for the trainability requirement of a LoRA tensor. In this way, we not only help alleviate the over-fitting issue but also improve the computation efficiency. To evaluate the benefit of AFLoRA, we perform extensive evaluations on multiple NLP benchmark datasets and compare accuracy, FLOPs, and training time with several existing alternatives. Specifically, compared to ELoRA, we yield $1.86 \times$ and $2.96 \times$ improvement in runtime and FLOPs, respectively, while remaining comparable to LoRA on these two metrics. Compared to LoRA, we require $9.5 \times$ fewer average trainable parameters to yield similar or improved performance.
|
| 28 |
+
|
| 29 |
+
# 2 Related Works
|
| 30 |
+
|
| 31 |
+
PEFT (Hu et al., 2021; Kundu et al., 2024; Sridhar et al., 2023; Yin et al., 2024) refers to a collection of methodologies that focus on allowing a small number of parameters to fine-tune to yield good performance on a downstream task. For example, prefix-tuning (Li and Liang, 2021) adds trainable prefix tokens to a model's input or hidden layers while adapter-tuning (Houlsby et al., 2019) inserts small neural network layers, known as adapters, within each layer of a pre-trained model. LoRA (Hu et al., 2021), on the other hand, adds low-rank tensors in parallel to the frozen pre-trained weights. AdaLoRA (Zhang et al., 2023) allows the rank of the LoRA path to be chosen in an adaptive way. Other variants like SoRA (Ding et al., 2023) and LoSparse (Li et al., 2023) have investigated the impact of sparsity in and alongside the low-rank path, respectively. Recently, efficient low-rank adaptation (ELoRA) (Kopiczko et al., 2024) has proposed to keep the LoRA path frozen, while introducing two trainable feature transformation
|
| 32 |
+
|
| 33 |
+

|
| 34 |
+
Figure 2: Performance of ELoRA with two different ranks of the frozen projection matrices.
|
| 35 |
+
|
| 36 |
+
vectors. Thus, this work only studies an extreme scenario of keeping the LoRA path frozen, and, to the best of our knowledge, no work has investigated the trainability requirement of the projection matrices.
|
| 37 |
+
|
| 38 |
+
# 3 Motivational Case Study
|
| 39 |
+
|
| 40 |
+
To understand the high-rank requirement for the frozen projection matrices in ELoRA, we conduct two sets of fine-tuning on SST-2 and MRPC, with ELoRA having a rank $(r)$ of 1024 and 4, respectively. As we can see in Fig. 2, the model with $r = 4$ yields poorer performance, highlighting the need for a high rank for the frozen tensors. This high rank causes ELoRA to potentially be FLOPs inefficient.
|
| 41 |
+
|
| 42 |
+
# 4 AFLoRA: Methodology
|
| 43 |
+
|
| 44 |
+
Module Structure. Inspired by the framework proposed by Kopiczko et al. (2024), we design the LoRA module to encompass four components, namely, the down-projection linear layer $(lora_{A})$, the up-projection linear layer $(lora_{B})$, and two feature transformation vectors ($s_{d}$ and $s_b$) placed before and after $lora_{B}$. However, unlike (Kopiczko et al., 2024), we keep both the projection matrices ($lora_{A}$ and $lora_{B}$) and the vectors trainable at the beginning, and keep the rank very low. The module processes a given input $X$ through these components to produce an output $Y$. The complete operation for a layer $l$ can be described as follows:

$$
Y = W_0^l X + \Lambda_b^l B^l \Lambda_d^l A^l X \tag{1}
$$
Here, $A^l$ and $B^l$ are the trainable LoRA tensors of $lora_A^l$ and $lora_B^l$, respectively. $\Lambda_d$ and $\Lambda_b$ are the vectors of $s_d$ and $s_b$, respectively. $W_0^l$ represents the frozen pre-trained weights. We use Kaiming Uniform initialization for $A^l$ and $B^l$, and follow (Kopiczko et al., 2024) to initialize the vectors.
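
To make the module structure concrete, below is a minimal PyTorch-style sketch of the layer in Eq. 1. It is illustrative only and not the released implementation: the class name `AFLoRALinear`, the rank default, and the zero initialization of $s_b$ (so the low-rank update starts at zero) are our assumptions.

```python
import math
import torch
import torch.nn as nn

class AFLoRALinear(nn.Module):
    """Sketch of the AFLoRA module in Eq. 1: Y = W_0 X + Lambda_b B Lambda_d A X."""

    def __init__(self, base_linear: nn.Linear, r: int = 4):
        super().__init__()
        self.base = base_linear                        # frozen pre-trained weights W_0^l
        for p in self.base.parameters():
            p.requires_grad_(False)
        d_in, d_out = base_linear.in_features, base_linear.out_features
        # low-rank projection matrices A^l (down) and B^l (up), trainable at the beginning
        self.lora_A = nn.Parameter(torch.empty(r, d_in))
        self.lora_B = nn.Parameter(torch.empty(d_out, r))
        nn.init.kaiming_uniform_(self.lora_A, a=math.sqrt(5))
        nn.init.kaiming_uniform_(self.lora_B, a=math.sqrt(5))
        # feature transformation vectors s_d (Lambda_d) and s_b (Lambda_b); trainable throughout
        self.s_d = nn.Parameter(torch.ones(r))
        self.s_b = nn.Parameter(torch.zeros(d_out))    # assumed zero init so the update starts at zero

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = (x @ self.lora_A.T) * self.s_d             # Lambda_d^l A^l X
        h = (h @ self.lora_B.T) * self.s_b             # Lambda_b^l B^l (...)
        return self.base(x) + h                        # W_0^l X + low-rank update
```

In this sketch, adaptively freezing a projection matrix later in training simply amounts to calling `requires_grad_(False)` on `lora_A` or `lora_B`, while the vectors remain trainable.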
Table 1: Comparison of different LoRA variants with DeBERTaV3 on the GLUE benchmark.
<table><tr><td>Method</td><td>#Params. ↓</td><td>CoLA ↑</td><td>SST-2 ↑</td><td>MRPC ↑</td><td>QNLI ↑</td><td>STS-B ↑</td><td>RTE ↑</td><td>MNLI ↑</td><td>QQP ↑</td><td>Avg. ↑</td></tr><tr><td>FFT</td><td>184M</td><td>69.21</td><td>95.64</td><td>89.22</td><td>93.78</td><td>91.59</td><td>82.49</td><td>89.98/89.95</td><td>92.05/89.31</td><td>87.82</td></tr><tr><td>LoRA (r = 8)</td><td>1.33M</td><td>69.73</td><td>95.57</td><td>89.71</td><td>93.76</td><td>91.86</td><td>85.32</td><td>90.47/90.46</td><td>91.95/89.26</td><td>88.38</td></tr><tr><td>AdaLoRA</td><td>1.27M</td><td>70.86</td><td>95.95</td><td>90.22</td><td>94.28</td><td>91.39</td><td>87.36</td><td>90.27/90.30</td><td>92.13/88.41</td><td>88.83</td></tr><tr><td>SoRA (r = 4)</td><td>0.47M</td><td>71.05</td><td>95.57</td><td>90.20</td><td>93.92</td><td>91.76</td><td>86.04</td><td>90.38/90.43</td><td>92.06/89.44</td><td>88.71</td></tr><tr><td>ELoRA*</td><td>0.16M</td><td>70.74</td><td>95.18</td><td>90.93</td><td>93.58</td><td>91.08</td><td>87.36</td><td>90.11/90.22</td><td>90.69/87.63</td><td>88.53</td></tr><tr><td>AFLoRA (r = 4)</td><td>0.14M**</td><td>72.01</td><td>96.22</td><td>91.91</td><td>94.42</td><td>91.84</td><td>88.09</td><td>89.88/90.17</td><td>90.81/87.77</td><td>89.23</td></tr></table>
* The original paper reports results with RoBERTa; we generated the results with our implementation on DeBERTaV3 with a rank of 1024.

** As the number of trainable parameters changes during training, we computed this by averaging over all training epochs and all datasets.
Adaptive Freezing. In the pruning literature (Han et al., 2015; Molchanov et al., 2019; Zhang et al., 2022; Yin et al., 2024; Kundu et al., 2021, 2022), sensitivity is gauged to reflect weight variability, necessitating consideration of both the weights' magnitudes and their gradients. Small weight values suggest minimal impact, while minor gradient values indicate stability. Taking inspiration from this idea, we introduce the concept of a "freezing score". However, unlike pruning, where both magnitude and gradient play a critical role in identifying insignificant weights, we leverage only the gradient as a proxy to compute the freezing score. This is because we assume that large-magnitude weights with negligible change have the same priority to be frozen as small-magnitude weights. This score quantifies the degree to which weights vary throughout the training process. Consequently, when the expected changes to the weights become negligible, we may consider them frozen, thereby saving computational resources and energy. The following equations describe the freezing score evaluation steps for a low-rank tensor $A^l$.

$$
I_{A^l} = \left| \nabla \mathcal{L}(\boldsymbol{\theta}) \right|, \quad \bar{I}_{A^l}^{(t)} = \beta_1 \bar{I}_{A^l}^{(t-1)} + (1 - \beta_1) I_{A^l}^{(t)} \tag{2}
$$

$$
U_{A^l}^{(t)} = \left| I_{A^l}^{(t)} - \bar{I}_{A^l}^{(t)} \right|, \quad \bar{U}_{A^l}^{(t)} = \beta_2 \bar{U}_{A^l}^{(t-1)} + (1 - \beta_2) U_{A^l}^{(t)} \tag{3}
$$

$$
s_{A^l}^{(t)} = \operatorname{mean}\left(\bar{I}_{A^l}^{(t)} \circ \bar{U}_{A^l}^{(t)}\right) \tag{4}
$$
Here, for each projection tensor at iteration $t$, we compute a smoothed gradient $(\overline{I}_{A^l}^{(t)})$ and an uncertainty tensor $(\overline{U}_{A^l}^{(t)})$, as shown in Eq. 2 and 3, respectively. We then evaluate the freezing score $s_{A^l}^{(t)}$ as the mean of the tensor generated via the Hadamard product ($\circ$) between $\overline{I}_{A^l}^{(t)}$ and $\overline{U}_{A^l}^{(t)}$.
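
The following is a small sketch of how the freezing score of Eqs. 2-4 could be maintained for one projection tensor. The function name and the `state` container are our assumptions; the default $\beta_1$ and $\beta_2$ follow the values listed in Table 7.

```python
import torch

def update_freezing_score(grad: torch.Tensor, state: dict,
                          beta1: float = 0.85, beta2: float = 0.95) -> float:
    """One update of Eqs. 2-4 for a single projection tensor (illustrative sketch)."""
    I_t = grad.abs()                                              # sensitivity proxy |grad| (Eq. 2)
    I_bar = beta1 * state.get("I_bar", torch.zeros_like(I_t)) + (1 - beta1) * I_t
    U_t = (I_t - I_bar).abs()                                     # uncertainty tensor (Eq. 3)
    U_bar = beta2 * state.get("U_bar", torch.zeros_like(U_t)) + (1 - beta2) * U_t
    state["I_bar"], state["U_bar"] = I_bar, U_bar                 # carry running averages to step t+1
    return (I_bar * U_bar).mean().item()                          # freezing score, Eq. 4
```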
To apply thresholding on the LoRA freezing scores, we use the cubic schedule of (Zhang et al., 2022). Specifically, we keep the projection matrices trainable for the initial $t_i$ training steps, and then progressively freeze them by calculating the freezing fraction $r(t)$ as shown in Eq. 5. Finally, all the projection matrices are frozen beyond $T - t_f$ steps. Note that at step $t$, for a computed freezing fraction $k$, we freeze the $k\%$ of projection matrices with the lowest freezing scores.

Figure 3: A comparison of various system performances between LoRA, ELoRA, and AFLoRA.


$$
r(t) = \begin{cases} 0 & 0 \leq t < t_i \\ 1 - \left(1 - \frac{t - t_i}{T - t_i - t_f}\right)^3 & t_i \leq t < T - t_f \\ 1 & \text{otherwise} \end{cases} \tag{5}
$$

where $t$ is the current step and $T$ is the total number of fine-tuning steps. We set $t_i$ to the number of steps corresponding to one epoch and set $t_f$ to 70% of the total training steps.
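
A short sketch of the cubic schedule in Eq. 5 and the freezing step it drives is given below; the helper names and the list of (parameter, score) pairs are hypothetical, not the authors' code. Since $t_f$ is 70% of the total steps, $r(t)$ reaches 1 at $T - t_f$, i.e., after the first 30% of the fine-tuning steps.

```python
def freezing_fraction(t: int, T: int, t_i: int, t_f: int) -> float:
    """Cubic freezing schedule r(t) of Eq. 5."""
    if t < t_i:
        return 0.0
    if t >= T - t_f:
        return 1.0
    return 1.0 - (1.0 - (t - t_i) / (T - t_i - t_f)) ** 3

def apply_adaptive_freezing(scores, t: int, T: int, t_i: int, t_f: int) -> None:
    """Freeze the r(t) fraction of projection matrices with the lowest freezing scores.

    `scores` is a list of (parameter, freezing_score) pairs collected with the
    freezing-score sketch above (hypothetical structure, illustration only)."""
    k = int(freezing_fraction(t, T, t_i, t_f) * len(scores))
    for param, _ in sorted(scores, key=lambda pair: pair[1])[:k]:
        param.requires_grad_(False)       # frozen tensors receive no further updates
```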
# 5 Experiments
Models & Datasets. We use the PEFT framework of (Mangrulkar et al., 2022) and evaluate the fine-tuning performance of DeBERTaV3-base (He et al., 2020) with our framework on the General Language Understanding Evaluation (GLUE) benchmark (Wang et al., 2018). The details of the hyperparameter settings for each dataset are listed in Appendix A.2.
Performance Comparison. We benchmark the performance of AFLoRA and present a comparison with LoRA and its variants. For ELoRA, we reproduce the results on our end, while the results for the other methods are sourced from (Ding et al., 2023). As shown in Table 1, AFLoRA achieves SoTA performance on the majority of datasets and on average, while requiring a similar number of and $9.5 \times$ fewer average trainable parameters compared to ELoRA and LoRA, respectively.
Runtime & FLOPs Comparison. Fig. 3 shows the comparison of the normalized average training runtime, normalized FLOPs, and normalized trainable parameters. For AFLoRA, we average
Table 2: Results on an auto-regressive complex reasoning task using an LLM.

<table><tr><td>Method</td><td>Model</td><td>Low-rank val.</td><td># Params.</td><td>GSM8k Acc (%)</td></tr><tr><td>LoRA</td><td>LLaMA-7B</td><td>32</td><td>56.1M</td><td>37.50</td></tr><tr><td>AFLoRA (Ours)</td><td>LLaMA-7B</td><td>32</td><td>17.8M</td><td>38.59</td></tr></table>

Table 3: Results on a summarization task using an LLM. We use ROUGE-1 (R1) and ROUGE-2 (R2) scores to measure the summarization quality.

<table><tr><td>Method</td><td>Model</td><td>Low-rank val.</td><td># Params.</td><td>CNN/DailyMail (R1/R2)</td></tr><tr><td>LoRA</td><td>BART-Large</td><td>16</td><td>8.65M</td><td>43.96/21.06</td></tr><tr><td>AFLoRA (Ours)</td><td>BART-Large</td><td>16</td><td>5.10M</td><td>44.31/21.32</td></tr></table>
the training time, FLOPs, and trainable parameters over six GLUE datasets (excluding the MNLI and QQP datasets). Note that for LoRA and ELoRA, the trainable parameters and FLOPs remain fixed for all the datasets; we compute their average runtime the same way as ours. Compared to ELoRA, we yield up to $1.86 \times$ and $2.96 \times$ runtime and FLOPs improvement, respectively, while remaining comparable with LoRA on these two metrics. Compared to LoRA, we yield a $9.5 \times$ parameter reduction while remaining comparable with ELoRA. These results clearly demonstrate AFLoRA as a PEFT method that can yield similar parameter efficiency as ELoRA while incurring no training overhead in FLOPs or time.
Results with Large Language Models (LLMs). We now demonstrate the AFLoRA fine-tuning performance with two popular LLM variants, namely LLaMA-7B (Touvron et al., 2023) and BART-Large (Lewis et al., 2019), on the GSM8k complex reasoning and CNN/DailyMail summarization tasks, respectively. As demonstrated in Table 2, on GSM8k, AFLoRA yields an accuracy improvement of $1.09\%$ while requiring $3.15 \times$ fewer trainable parameters compared to LoRA. On the CNN/DailyMail summarization task (Table 3), AFLoRA requires $1.69 \times$ fewer trainable parameters to reach similar or improved ROUGE scores.
# 6 Ablations and Discussions
We conducted our ablation studies on six GLUE benchmark datasets, omitting QQP and MNLI, the two most computationally demanding datasets.
Do we really need adaptive freezing? We conducted experiments with all the LoRA PMs frozen (same as ELoRA), all the LoRA PMs trainable, and with our adaptive training of LoRA PMs. We use $r = 4$ for the LoRA path
Table 4: Ablation study on the trainability impact of the projection matrices (PM) of the AFLoRA module. We keep the vectors trainable throughout for all.
<table><tr><td>PM</td><td>#Params.</td><td>CoLA</td><td>SST-2</td><td>MRPC</td><td>QNLI</td><td>STS-B</td><td>RTE</td><td>Avg.</td></tr><tr><td>Trainable</td><td>0.45M</td><td>70.15</td><td>95.99</td><td>92.4</td><td>94.16</td><td>89.90</td><td>88.45</td><td>88.51</td></tr><tr><td>Frozen</td><td>0.08M</td><td>70.36</td><td>94.95</td><td>89.22</td><td>93.61</td><td>91.17</td><td>85.92</td><td>87.54</td></tr><tr><td>AFLoRA (Ours)</td><td>0.14M</td><td>72.01</td><td>96.22</td><td>91.91</td><td>94.42</td><td>91.84</td><td>88.09</td><td>89.23</td></tr></table>

Figure 4: A comparison of performance outcomes utilizing three distinct freezing score methodologies.
Table 5: Ablation study on making the PMs for different layer-types trainable.
<table><tr><td>FFN</td><td>Attn</td><td>CoLA</td><td>SST-2</td><td>MRPC</td><td>QNLI</td><td>STS-B</td><td>RTE</td><td>Avg.</td></tr><tr><td>✓</td><td>✓</td><td>70.33</td><td>95.76</td><td>90.93</td><td>94.36</td><td>91.44</td><td>87.37</td><td>88.48</td></tr><tr><td></td><td></td><td>0.15M</td><td>0.19M</td><td>0.18M</td><td>0.19M</td><td>0.16M</td><td>0.17M</td><td>0.17M</td></tr><tr><td>X</td><td>✓</td><td>71.118</td><td>95.986</td><td>89.951</td><td>94.12</td><td>91.39</td><td>86.28</td><td>88.14</td></tr><tr><td></td><td></td><td>0.11M</td><td>0.13M</td><td>0.12M</td><td>0.13M</td><td>0.12M</td><td>0.12M</td><td>0.12M</td></tr><tr><td>✓</td><td>X</td><td>72.01</td><td>96.22</td><td>91.91</td><td>94.42</td><td>91.84</td><td>88.09</td><td>89.02</td></tr><tr><td></td><td></td><td>0.13M</td><td>0.18M</td><td>0.13M</td><td>0.13M</td><td>0.13M</td><td>0.13M</td><td>0.14M</td></tr></table>
for all. As we can see in Table 4, keeping the projection matrices trainable yields better average performance compared to keeping them frozen throughout. However, AFLoRA with adaptive freezing yields even better performance than keeping them trainable throughout, potentially highlighting its ability to regularize the fine-tuning against overfitting.
Do we need to keep the PMs trainable for all layer types? There are two major layer types: the FFN and the attention layers. We place the PMs in both, along with the feature transformation vectors. We then study the necessity of keeping the PMs trainable in these two layer types. Note that here we keep the vectors trainable throughout in all settings. As shown in Table 5, keeping the PMs trainable (and then adaptively freezing them) in the FFN yields better performance compared to the alternatives; note that in this setting we keep the PMs in the attention layers frozen to their random values. Interestingly, allowing all PMs to initially train and then adaptively freeze yields poorer performance than allowing this only in the FFN. This may hint that the FFN weights play a more important role in fine-tuning performance.
Ablation with sensitivity choices. Fig. 4 presents an ablation with freezing scores based on three different sensitivity choices, namely $|grad(p)|$ (adopted in AFLoRA), $|p * grad(p)|$, and $|grad(p)/p|$. On average, the freezing score adopted in AFLoRA consistently yields better accuracy than the other two.
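
For reference, the three sensitivity choices can be written as below; each tensor would replace $|grad(p)|$ as the raw signal $I$ fed into the smoothing of Eqs. 2-4. This is an illustrative sketch; the function name and the eps guard are our assumptions.

```python
import torch

def candidate_sensitivities(p: torch.Tensor, grad: torch.Tensor, eps: float = 1e-12) -> dict:
    """Element-wise sensitivity signals compared in the Fig. 4 ablation (sketch)."""
    return {
        "|grad(p)|": grad.abs(),                         # choice adopted in AFLoRA
        "|p * grad(p)|": (p * grad).abs(),
        "|grad(p)/p|": grad.abs() / (p.abs() + eps),     # eps guards against division by zero
    }
```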
Discussion on Freezing Trend. We use the RTE dataset as a case study to understand the freezing trend of the PMs across different layers. Specifically, we illustrate the number of iterations required before freezing each component in Fig. 5. Interestingly, as can be seen from the figure, the down-projection matrix parallel to the intermediate linear layer requires a longer training duration before being frozen, compared to the other PMs. This may potentially hint at a lower approximation ability of the intermediate layer compared to the second MLP layer in the FFN.
# 7 Conclusions
In this paper, we presented AFLoRA, an adaptive freezing scheme for LoRA adapters that allows near-optimal trainability of the LoRA projection matrices and freezes them, driven by a "freezing score", after a certain number of fine-tuning steps. Compared to LoRA, AFLoRA can reduce the trainable parameters by up to $9.5\times$ while yielding a $0.85\%$ average performance improvement as evaluated on the GLUE benchmark.
# 8 Limitation
In the ablation study with various freezing score metrics, we discovered that alternative scoring methods outperform ours on certain datasets, suggesting possible room for further research in refining the freezing scores, which could further improve performance with AFLoRA. Additionally, the integration of AFLoRA into an adaptive rank evaluation framework can potentially open a new direction for PEFT, which we consider future research.
# References
Armen Aghajanyan, Luke Zettlemoyer, and Sonal Gupta. 2020. Intrinsic dimensionality explains the effectiveness of language model fine-tuning. arXiv preprint arXiv:2012.13255.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot

Figure 5: Visualization of freezing iterations for each layer. 'out' and 'inter' refer to the second and the first MLP layer of the FFN, respectively. 'A' and 'B' represent the down-projection and up-projection matrix, respectively. The darker the color, the more iterations the matrix has to go through before freezing.
learners. Advances in neural information processing systems, 33:1877-1901.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Ning Ding, Xingtai Lv, Qiaosen Wang, Yulin Chen, Bowen Zhou, Zhiyuan Liu, and Maosong Sun. 2023. Sparse low-rank adaptation of pre-trained language models. arXiv preprint arXiv:2311.11696.
Song Han, Jeff Pool, John Tran, and William Dally. 2015. Learning both weights and connections for efficient neural network. Advances in neural information processing systems, 28.
Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654.
Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, and Luo Si. 2021. On the effectiveness of adapter-based tuning for pretrained language model adaptation. arXiv preprint arXiv:2106.03164.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for nlp. In International Conference on Machine Learning, pages 2790-2799. PMLR.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685.
Hao Kang, Qingru Zhang, Souvik Kundu, Geonhwa Jeong, Zaoxing Liu, Tushar Krishna, and Tuo Zhao. 2024. Gear: An efficient kv cache compression recipe for near-lossless generative inference of llm. arXiv preprint arXiv:2403.05527.
Dawid Jan Kopiczko, Tijmen Blankevoort, and Yuki M Asano. 2024. ELoRA: Efficient low-rank adaptation with random matrices. In The Twelfth International Conference on Learning Representations.
Souvik Kundu, Mahdi Nazemi, Peter A Beerel, and Massoud Pedram. 2021. Dnr: A tunable robust pruning framework through dynamic network rewiring of dnns. In Proceedings of the 26th Asia and South Pacific Design Automation Conference, pages 344-350.
Souvik Kundu, Sharath Sridhar Nittur, Maciej Szankin, and Sairam Sundaresan. 2024. Sensi-bert: Towards sensitivity driven fine-tuning for parameter-efficient bert. ICASSP.
Souvik Kundu, Shikai Wang, Qirui Sun, Peter A Beerel, and Massoud Pedram. 2022. Bmpq: bit-gradient sensitivity-driven mixed-precision quantization of dnns from scratch. In 2022 Design, Automation & Test in Europe Conference & Exhibition (DATE), pages 588-591. IEEE.
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021. The power of scale for parameter-efficient prompt tuning. arXiv preprint arXiv:2104.08691.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.
Xiang Lisa Li and Percy Liang. 2021. Prefix-tuning: Optimizing continuous prompts for generation. arXiv preprint arXiv:2101.00190.
Yixiao Li, Yifan Yu, Qingru Zhang, Chen Liang, Pengcheng He, Weizhu Chen, and Tuo Zhao. 2023. Losparse: Structured compression of large language models based on low-rank and sparse approximation. arXiv preprint arXiv:2306.11222.
Sourab Mangrulkar, Sylvain Gugger, Lysandre Debut, Younes Belkada, Sayak Paul, and Benjamin Bossan. 2022. Peft: State-of-the-art parameter-efficient fine-tuning methods. https://github.com/huggingface/peft.
Pavlo Molchanov, Arun Mallya, Stephen Tyree, Iuri Frosio, and Jan Kautz. 2019. Importance estimation for neural network pruning. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 11264-11272.
Sharath Nittur Sridhar, Souvik Kundu, Sairam Sundaresan, Maciej Szankin, and Anthony Sarah. 2023. Instatune: Instantaneous neural architecture search during fine-tuning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1523-1527.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan,
Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023. Llama 2: Open foundation and fine-tuned chat models.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2018. Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumont, Clement Delangue, Anthony Moi, Pierrick Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Lu Yin, Ajay Jaiswal, Shiwei Liu, Souvik Kundu, and Zhangyang Wang. 2024. Pruning small pre-trained weights irreversibly and monotonically impairs "difficult" downstream tasks in llms.
Qingru Zhang, Minshuo Chen, Alexander Bukharin, Pengcheng He, Yu Cheng, Weizhu Chen, and Tuo Zhao. 2023. Adaptive budget allocation for parameter-efficient fine-tuning. In The Eleventh International Conference on Learning Representations.
Qingru Zhang, Simiao Zuo, Chen Liang, Alexander Bukharin, Pengcheng He, Weizhu Chen, and Tuo Zhao. 2022. Platon: Pruning large transformer models with upper confidence bound of weight importance. In International Conference on Machine Learning, pages 26809-26823. PMLR.
# A Appendix
# A.1 Dataset
The details of train/test/dev splits and the evaluation metric of the GLUE (Wang et al., 2018) dataset are reported in Table 6. We use the Huggingface Transformers library (Wolf et al., 2020) to source all the datasets.
Table 6: Statistics of the GLUE benchmark datasets. "Mcc", "Acc", "F1", and "Pear" represent the Matthews correlation coefficient, accuracy, the F1 score, and the Pearson correlation coefficient, respectively. "Acc" for the MNLI dataset contains the accuracy for the matched and mismatched subsets of the dataset.
<table><tr><td>Dataset</td><td>#Train</td><td>#Valid</td><td>#Test</td><td>Metric</td></tr><tr><td>CoLA</td><td>8.5k</td><td>1,043</td><td>1,063</td><td>Mcc</td></tr><tr><td>SST-2</td><td>67k</td><td>872</td><td>1.8k</td><td>Acc</td></tr><tr><td>MRPC</td><td>3.7k</td><td>408</td><td>1.7k</td><td>Acc</td></tr><tr><td>QQP</td><td>364k</td><td>40.4k</td><td>391k</td><td>Acc/F1</td></tr><tr><td>STS-B</td><td>5.7k</td><td>1.5k</td><td>1.4k</td><td>Pear</td></tr><tr><td>MNLI</td><td>393k</td><td>9.8k/9.8k</td><td>9.8k/9.8k</td><td>Acc</td></tr><tr><td>QNLI</td><td>105k</td><td>5.5k</td><td>5.5k</td><td>Acc</td></tr><tr><td>RTE</td><td>2.5k</td><td>277</td><td>3k</td><td>Acc</td></tr></table>
# A.2 Hyperparameter configuration
Table 7 shows the main hyperparameter setup used in this paper. Beyond these, we use the same optimizer, warmup ratio, and LR schedule as Hu et al. (2021). We use an NVIDIA RTX A6000 (maximum GPU memory = 49140 MB) to measure the training runtime. For all experiments, we run 5 times using different random seeds and report the average results.
Table 7: Hyperparameter setup for all eight datasets in GLUE benchmark
<table><tr><td>Hyperparameter</td><td>CoLA</td><td>SST-2</td><td>MRPC</td><td>QNLI</td><td>STS-B</td><td>RTE</td><td>MNLI</td><td>QQP</td></tr><tr><td># epochs</td><td>20</td><td>10</td><td>20</td><td>10</td><td>20</td><td>20</td><td>10</td><td>10</td></tr><tr><td>Batch size</td><td></td><td></td><td></td><td>64</td><td></td><td></td><td></td><td></td></tr><tr><td>Max Seq. Len.</td><td></td><td></td><td></td><td>256</td><td></td><td></td><td></td><td></td></tr><tr><td>Clf. Lr.*</td><td>4E-2</td><td>4E-3</td><td>8E-2</td><td>4E-3</td><td>2E-2</td><td>4E-2</td><td>4E-3</td><td>4E-3</td></tr><tr><td>Learning rate</td><td>1E-2</td><td>4E-3</td><td>1E-2</td><td>1E-3</td><td>2E-3</td><td>1E-3</td><td>1E-3</td><td>4E-3</td></tr><tr><td>ti(epoch)</td><td></td><td></td><td></td><td>1</td><td></td><td></td><td></td><td></td></tr><tr><td>tf(epoch)</td><td>14</td><td>7</td><td>14</td><td>7</td><td>14</td><td>14</td><td>7</td><td>7</td></tr><tr><td>β1</td><td></td><td></td><td></td><td>0.85</td><td></td><td></td><td></td><td></td></tr><tr><td>β2</td><td></td><td></td><td></td><td>0.95</td><td></td><td></td><td></td><td></td></tr></table>
* "Clf. Lr.* means the learning rate for the classification head.
# A.3 Ablation study on whether to freeze the two projection matrices in the same layer simultaneously
We study the value of freezing both projection matrices in the same layer simultaneously. The results, depicted in Table 8, demonstrate that freezing the projection matrices separately yields consistently superior performance compared to freezing them simultaneously.
Table 8: Ablation study on whether freezing the two projection matrices in the same layer simultaneously or independently.
<table><tr><td></td><td>Simultaneously</td><td>Independently</td></tr><tr><td>CoLA</td><td>67.90</td><td>72.01</td></tr><tr><td>SST-2</td><td>95.87</td><td>96.22</td></tr><tr><td>MRPC</td><td>91.67</td><td>91.91</td></tr><tr><td>STS-B</td><td>91.64</td><td>91.84</td></tr><tr><td>QNLI</td><td>94.20</td><td>94.42</td></tr><tr><td>RTE</td><td>87.00</td><td>88.09</td></tr><tr><td>Avg.</td><td>88.05</td><td>89.02</td></tr><tr><td>#Params</td><td>0.146M</td><td>0.138M</td></tr></table>

2024/AFLoRA_ Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models/images.zip
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:11f417757be3f0e8a31d5b1128d310c2635f0e08c20d80063951a0f3a3d89748
size 284011

2024/AFLoRA_ Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models/layout.json
ADDED
The diff for this file is too large to render.
See raw diff

2024/AGR_ Reinforced Causal Agent-Guided Self-explaining Rationalization/19a5d2a6-e0ae-4231-a1c8-5819b81e5755_content_list.json
ADDED
@@ -0,0 +1,1814 @@
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "AGR: Reinforced Causal Agent-Guided Self-explaining Rationalization",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
129,
|
| 8 |
+
90,
|
| 9 |
+
867,
|
| 10 |
+
111
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Yunxiao Zhao $^{1}$ , Zhiqiang Wang $^{1,2*}$ , Xiaoli Li $^{3}$ , Jiye Liang $^{1,2}$ , Ru Li $^{1,2*}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
200,
|
| 19 |
+
128,
|
| 20 |
+
794,
|
| 21 |
+
145
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "list",
|
| 27 |
+
"sub_type": "text",
|
| 28 |
+
"list_items": [
|
| 29 |
+
"1. School of Computer and Information Technology, Shanxi University, Taiyuan, China",
|
| 30 |
+
"2. Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, Shanxi University, Taiyuan, China",
|
| 31 |
+
"3. Institute for Infocomm Research, A*Star, Singapore yunxiaomr@163.com, {wangzq,ljy,liru}@sxu.edu.cn, xlli@ntu.edu.sg"
|
| 32 |
+
],
|
| 33 |
+
"bbox": [
|
| 34 |
+
144,
|
| 35 |
+
146,
|
| 36 |
+
850,
|
| 37 |
+
229
|
| 38 |
+
],
|
| 39 |
+
"page_idx": 0
|
| 40 |
+
},
|
| 41 |
+
{
|
| 42 |
+
"type": "text",
|
| 43 |
+
"text": "Abstract",
|
| 44 |
+
"text_level": 1,
|
| 45 |
+
"bbox": [
|
| 46 |
+
260,
|
| 47 |
+
260,
|
| 48 |
+
339,
|
| 49 |
+
275
|
| 50 |
+
],
|
| 51 |
+
"page_idx": 0
|
| 52 |
+
},
|
| 53 |
+
{
|
| 54 |
+
"type": "text",
|
| 55 |
+
"text": "Most existing rationalization approaches are susceptible to degeneration accumulation due to a lack of effective control over the learning direction of the model during training. To address this issue, we propose a novel approach AGR (Agent-Guided Rationalization), guiding the next action of the model based on its current training state. Specifically, we introduce causal intervention calculus to quantify the causal effects inherent during rationale training, and utilize reinforcement learning process to refine the learning bias of them. Furthermore, we pretrain an agent within this reinforced causal environment to guide the next step of the model. We theoretically demonstrate that a good model needs the desired guidance, and empirically show the effectiveness of our approach, outperforming existing state-of-the-art methods on BeerAdvocate and HotelReview datasets.",
|
| 56 |
+
"bbox": [
|
| 57 |
+
141,
|
| 58 |
+
286,
|
| 59 |
+
460,
|
| 60 |
+
556
|
| 61 |
+
],
|
| 62 |
+
"page_idx": 0
|
| 63 |
+
},
|
| 64 |
+
{
|
| 65 |
+
"type": "text",
|
| 66 |
+
"text": "1 Introduction",
|
| 67 |
+
"text_level": 1,
|
| 68 |
+
"bbox": [
|
| 69 |
+
114,
|
| 70 |
+
567,
|
| 71 |
+
258,
|
| 72 |
+
583
|
| 73 |
+
],
|
| 74 |
+
"page_idx": 0
|
| 75 |
+
},
|
| 76 |
+
{
|
| 77 |
+
"type": "text",
|
| 78 |
+
"text": "To explain the prediction of neural networks, selective rationalization task (Lei et al., 2016; Yu et al., 2019, 2021) has been studied in recent years. As shown in Figure 1, it aims to select a small and human-intelligible subset (i.e., rationale) from the input to support and explain the prediction results when yielding them. As an interpretable diagram, rationalization holds significant potential for elucidating the decision-making process of predictive models, building trust, and deriving insightful and pertinent insights (Yuan et al., 2020; Zhang et al., 2023; Deng et al., 2023).",
|
| 79 |
+
"bbox": [
|
| 80 |
+
112,
|
| 81 |
+
593,
|
| 82 |
+
489,
|
| 83 |
+
785
|
| 84 |
+
],
|
| 85 |
+
"page_idx": 0
|
| 86 |
+
},
|
| 87 |
+
{
|
| 88 |
+
"type": "text",
|
| 89 |
+
"text": "Various approaches have been proposed for rationalization, spanning from early rationale sampling-based methods (Bao et al., 2018; Bastings et al., 2019; Paranjape et al., 2020) to the extra-component-based methods (De Cao et al., 2020; Huang et al., 2021; Yu et al., 2021; Liu et al., 2022; Yue et al., 2022; Liu et al., 2023a). These",
|
| 90 |
+
"bbox": [
|
| 91 |
+
112,
|
| 92 |
+
787,
|
| 93 |
+
489,
|
| 94 |
+
898
|
| 95 |
+
],
|
| 96 |
+
"page_idx": 0
|
| 97 |
+
},
|
| 98 |
+
{
|
| 99 |
+
"type": "image",
|
| 100 |
+
"img_path": "images/55018625f4765160669aec9198c29001b9633d29f52ab2da25ad6006eea3cc69.jpg",
|
| 101 |
+
"image_caption": [
|
| 102 |
+
"Figure 1: The standard selective rationalization, where $X, Z, \\hat{Y}, Y$ represent the input text, rationale, prediction and the groundtruth label, respectively. The red text indicates the small and human-intelligible subset."
|
| 103 |
+
],
|
| 104 |
+
"image_footnote": [],
|
| 105 |
+
"bbox": [
|
| 106 |
+
512,
|
| 107 |
+
258,
|
| 108 |
+
878,
|
| 109 |
+
317
|
| 110 |
+
],
|
| 111 |
+
"page_idx": 0
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"type": "text",
|
| 115 |
+
"text": "methods predominantly concentrate on improving the performance of rationalization models by either refining the sampling directly or aligning additional information beyond the rationale, resulting in impressive results. However, to the best of our knowledge, the current methods are prone to degeneration accumulation since they usually do not discern whether the generator during training has produced unmeaningful or flawed rationales; instead, they directly pass them to the predictor even if generated rationales are degraded.",
|
| 116 |
+
"bbox": [
|
| 117 |
+
507,
|
| 118 |
+
394,
|
| 119 |
+
884,
|
| 120 |
+
571
|
| 121 |
+
],
|
| 122 |
+
"page_idx": 0
|
| 123 |
+
},
|
| 124 |
+
{
|
| 125 |
+
"type": "text",
|
| 126 |
+
"text": "For instance, the underlined rationale in Figure 1 is degraded, as the word appearance alone does not reliably determine the sentiment polarity of input $X$ . But the predictor overfits to this uninformative rationale and classifies the sentiment according to whether \"appearance\" is included in the rationale. Consequently, when the predictor receives degraded rationales, it steers the model towards an undesirable direction (aka., learning bias). Thus, optimizing this bias during training is crucial for ensuring the model's generalization performance.",
|
| 127 |
+
"bbox": [
|
| 128 |
+
507,
|
| 129 |
+
571,
|
| 130 |
+
882,
|
| 131 |
+
747
|
| 132 |
+
],
|
| 133 |
+
"page_idx": 0
|
| 134 |
+
},
|
| 135 |
+
{
|
| 136 |
+
"type": "text",
|
| 137 |
+
"text": "The proposed methods (Chang et al., 2020; Zhang et al., 2023; Yue et al., 2023) fall short in considering rationalization optimization comprehensively, neglecting existing causality during rationale learning. Although they often employ causal theory to uncover relationships between rationale pieces, they struggle to directly optimize",
|
| 138 |
+
"bbox": [
|
| 139 |
+
507,
|
| 140 |
+
749,
|
| 141 |
+
884,
|
| 142 |
+
862
|
| 143 |
+
],
|
| 144 |
+
"page_idx": 0
|
| 145 |
+
},
|
| 146 |
+
{
|
| 147 |
+
"type": "page_footnote",
|
| 148 |
+
"text": "Degeneration over rationalization is a highly challenging problem, which means the predictor may overfit to meaningless rationales generated by the not yet well-trained generator (Yu et al., 2019; Liu et al., 2023b,d).",
|
| 149 |
+
"bbox": [
|
| 150 |
+
507,
|
| 151 |
+
871,
|
| 152 |
+
885,
|
| 153 |
+
921
|
| 154 |
+
],
|
| 155 |
+
"page_idx": 0
|
| 156 |
+
},
|
| 157 |
+
{
|
| 158 |
+
"type": "page_footnote",
|
| 159 |
+
"text": "* Corresponding author",
|
| 160 |
+
"bbox": [
|
| 161 |
+
136,
|
| 162 |
+
904,
|
| 163 |
+
278,
|
| 164 |
+
920
|
| 165 |
+
],
|
| 166 |
+
"page_idx": 0
|
| 167 |
+
},
|
| 168 |
+
{
|
| 169 |
+
"type": "page_number",
|
| 170 |
+
"text": "510",
|
| 171 |
+
"bbox": [
|
| 172 |
+
485,
|
| 173 |
+
927,
|
| 174 |
+
515,
|
| 175 |
+
940
|
| 176 |
+
],
|
| 177 |
+
"page_idx": 0
|
| 178 |
+
},
|
| 179 |
+
{
|
| 180 |
+
"type": "footer",
|
| 181 |
+
"text": "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 510-518 August 11-16, 2024 ©2024 Association for Computational Linguistics",
|
| 182 |
+
"bbox": [
|
| 183 |
+
95,
|
| 184 |
+
945,
|
| 185 |
+
900,
|
| 186 |
+
973
|
| 187 |
+
],
|
| 188 |
+
"page_idx": 0
|
| 189 |
+
},
|
| 190 |
+
{
|
| 191 |
+
"type": "text",
|
| 192 |
+
"text": "the cooperative game dynamics between the generator and predictor during training. As shown in Figure 1, optimizing rationale from \"appearance\" to \"appearance: light yellow to almost clear\" necessitates evaluating the causal impact on target prediction, guiding the model's subsequent optimization. Thus, if we could construct a guiding signal to reward or penalize the learning behavior of the model, this would significantly reduce the model's learning bias during training, alleviating the problem of degeneration accumulation.",
|
| 193 |
+
"bbox": [
|
| 194 |
+
112,
|
| 195 |
+
84,
|
| 196 |
+
489,
|
| 197 |
+
260
|
| 198 |
+
],
|
| 199 |
+
"page_idx": 1
|
| 200 |
+
},
|
| 201 |
+
{
|
| 202 |
+
"type": "text",
|
| 203 |
+
"text": "To address the above problems, we propose a novel rationalization method named AGR (Agent-Guided Rationalization), which leverages a reinforced causal agent to guide the cooperative game optimization during rationale training, as shown in Figure 2. In particular, 1) we quantify the causal effects in the rationale optimization process, and design a reinforcement learning (RL) process (e.g., Markov decision) to refine the learning bias during training. 2) We further pretrain an agent within reinforced causal environment to guide next actions by a system of rewards. We also theoretically illustrate that a robust model needs the desired guidance. 3) Experimental results demonstrate the effectiveness of our approach, surpassing state-of-the-art methods on BeerAdvocate and HotelReview datasets.",
|
| 204 |
+
"bbox": [
|
| 205 |
+
115,
|
| 206 |
+
261,
|
| 207 |
+
489,
|
| 208 |
+
517
|
| 209 |
+
],
|
| 210 |
+
"page_idx": 1
|
| 211 |
+
},
|
| 212 |
+
{
|
| 213 |
+
"type": "text",
|
| 214 |
+
"text": "2 Problem Formulation",
|
| 215 |
+
"text_level": 1,
|
| 216 |
+
"bbox": [
|
| 217 |
+
112,
|
| 218 |
+
530,
|
| 219 |
+
337,
|
| 220 |
+
544
|
| 221 |
+
],
|
| 222 |
+
"page_idx": 1
|
| 223 |
+
},
|
| 224 |
+
{
|
| 225 |
+
"type": "text",
|
| 226 |
+
"text": "Notation. Following previous research (Liu et al., 2023b,c,d), we consider the classification problem and denote the generator and predictor as $f_{G}(\\cdot)$ and $f_{P}(\\cdot)$ , with $\\theta_{g}$ and $\\theta_{p}$ representing their parameters. The input text $X = [x_{1}, x_{2}, \\dots, x_{l}] (1 \\leq i \\leq l)$ consists of tokens $x_{i}$ , where $l$ is the number of tokens. The label of $X$ is a one-hot vector $Y \\in \\{0,1\\}^{c}$ , where $c$ is the number of categories.",
|
| 227 |
+
"bbox": [
|
| 228 |
+
112,
|
| 229 |
+
554,
|
| 230 |
+
489,
|
| 231 |
+
682
|
| 232 |
+
],
|
| 233 |
+
"page_idx": 1
|
| 234 |
+
},
|
| 235 |
+
{
|
| 236 |
+
"type": "text",
|
| 237 |
+
"text": "Cooperative game for rationalization. The $f_{G}(\\cdot)$ selects the most informative pieces from $X$ by a sequence of binary mask $M = [m_1, \\dots, m_l] \\in \\{0, 1\\}^l$ . Then, it forms the rationale $Z = M \\odot X = [m_1 x_1, m_2 x_2, \\dots, m_l x_l]$ , where the informativeness of $Z$ is measured by the negative cross entropy $-H(Y, \\hat{Y})$ . Consequently, the $f_{G}(\\cdot)$ and $f_{P}(\\cdot)$ are optimized cooperatively by",
|
| 238 |
+
"bbox": [
|
| 239 |
+
112,
|
| 240 |
+
683,
|
| 241 |
+
489,
|
| 242 |
+
812
|
| 243 |
+
],
|
| 244 |
+
"page_idx": 1
|
| 245 |
+
},
|
| 246 |
+
{
|
| 247 |
+
"type": "equation",
|
| 248 |
+
"text": "\n$$\n\\min _ {\\theta_ {g}, \\theta_ {p}} \\mathcal {H} (Y, \\hat {Y} \\mid f _ {G} (X)), s. t. \\hat {Y} = f _ {P} \\left(f _ {G} (X)\\right). \\tag {1}\n$$\n",
|
| 249 |
+
"text_format": "latex",
|
| 250 |
+
"bbox": [
|
| 251 |
+
121,
|
| 252 |
+
822,
|
| 253 |
+
487,
|
| 254 |
+
848
|
| 255 |
+
],
|
| 256 |
+
"page_idx": 1
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"type": "text",
|
| 260 |
+
"text": "In addition, rationales are usually constrained by compact and coherent regularization terms $\\Omega(M) = \\lambda_1\\left|\\frac{\\|M\\|_1}{l} - s\\right| + \\lambda_2\\sum_t|m_t - m_{t-1}|$ (Chang et al., 2020), where $s$ is a pre-defined sparsity level.",
|
| 261 |
+
"bbox": [
|
| 262 |
+
112,
|
| 263 |
+
853,
|
| 264 |
+
489,
|
| 265 |
+
921
|
| 266 |
+
],
|
| 267 |
+
"page_idx": 1
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"type": "image",
|
| 271 |
+
"img_path": "images/c373464f24cafdcaf236551cddf442ddddd14ee96a18c7d031a74ec802dce041.jpg",
|
| 272 |
+
"image_caption": [
|
| 273 |
+
"Figure 2: The architecture of AGR. $X$ and $\\hat{Y}$ are the input and output. $S_{i}$ is the $i$ -th update state of rationale, while $\\widetilde{S}_{i}$ is the state after guidance by the agent."
|
| 274 |
+
],
|
| 275 |
+
"image_footnote": [],
|
| 276 |
+
"bbox": [
|
| 277 |
+
515,
|
| 278 |
+
80,
|
| 279 |
+
878,
|
| 280 |
+
249
|
| 281 |
+
],
|
| 282 |
+
"page_idx": 1
|
| 283 |
+
},
|
| 284 |
+
{
|
| 285 |
+
"type": "text",
|
| 286 |
+
"text": "3 Reinforced Causal Agent",
|
| 287 |
+
"text_level": 1,
|
| 288 |
+
"bbox": [
|
| 289 |
+
507,
|
| 290 |
+
313,
|
| 291 |
+
759,
|
| 292 |
+
331
|
| 293 |
+
],
|
| 294 |
+
"page_idx": 1
|
| 295 |
+
},
|
| 296 |
+
{
|
| 297 |
+
"type": "text",
|
| 298 |
+
"text": "In this section, we present our reinforced causal agent, considering both causal effect and learning bias of degeneration during rationale training.",
|
| 299 |
+
"bbox": [
|
| 300 |
+
507,
|
| 301 |
+
338,
|
| 302 |
+
882,
|
| 303 |
+
387
|
| 304 |
+
],
|
| 305 |
+
"page_idx": 1
|
| 306 |
+
},
|
| 307 |
+
{
|
| 308 |
+
"type": "text",
|
| 309 |
+
"text": "3.1 Rationale Causal Attribution",
|
| 310 |
+
"text_level": 1,
|
| 311 |
+
"bbox": [
|
| 312 |
+
507,
|
| 313 |
+
398,
|
| 314 |
+
784,
|
| 315 |
+
413
|
| 316 |
+
],
|
| 317 |
+
"page_idx": 1
|
| 318 |
+
},
|
| 319 |
+
{
|
| 320 |
+
"type": "text",
|
| 321 |
+
"text": "Formally, we construct a rationale $\\mathcal{Z}_k^*$ by maximizing an attribution metric $A(\\cdot)$ in rationalization",
|
| 322 |
+
"bbox": [
|
| 323 |
+
507,
|
| 324 |
+
419,
|
| 325 |
+
882,
|
| 326 |
+
450
|
| 327 |
+
],
|
| 328 |
+
"page_idx": 1
|
| 329 |
+
},
|
| 330 |
+
{
|
| 331 |
+
"type": "equation",
|
| 332 |
+
"text": "\n$$\n\\mathcal {Z} _ {K} ^ {*} = \\arg \\max _ {\\mathcal {Z} _ {K} \\subseteq X} A \\left(\\mathcal {Z} _ {K} | \\hat {y} _ {c}\\right), \\tag {2}\n$$\n",
|
| 333 |
+
"text_format": "latex",
|
| 334 |
+
"bbox": [
|
| 335 |
+
574,
|
| 336 |
+
467,
|
| 337 |
+
882,
|
| 338 |
+
485
|
| 339 |
+
],
|
| 340 |
+
"page_idx": 1
|
| 341 |
+
},
|
| 342 |
+
{
|
| 343 |
+
"type": "text",
|
| 344 |
+
"text": "where $A(\\cdot)$ measures the contribution of each candidate $\\mathcal{Z}_K$ to the target prediction $\\hat{y}_c$ .",
|
| 345 |
+
"bbox": [
|
| 346 |
+
507,
|
| 347 |
+
491,
|
| 348 |
+
882,
|
| 349 |
+
524
|
| 350 |
+
],
|
| 351 |
+
"page_idx": 1
|
| 352 |
+
},
|
| 353 |
+
{
|
| 354 |
+
"type": "text",
|
| 355 |
+
"text": "However, $A(\\mathcal{Z}_K|\\hat{y}_c)$ needs to be quantified. To this end, we introduce causal intervention calculus $do(\\cdot)$ , including $do(Z = Z_K)$ and $do(Z = \\emptyset)$ (Pearl, 2009; Pearl et al., 2016), and reformulate the causal contribution from $\\varnothing$ to $\\mathcal{Z}_K$ by mutual information,",
|
| 356 |
+
"bbox": [
|
| 357 |
+
507,
|
| 358 |
+
524,
|
| 359 |
+
882,
|
| 360 |
+
604
|
| 361 |
+
],
|
| 362 |
+
"page_idx": 1
|
| 363 |
+
},
|
| 364 |
+
{
|
| 365 |
+
"type": "equation",
|
| 366 |
+
"text": "\n$$\nA \\left(\\mathcal {Z} _ {K} \\left| \\hat {y} _ {c}\\right) = I \\left(\\hat {y} _ {c}, d o \\left(\\mathcal {Z} _ {K}\\right)\\right) - I \\left(\\hat {y} _ {c}, d o (\\emptyset)\\right). \\right. \\tag {3}\n$$\n",
|
| 367 |
+
"text_format": "latex",
|
| 368 |
+
"bbox": [
|
| 369 |
+
519,
|
| 370 |
+
617,
|
| 371 |
+
880,
|
| 372 |
+
634
|
| 373 |
+
],
|
| 374 |
+
"page_idx": 1
|
| 375 |
+
},
|
| 376 |
+
{
|
| 377 |
+
"type": "text",
|
| 378 |
+
"text": "3.2 Markov Decision Process as RL",
|
| 379 |
+
"text_level": 1,
|
| 380 |
+
"bbox": [
|
| 381 |
+
507,
|
| 382 |
+
646,
|
| 383 |
+
803,
|
| 384 |
+
661
|
| 385 |
+
],
|
| 386 |
+
"page_idx": 1
|
| 387 |
+
},
|
| 388 |
+
{
|
| 389 |
+
"type": "text",
|
| 390 |
+
"text": "Equation 3 illustrates the procedure for deriving $\\mathcal{Z}_K$ from an initial state of zero training. However, it may generate degraded rationales at step $i$ , where $0 < i < K$ . Thus we need to seek for quantifiable objectives between $\\mathcal{Z}_i$ and $\\mathcal{Z}_{i+1}$ ,",
|
| 391 |
+
"bbox": [
|
| 392 |
+
507,
|
| 393 |
+
668,
|
| 394 |
+
882,
|
| 395 |
+
747
|
| 396 |
+
],
|
| 397 |
+
"page_idx": 1
|
| 398 |
+
},
|
| 399 |
+
{
|
| 400 |
+
"type": "equation",
|
| 401 |
+
"text": "\n$$\n\\mathcal {Z} _ {i + 1} = \\arg \\max _ {\\mathcal {Z} _ {i + 1} \\in \\{X \\backslash \\mathcal {Z} _ {i} \\}} A \\left(\\mathcal {Z} _ {i + 1} \\mid \\mathcal {Z} _ {i}, \\hat {y} _ {c}\\right). \\tag {4}\n$$\n",
|
| 402 |
+
"text_format": "latex",
|
| 403 |
+
"bbox": [
|
| 404 |
+
524,
|
| 405 |
+
760,
|
| 406 |
+
880,
|
| 407 |
+
778
|
| 408 |
+
],
|
| 409 |
+
"page_idx": 1
|
| 410 |
+
},
|
| 411 |
+
{
|
| 412 |
+
"type": "text",
|
| 413 |
+
"text": "According to Equation 3, we have the causal contribution between $\\mathcal{Z}_i$ and $\\mathcal{Z}_{i + 1}$ .. $A(\\mathcal{Z}_{i + 1}|\\mathcal{Z}_i,\\hat{y}_c) =$ $I(\\hat{y}_c,do(\\mathcal{Z}_{i + 1})) - I(\\hat{y}_c,do(\\mathcal{Z}_i))$ . So,",
|
| 414 |
+
"bbox": [
|
| 415 |
+
507,
|
| 416 |
+
790,
|
| 417 |
+
882,
|
| 418 |
+
840
|
| 419 |
+
],
|
| 420 |
+
"page_idx": 1
|
| 421 |
+
},
|
| 422 |
+
{
|
| 423 |
+
"type": "equation",
|
| 424 |
+
"text": "\n$$\n\\begin{array}{l} A \\left(\\mathcal {Z} _ {i + 1} \\mid \\mathcal {Z} _ {i}, \\hat {y} _ {c}\\right) = - H \\left(\\hat {y} _ {c} \\mid \\mathcal {Z} _ {i + 1}\\right) + H \\left(\\hat {y} _ {c} \\mid \\mathcal {Z} _ {i}\\right) \\\\ = - H \\left(\\hat {y} _ {c} \\mid \\left\\{\\mathcal {Z} _ {i} \\cup \\left\\{z _ {i + 1} \\right\\} \\right\\}\\right) + H \\left(\\hat {y} _ {c} \\mid \\mathcal {Z} _ {i}\\right) \\tag {5} \\\\ = - p _ {\\theta} \\left(\\hat {y} _ {c} \\mid \\mathcal {Z}\\right) \\log \\frac {p _ {\\theta} \\left(\\hat {y} _ {c} \\mid \\mathcal {Z} _ {i}\\right)}{p _ {\\theta} \\left(\\hat {y} _ {c} \\mid \\{\\mathcal {Z} _ {i} \\cup \\left\\{z _ {i + 1} \\right\\} \\}\\right)}, \\\\ \\end{array}\n$$\n",
|
| 425 |
+
"text_format": "latex",
|
| 426 |
+
"bbox": [
|
| 427 |
+
522,
|
| 428 |
+
850,
|
| 429 |
+
880,
|
| 430 |
+
925
|
| 431 |
+
],
|
| 432 |
+
"page_idx": 1
|
| 433 |
+
},
|
| 434 |
+
{
|
| 435 |
+
"type": "page_number",
|
| 436 |
+
"text": "511",
|
| 437 |
+
"bbox": [
|
| 438 |
+
485,
|
| 439 |
+
928,
|
| 440 |
+
512,
|
| 441 |
+
940
|
| 442 |
+
],
|
| 443 |
+
"page_idx": 1
|
| 444 |
+
},
|
| 445 |
+
{
|
| 446 |
+
"type": "text",
|
| 447 |
+
"text": "where $H(\\hat{y}_c|\\mathcal{Z}_i)$ is the term of conditional entropy. As a result, Equation 5 explicitly quantifies $\\mathcal{Z}_{i + 1}$ 's effect with previously obtained rationale $\\mathcal{Z}_i$ .",
|
| 448 |
+
"bbox": [
|
| 449 |
+
112,
|
| 450 |
+
84,
|
| 451 |
+
487,
|
| 452 |
+
131
|
| 453 |
+
],
|
| 454 |
+
"page_idx": 2
|
| 455 |
+
},
|
| 456 |
+
{
|
| 457 |
+
"type": "text",
|
| 458 |
+
"text": "To further promote the cooperative game, we model the training process of rationale as a Markov decision process $\\mathbb{M} = \\{\\mathbb{S},\\mathbb{A},\\mathbb{P},\\mathbb{R}\\}$ , where $\\mathbb{S} = \\{s_i\\}$ represents set of states abstracting the process of optimizing rationale during training, and $\\mathbb{A} = \\{a_i\\}$ indicates the set of actions. In particular, The transition dynamics $\\mathbb{P}(s_{i + 1}|s_i,a_{i + 1})$ specify how the state $s_{i + 1}$ is updated from the prior state $s_i$ by taking action $a_{i + 1}$ . Besides, $\\mathbb{R}(s_i,a_{i + 1})$ quantifies the reward obtained after taking action $a_{i + 1}$ based on the prior state $s_i$ . Therefore, cooperative training for rationale can be depicted as the sequence process $(s_0,a_1,r_1,s_1,\\dots,a_K,r_K,s_K)$ , where the state $s_i$ can be formulated by $s_i = Z_i$ in the $i$ -th update; $s_0 = Z_0$ can be initiated by generator $f_{G}(\\cdot)$ .",
|
| 459 |
+
"bbox": [
|
| 460 |
+
112,
|
| 461 |
+
133,
|
| 462 |
+
487,
|
| 463 |
+
374
|
| 464 |
+
],
|
| 465 |
+
"page_idx": 2
|
| 466 |
+
},
|
| 467 |
+
{
|
| 468 |
+
"type": "text",
|
| 469 |
+
"text": "Nevertheless, the above process exhibits a limitation in its inability to detect learning bias at any given state $s_i$ . To address this, we reformulate the sequence process as $(<s_0, \\widetilde{a}_0, \\widetilde{r}_0, \\widetilde{s}_0>, a_1, r_1, <s_1, \\widetilde{a}_1, \\widetilde{r}_1, \\widetilde{s}_1>, \\dots, a_K, r_K, <s_K, \\widetilde{a}_K, \\widetilde{r}_K, \\widetilde{s}_K>)$ , where $<s_i, \\widetilde{a}_i, \\widetilde{r}_i, \\widetilde{s}_i>$ indicates process of transitioning from state $s_i$ to $\\widetilde{s}_i$ in the $i$ -th update.",
|
| 470 |
+
"bbox": [
|
| 471 |
+
112,
|
| 472 |
+
375,
|
| 473 |
+
487,
|
| 474 |
+
486
|
| 475 |
+
],
|
| 476 |
+
"page_idx": 2
|
| 477 |
+
},
|
| 478 |
+
{
|
| 479 |
+
"type": "text",
|
| 480 |
+
"text": "Given the state $s_i = Z_i$ , we derive the available action space: $\\widetilde{\\mathbb{A}}_i = \\{X\\backslash Z_i\\}$ . The searched action can be represented as",
|
| 481 |
+
"bbox": [
|
| 482 |
+
112,
|
| 483 |
+
487,
|
| 484 |
+
487,
|
| 485 |
+
533
|
| 486 |
+
],
|
| 487 |
+
"page_idx": 2
|
| 488 |
+
},
|
| 489 |
+
{
|
| 490 |
+
"type": "equation",
|
| 491 |
+
"text": "\n$$\n\\widetilde {a} _ {i} = \\widetilde {z} _ {i}, \\tag {6}\n$$\n",
|
| 492 |
+
"text_format": "latex",
|
| 493 |
+
"bbox": [
|
| 494 |
+
268,
|
| 495 |
+
541,
|
| 496 |
+
487,
|
| 497 |
+
558
|
| 498 |
+
],
|
| 499 |
+
"page_idx": 2
|
| 500 |
+
},
|
| 501 |
+
{
|
| 502 |
+
"type": "text",
|
| 503 |
+
"text": "where $\\widetilde{z}_i\\in \\{X\\backslash Z_i\\}$ indicates candidate rationale in action space. Having made the action $\\widetilde{a}_i$ , the state transition is to merge $\\widetilde{z}_i$ into $Z_{i}$ , i.e., $\\widetilde{Z}_i = Z_i\\cup \\{\\widetilde{z}_i\\}$",
|
| 504 |
+
"bbox": [
|
| 505 |
+
112,
|
| 506 |
+
562,
|
| 507 |
+
487,
|
| 508 |
+
609
|
| 509 |
+
],
|
| 510 |
+
"page_idx": 2
|
| 511 |
+
},
|
| 512 |
+
{
|
| 513 |
+
"type": "text",
|
| 514 |
+
"text": "To assess the effectiveness of the action $\\widetilde{a}_i$ in mitigating the learning bias of the model, the reward $\\widetilde{\\mathbb{R}}_i(\\widetilde{s}_i,\\widetilde{a}_i)$ at state $s_i$ can be formulated as follows:",
|
| 515 |
+
"bbox": [
|
| 516 |
+
112,
|
| 517 |
+
609,
|
| 518 |
+
487,
|
| 519 |
+
652
|
| 520 |
+
],
|
| 521 |
+
"page_idx": 2
|
| 522 |
+
},
|
| 523 |
+
{
|
| 524 |
+
"type": "equation",
|
| 525 |
+
"text": "\n$$\n\\widetilde {\\mathbb {R}} _ {i} = \\left\\{ \\begin{array}{l l} A \\left(\\widetilde {z _ {i}} \\mid Z _ {i}, \\hat {y} _ {c} ^ {*}\\right) + 1, & i f f _ {P} \\left(Z _ {i} \\cup \\{\\widetilde {z _ {i}} \\}\\right) = \\hat {y} _ {c} ^ {*} \\\\ A \\left(\\widetilde {z _ {i}} \\mid Z _ {i}, \\hat {y} _ {c} ^ {*}\\right) - 1, & o t h e r w i s e. \\end{array} \\right. \\tag {7}\n$$\n",
|
| 526 |
+
"text_format": "latex",
|
| 527 |
+
"bbox": [
|
| 528 |
+
132,
|
| 529 |
+
656,
|
| 530 |
+
487,
|
| 531 |
+
683
|
| 532 |
+
],
|
| 533 |
+
"page_idx": 2
|
| 534 |
+
},
|
| 535 |
+
{
|
| 536 |
+
"type": "text",
|
| 537 |
+
"text": "According to Equation 5, although we can quantify the probabilities at states $\\widetilde{s}_i$ and $s_i$ , and present the relevant reward $\\widetilde{\\mathbb{R}}_i$ , obtaining $y_c^*$ poses a challenge.",
|
| 538 |
+
"bbox": [
|
| 539 |
+
112,
|
| 540 |
+
687,
|
| 541 |
+
487,
|
| 542 |
+
736
|
| 543 |
+
],
|
| 544 |
+
"page_idx": 2
|
| 545 |
+
},
|
| 546 |
+
{
|
| 547 |
+
"type": "text",
|
| 548 |
+
"text": "3.3 Pretrained Agent",
|
| 549 |
+
"text_level": 1,
|
| 550 |
+
"bbox": [
|
| 551 |
+
112,
|
| 552 |
+
744,
|
| 553 |
+
297,
|
| 554 |
+
759
|
| 555 |
+
],
|
| 556 |
+
"page_idx": 2
|
| 557 |
+
},
|
| 558 |
+
{
|
| 559 |
+
"type": "text",
|
| 560 |
+
"text": "To address the limitation, we propose a reinforced causal agent in the aforementioned causal and reinforcement learning framework to better align the probability distribution of the target prediction and theoretically justify the creation of an auxiliary agent targeting $\\hat{y}_c$ .",
|
| 561 |
+
"bbox": [
|
| 562 |
+
112,
|
| 563 |
+
765,
|
| 564 |
+
487,
|
| 565 |
+
860
|
| 566 |
+
],
|
| 567 |
+
"page_idx": 2
|
| 568 |
+
},
|
| 569 |
+
{
|
| 570 |
+
"type": "text",
|
| 571 |
+
"text": "Pretrained Embedding. We pretrain the auxiliary agent, denoted as $f_{A}(\\cdot)$ , with",
|
| 572 |
+
"bbox": [
|
| 573 |
+
112,
|
| 574 |
+
862,
|
| 575 |
+
487,
|
| 576 |
+
894
|
| 577 |
+
],
|
| 578 |
+
"page_idx": 2
|
| 579 |
+
},
|
| 580 |
+
{
|
| 581 |
+
"type": "equation",
|
| 582 |
+
"text": "\n$$\n\\theta_ {A} ^ {*} = \\arg \\min _ {\\theta_ {A}} \\mathcal {H} (Y, \\hat {Y} | X), s. t. \\hat {Y} = f _ {A} (X), \\tag {8}\n$$\n",
|
| 583 |
+
"text_format": "latex",
|
| 584 |
+
"bbox": [
|
| 585 |
+
129,
|
| 586 |
+
898,
|
| 587 |
+
487,
|
| 588 |
+
923
|
| 589 |
+
],
|
| 590 |
+
"page_idx": 2
|
| 591 |
+
},
|
| 592 |
+
{
|
| 593 |
+
"type": "text",
|
| 594 |
+
"text": "where $\\theta_{A}$ represents the parameters of the agent, and $\\theta_{A}^{*}$ denotes the optimal solution.",
|
| 595 |
+
"bbox": [
|
| 596 |
+
507,
|
| 597 |
+
84,
|
| 598 |
+
882,
|
| 599 |
+
115
|
| 600 |
+
],
|
| 601 |
+
"page_idx": 2
|
| 602 |
+
},
|
| 603 |
+
{
|
| 604 |
+
"type": "text",
|
| 605 |
+
"text": "Theorem Analysis. Assuming $X, Z, Y$ , and $\\mathcal{A}$ as random variables in rationalization representing the input, rationale, label, and auxiliary variable, respectively, we propose:",
|
| 606 |
+
"bbox": [
|
| 607 |
+
507,
|
| 608 |
+
116,
|
| 609 |
+
882,
|
| 610 |
+
181
|
| 611 |
+
],
|
| 612 |
+
"page_idx": 2
|
| 613 |
+
},
|
| 614 |
+
{
|
| 615 |
+
"type": "text",
|
| 616 |
+
"text": "Lemma 1. Given $X$ , $Z$ , $Y$ , $\\hat{Y} = f_{P}(f_{G}(X))$ . Existing a guiding variable $\\mathcal{A}$ could enable the predictor $f_{P}(\\cdot)$ to achieve good predictions. That is, a solution for $\\mathcal{A}$ exists, and $X$ is a solution of $\\mathcal{A}$ .",
|
| 617 |
+
"bbox": [
|
| 618 |
+
507,
|
| 619 |
+
183,
|
| 620 |
+
882,
|
| 621 |
+
247
|
| 622 |
+
],
|
| 623 |
+
"page_idx": 2
|
| 624 |
+
},
|
| 625 |
+
{
|
| 626 |
+
"type": "text",
|
| 627 |
+
"text": "The proof is provided in Appendix A. Lemma 1 suggests that constructing an auxiliary variable $\\mathcal{A}$ aligned with $X$ for rationalization contributes to the learning of a good prediction.",
|
| 628 |
+
"bbox": [
|
| 629 |
+
507,
|
| 630 |
+
250,
|
| 631 |
+
882,
|
| 632 |
+
315
|
| 633 |
+
],
|
| 634 |
+
"page_idx": 2
|
| 635 |
+
},
|
| 636 |
+
{
|
| 637 |
+
"type": "text",
|
| 638 |
+
"text": "4 Agent-Guided Rationalization",
|
| 639 |
+
"text_level": 1,
|
| 640 |
+
"bbox": [
|
| 641 |
+
507,
|
| 642 |
+
325,
|
| 643 |
+
803,
|
| 644 |
+
341
|
| 645 |
+
],
|
| 646 |
+
"page_idx": 2
|
| 647 |
+
},
|
| 648 |
+
{
|
| 649 |
+
"type": "text",
|
| 650 |
+
"text": "As depicted in Figure 2, following the establishment of the environment for the reinforced causal agent, we delineate the construction and training of the policy network $q_{\\phi}$ .",
|
| 651 |
+
"bbox": [
|
| 652 |
+
507,
|
| 653 |
+
350,
|
| 654 |
+
882,
|
| 655 |
+
414
|
| 656 |
+
],
|
| 657 |
+
"page_idx": 2
|
| 658 |
+
},
|
| 659 |
+
{
|
| 660 |
+
"type": "text",
|
| 661 |
+
"text": "4.1 Policy Network Architecture",
|
| 662 |
+
"text_level": 1,
|
| 663 |
+
"bbox": [
|
| 664 |
+
507,
|
| 665 |
+
424,
|
| 666 |
+
781,
|
| 667 |
+
439
|
| 668 |
+
],
|
| 669 |
+
"page_idx": 2
|
| 670 |
+
},
|
| 671 |
+
{
|
| 672 |
+
"type": "text",
|
| 673 |
+
"text": "It takes the pair of intermediate state $\\mathcal{Z}_i$ and $\\hat{y}_c$ provided by $f_{A}(\\cdot)$ as input. Formally,",
|
| 674 |
+
"bbox": [
|
| 675 |
+
507,
|
| 676 |
+
444,
|
| 677 |
+
880,
|
| 678 |
+
476
|
| 679 |
+
],
|
| 680 |
+
"page_idx": 2
|
| 681 |
+
},
|
| 682 |
+
{
|
| 683 |
+
"type": "equation",
|
| 684 |
+
"text": "\n$$\n\\widetilde {z} _ {i} \\sim q _ {\\phi} \\left(\\mathcal {Z} _ {i}, \\hat {y} _ {c}\\right), \\tag {9}\n$$\n",
|
| 685 |
+
"text_format": "latex",
|
| 686 |
+
"bbox": [
|
| 687 |
+
633,
|
| 688 |
+
483,
|
| 689 |
+
882,
|
| 690 |
+
501
|
| 691 |
+
],
|
| 692 |
+
"page_idx": 2
|
| 693 |
+
},
|
| 694 |
+
{
|
| 695 |
+
"type": "text",
|
| 696 |
+
"text": "where $\\theta_{\\phi}$ is the trainable parameters of the policy network, and $\\widetilde{z}_i$ is generated according to the probability of next action $\\mathbb{P}_{\\phi}(\\widetilde{z}_i|\\mathcal{Z}_i,\\hat{y}_c)$ .",
|
| 697 |
+
"bbox": [
|
| 698 |
+
507,
|
| 699 |
+
507,
|
| 700 |
+
882,
|
| 701 |
+
555
|
| 702 |
+
],
|
| 703 |
+
"page_idx": 2
|
| 704 |
+
},
|
| 705 |
+
{
|
| 706 |
+
"type": "text",
|
| 707 |
+
"text": "Representation learning of action candidates. With the space of action candidates $\\widetilde{\\mathbb{A}}_i = X\\backslash \\mathcal{Z}_i$ our policy network first learns the representation for each action candidate $\\widetilde{a}_{i}^{(j)}(0 < j < N)$ , where $N$ is the number of candidates.",
|
| 708 |
+
"bbox": [
|
| 709 |
+
507,
|
| 710 |
+
556,
|
| 711 |
+
882,
|
| 712 |
+
634
|
| 713 |
+
],
|
| 714 |
+
"page_idx": 2
|
| 715 |
+
},
|
| 716 |
+
{
|
| 717 |
+
"type": "text",
|
| 718 |
+
"text": "Then, we employ the encoder to encode $X \\backslash \\mathcal{Z}_i$ for obtaining the action representation of $\\widetilde{z}_i$ by",
|
| 719 |
+
"bbox": [
|
| 720 |
+
507,
|
| 721 |
+
638,
|
| 722 |
+
880,
|
| 723 |
+
670
|
| 724 |
+
],
|
| 725 |
+
"page_idx": 2
|
| 726 |
+
},
|
| 727 |
+
{
|
| 728 |
+
"type": "equation",
|
| 729 |
+
"text": "\n$$\ne _ {\\widetilde {z} _ {i}} = \\operatorname {e n c o d e r} \\left(X \\backslash \\mathcal {Z} _ {i}\\right), \\tag {10}\n$$\n",
|
| 730 |
+
"text_format": "latex",
|
| 731 |
+
"bbox": [
|
| 732 |
+
606,
|
| 733 |
+
676,
|
| 734 |
+
882,
|
| 735 |
+
694
|
| 736 |
+
],
|
| 737 |
+
"page_idx": 2
|
| 738 |
+
},
|
| 739 |
+
{
|
| 740 |
+
"type": "text",
|
| 741 |
+
"text": "utilizing bidirectional Gated Recurrent Units (GRUs) (Cho et al., 2014) as the encoder.",
|
| 742 |
+
"bbox": [
|
| 743 |
+
507,
|
| 744 |
+
700,
|
| 745 |
+
880,
|
| 746 |
+
731
|
| 747 |
+
],
|
| 748 |
+
"page_idx": 2
|
| 749 |
+
},
|
| 750 |
+
{
|
| 751 |
+
"type": "text",
|
| 752 |
+
"text": "Sampling of action. The policy network aims to select a singular action $\\widetilde{a}_i = \\widetilde{z}_i$ from the search space, prioritizing its relevance to the current state $s_i = \\mathcal{Z}_i$ . This selection process is modeled as:",
|
| 753 |
+
"bbox": [
|
| 754 |
+
507,
|
| 755 |
+
732,
|
| 756 |
+
880,
|
| 757 |
+
796
|
| 758 |
+
],
|
| 759 |
+
"page_idx": 2
|
| 760 |
+
},
|
| 761 |
+
{
|
| 762 |
+
"type": "equation",
|
| 763 |
+
"text": "\n$$\np _ {\\widetilde {z} _ {i}} = M L P \\left(\\left[ \\boldsymbol {e} _ {\\widetilde {z} _ {i}}; \\boldsymbol {e} _ {\\mathcal {Z} _ {i}} \\right]\\right), \\tag {11}\n$$\n",
|
| 764 |
+
"text_format": "latex",
|
| 765 |
+
"bbox": [
|
| 766 |
+
600,
|
| 767 |
+
803,
|
| 768 |
+
882,
|
| 769 |
+
821
|
| 770 |
+
],
|
| 771 |
+
"page_idx": 2
|
| 772 |
+
},
|
| 773 |
+
{
|
| 774 |
+
"type": "text",
|
| 775 |
+
"text": "where $e_{\\mathcal{Z}_i}$ indicates the current rationale's representation. The selection probability for each action candidate within $\\widetilde{\\mathbb{A}}_i$ is computed using",
|
| 776 |
+
"bbox": [
|
| 777 |
+
507,
|
| 778 |
+
827,
|
| 779 |
+
882,
|
| 780 |
+
875
|
| 781 |
+
],
|
| 782 |
+
"page_idx": 2
|
| 783 |
+
},
|
| 784 |
+
{
|
| 785 |
+
"type": "equation",
|
| 786 |
+
"text": "\n$$\n\\mathbb {P} _ {\\phi} \\left(\\widetilde {z} _ {i} \\mid \\mathcal {Z} _ {i}, \\hat {y} _ {c}\\right) = \\operatorname {s o f t m a x} _ {\\widetilde {\\mathbb {A}} _ {i}} \\left(p _ {\\widetilde {z} _ {i}}\\right), \\tag {12}\n$$\n",
|
| 787 |
+
"text_format": "latex",
|
| 788 |
+
"bbox": [
|
| 789 |
+
569,
|
| 790 |
+
881,
|
| 791 |
+
882,
|
| 792 |
+
900
|
| 793 |
+
],
|
| 794 |
+
"page_idx": 2
|
| 795 |
+
},
|
| 796 |
+
{
|
| 797 |
+
"type": "text",
|
| 798 |
+
"text": "where $\\phi$ is the parameters collected of MLP.",
|
| 799 |
+
"bbox": [
|
| 800 |
+
507,
|
| 801 |
+
904,
|
| 802 |
+
838,
|
| 803 |
+
920
|
| 804 |
+
],
|
| 805 |
+
"page_idx": 2
|
| 806 |
+
},
|
| 807 |
+
{
|
| 808 |
+
"type": "page_number",
|
| 809 |
+
"text": "512",
|
| 810 |
+
"bbox": [
|
| 811 |
+
485,
|
| 812 |
+
927,
|
| 813 |
+
515,
|
| 814 |
+
940
|
| 815 |
+
],
|
| 816 |
+
"page_idx": 2
|
| 817 |
+
},
|
| 818 |
+
{
|
| 819 |
+
"type": "table",
|
| 820 |
+
"img_path": "images/7d0e42e8825f739d585f6799cb48636881ba2eed9a0287d76273971aed28ab47.jpg",
|
| 821 |
+
"table_caption": [],
|
| 822 |
+
"table_footnote": [],
|
| 823 |
+
"table_body": "<table><tr><td rowspan=\"2\">Methods</td><td rowspan=\"2\">S</td><td colspan=\"3\">Appearance</td><td colspan=\"3\">Aroma</td><td colspan=\"3\">Palate</td></tr><tr><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>RNP (Lei et al., 2016)</td><td>20</td><td>39.4</td><td>44.9</td><td>42.0</td><td>37.5</td><td>51.9</td><td>43.5</td><td>21.6</td><td>38.9</td><td>27.8</td></tr><tr><td>HardKuma (Bastings et al., 2019)</td><td>20</td><td>64.9</td><td>69.2</td><td>67.0</td><td>37.0</td><td>55.8</td><td>44.5</td><td>14.6</td><td>22.3</td><td>17.7</td></tr><tr><td>IB (Paranjape et al., 2020)</td><td>20</td><td>59.3</td><td>69.0</td><td>63.8</td><td>38.6</td><td>55.5</td><td>45.6</td><td>21.6</td><td>48.5</td><td>29.9</td></tr><tr><td>INVRAT (Chang et al., 2020)</td><td>20</td><td>58.9</td><td>67.2</td><td>62.8</td><td>29.3</td><td>52.1</td><td>37.5</td><td>24.0</td><td>55.2</td><td>33.5</td></tr><tr><td>DARE (Yue et al., 2022)</td><td>20</td><td>63.7</td><td>71.8</td><td>67.5</td><td>41.0</td><td>61.5</td><td>49.3</td><td>24.4</td><td>54.9</td><td>33.8</td></tr><tr><td>FR (Liu et al., 2022)</td><td>20</td><td>74.9</td><td>84.9</td><td>79.6</td><td>58.7</td><td>73.3</td><td>65.2</td><td>36.6</td><td>59.4</td><td>45.3</td></tr><tr><td>Inter-RAT (Yue et al., 2023)</td><td>20</td><td>62.0</td><td>76.7</td><td>68.6</td><td>44.2</td><td>65.4</td><td>52.8</td><td>26.3</td><td>59.1</td><td>36.4</td></tr><tr><td>MGR (Liu et al., 2023b)</td><td>20</td><td>76.3</td><td>83.6</td><td>79.8</td><td>64.4</td><td>81.3</td><td>71.9</td><td>47.1</td><td>73.1</td><td>57.3</td></tr><tr><td>AGR(Ours)</td><td>20</td><td>83.7</td><td>87.5</td><td>85.6</td><td>67.5</td><td>81.4</td><td>73.8</td><td>47.6</td><td>77.7</td><td>59.0</td></tr></table>",
|
| 824 |
+
"bbox": [
|
| 825 |
+
136,
|
| 826 |
+
80,
|
| 827 |
+
863,
|
| 828 |
+
227
|
| 829 |
+
],
|
| 830 |
+
"page_idx": 3
|
| 831 |
+
},
|
| 832 |
+
{
|
| 833 |
+
"type": "table",
|
| 834 |
+
"img_path": "images/0cfcfc745925670f828708193e503950dcc94260e2577d1c54da82840fcb83dc.jpg",
|
| 835 |
+
"table_caption": [
|
| 836 |
+
"Table 1: Results on BeerAdvocate, where Bold text indicates the best experimental results across different methods."
|
| 837 |
+
],
|
| 838 |
+
"table_footnote": [],
|
| 839 |
+
"table_body": "<table><tr><td rowspan=\"2\">Methods</td><td colspan=\"4\">Appearance</td><td colspan=\"4\">Appearance</td><td colspan=\"4\">Appearance</td></tr><tr><td>S</td><td>P</td><td>R</td><td>F1</td><td>S</td><td>P</td><td>R</td><td>F1</td><td>S</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>RNP</td><td>10</td><td>32.4</td><td>18.6</td><td>23.6</td><td>20</td><td>39.4</td><td>44.9</td><td>42.0</td><td>30</td><td>24.2</td><td>41.2</td><td>30.5</td></tr><tr><td>DARE</td><td>10</td><td>63.9</td><td>42.8</td><td>51.3</td><td>20</td><td>63.7</td><td>71.8</td><td>67.5</td><td>30</td><td>45.5</td><td>80.6</td><td>58.1</td></tr><tr><td>FR</td><td>10</td><td>70.4</td><td>42.0</td><td>52.6</td><td>20</td><td>74.9</td><td>84.9</td><td>79.6</td><td>30</td><td>50.6</td><td>81.4</td><td>62.3</td></tr><tr><td>Inter-RAT</td><td>10</td><td>66.0</td><td>46.5</td><td>54.6</td><td>20</td><td>62.0</td><td>76.7</td><td>68.6</td><td>30</td><td>48.1</td><td>82.7</td><td>60.8</td></tr><tr><td>MGR</td><td>10</td><td>87.5</td><td>51.7</td><td>65.0</td><td>20</td><td>76.3</td><td>83.6</td><td>79.8</td><td>30</td><td>57.2</td><td>93.9</td><td>71.1</td></tr><tr><td>AGR</td><td>10</td><td>83.5</td><td>54.9</td><td>66.2</td><td>20</td><td>83.7</td><td>87.5</td><td>85.6</td><td>30</td><td>59.7</td><td>94.3</td><td>73.1</td></tr></table>",
|
| 840 |
+
"bbox": [
|
| 841 |
+
122,
|
| 842 |
+
267,
|
| 843 |
+
485,
|
| 844 |
+
337
|
| 845 |
+
],
|
| 846 |
+
"page_idx": 3
|
| 847 |
+
},
|
| 848 |
+
{
|
| 849 |
+
"type": "text",
|
| 850 |
+
"text": "4.2 Policy Gradient Training",
|
| 851 |
+
"text_level": 1,
|
| 852 |
+
"bbox": [
|
| 853 |
+
112,
|
| 854 |
+
380,
|
| 855 |
+
357,
|
| 856 |
+
395
|
| 857 |
+
],
|
| 858 |
+
"page_idx": 3
|
| 859 |
+
},
|
| 860 |
+
{
|
| 861 |
+
"type": "text",
|
| 862 |
+
"text": "Since discrete sampling within the policy network blocks gradients, we adopt policy gradient-based training framework REINFORCE (Sutton et al., 1999). The objective $\\max_{\\Omega}(\\mathbb{L})$ is as follows:",
|
| 863 |
+
"bbox": [
|
| 864 |
+
112,
|
| 865 |
+
401,
|
| 866 |
+
487,
|
| 867 |
+
464
|
| 868 |
+
],
|
| 869 |
+
"page_idx": 3
|
| 870 |
+
},
|
| 871 |
+
{
|
| 872 |
+
"type": "equation",
|
| 873 |
+
"text": "\n$$\n\\max _ {\\phi} \\mathbb {E} _ {\\mathcal {Z} _ {i} \\in \\widetilde {\\mathbb {A}} _ {i}} \\mathbb {E} _ {i} \\left[ \\widetilde {\\mathbb {R}} \\left(\\mathcal {Z} _ {i}, \\bar {z} _ {i}\\right) \\log \\mathcal {P} _ {\\phi} \\left(\\bar {z} _ {i} \\mid \\mathcal {Z} _ {i}, \\hat {y} _ {c}\\right) \\right]. \\tag {13}\n$$\n",
|
| 874 |
+
"text_format": "latex",
|
| 875 |
+
"bbox": [
|
| 876 |
+
122,
|
| 877 |
+
474,
|
| 878 |
+
487,
|
| 879 |
+
499
|
| 880 |
+
],
|
| 881 |
+
"page_idx": 3
|
| 882 |
+
},
|
| 883 |
+
{
|
| 884 |
+
"type": "text",
|
| 885 |
+
"text": "The final task loss is a jointly optimized objective:",
|
| 886 |
+
"bbox": [
|
| 887 |
+
112,
|
| 888 |
+
508,
|
| 889 |
+
487,
|
| 890 |
+
525
|
| 891 |
+
],
|
| 892 |
+
"page_idx": 3
|
| 893 |
+
},
|
| 894 |
+
{
|
| 895 |
+
"type": "equation",
|
| 896 |
+
"text": "\n$$\n\\min _ {\\theta_ {g}, \\theta_ {p}} \\mathcal {H} (Y, \\hat {Y}) + \\Omega (M) - \\Omega (\\mathbb {L}), s. t. \\hat {Y} = f _ {P} \\left(f _ {G} (X)\\right) \\tag {14}\n$$\n",
|
| 897 |
+
"text_format": "latex",
|
| 898 |
+
"bbox": [
|
| 899 |
+
112,
|
| 900 |
+
533,
|
| 901 |
+
489,
|
| 902 |
+
573
|
| 903 |
+
],
|
| 904 |
+
"page_idx": 3
|
| 905 |
+
},
|
| 906 |
+
{
|
| 907 |
+
"type": "text",
|
| 908 |
+
"text": "5 Experiments",
|
| 909 |
+
"text_level": 1,
|
| 910 |
+
"bbox": [
|
| 911 |
+
112,
|
| 912 |
+
585,
|
| 913 |
+
258,
|
| 914 |
+
602
|
| 915 |
+
],
|
| 916 |
+
"page_idx": 3
|
| 917 |
+
},
|
| 918 |
+
{
|
| 919 |
+
"type": "text",
|
| 920 |
+
"text": "5.1 Datasets, Baselines and Evaluation Metrics",
|
| 921 |
+
"text_level": 1,
|
| 922 |
+
"bbox": [
|
| 923 |
+
112,
|
| 924 |
+
611,
|
| 925 |
+
431,
|
| 926 |
+
640
|
| 927 |
+
],
|
| 928 |
+
"page_idx": 3
|
| 929 |
+
},
|
| 930 |
+
{
|
| 931 |
+
"type": "text",
|
| 932 |
+
"text": "Datasets. We compare AGR using BeerAdvocate (McAuley et al., 2012) and HotelReview (Wang et al., 2010) datasets, which are two multi-aspect sentiment classification datasets widely used in rationalization. Following existing work, we obtain the data in the same way as Yue et al. (2023) for BeerAdvocate, and we preprocess HotelReview dataset in the same way as Huang et al. (2021) and Liu et al. (2023b).",
|
| 933 |
+
"bbox": [
|
| 934 |
+
112,
|
| 935 |
+
646,
|
| 936 |
+
487,
|
| 937 |
+
790
|
| 938 |
+
],
|
| 939 |
+
"page_idx": 3
|
| 940 |
+
},
|
| 941 |
+
{
|
| 942 |
+
"type": "text",
|
| 943 |
+
"text": "Baselines. We compare with eight models for Beer-Advocate, including three sampling-based methods: RNP (Lei et al., 2016), HardKuma (Bastings et al., 2019), Information Bottleneck (IB) (Paranjape et al., 2020), and three extra-component-based methods: DARE (Yue et al., 2022), FR (Liu et al., 2022), MGR (Liu et al., 2023b), and two causal-based methods: INVRAT (Chang et al., 2020),",
|
| 944 |
+
"bbox": [
|
| 945 |
+
112,
|
| 946 |
+
793,
|
| 947 |
+
489,
|
| 948 |
+
921
|
| 949 |
+
],
|
| 950 |
+
"page_idx": 3
|
| 951 |
+
},
|
| 952 |
+
{
|
| 953 |
+
"type": "table",
|
| 954 |
+
"img_path": "images/770b05dac6358605b424fb7e913e76635837939a7ac29c186b0fe3ec31d9bebb.jpg",
|
| 955 |
+
"table_caption": [
|
| 956 |
+
"Table 2: The different sparsity results on BeerAdvocate."
|
| 957 |
+
],
|
| 958 |
+
"table_footnote": [],
|
| 959 |
+
"table_body": "<table><tr><td colspan=\"2\">Methods</td><td>S</td><td>P</td><td>R</td><td>F1</td></tr><tr><td rowspan=\"5\">Location</td><td>RNP (Lei et al., 2016)</td><td>10.9</td><td>43.3</td><td>55.5</td><td>48.6</td></tr><tr><td>CAR (Chang et al., 2019)</td><td>10.6</td><td>46.6</td><td>58.1</td><td>51.7</td></tr><tr><td>DMR (Huang et al., 2021)</td><td>10.7</td><td>47.5</td><td>60.1</td><td>53.1</td></tr><tr><td>A2R (Yu et al., 2021)</td><td>8.5</td><td>43.1</td><td>43.2</td><td>43.1</td></tr><tr><td>MGR (Liu et al., 2023b)</td><td>9.7</td><td>52.5</td><td>60.5</td><td>56.2</td></tr><tr><td colspan=\"2\">AGR(Ours)</td><td>9.3</td><td>54.9</td><td>60.5</td><td>57.6</td></tr><tr><td colspan=\"2\"></td><td>S</td><td>P</td><td>R</td><td>F1</td></tr><tr><td rowspan=\"5\">Service</td><td>RNP (Lei et al., 2016)</td><td>11.0</td><td>40.0</td><td>38.2</td><td>39.1</td></tr><tr><td>CAR (Chang et al., 2019)</td><td>11.7</td><td>40.7</td><td>41.4</td><td>41.1</td></tr><tr><td>DMR (Huang et al., 2021)</td><td>11.6</td><td>43.0</td><td>43.6</td><td>43.3</td></tr><tr><td>A2R (Yu et al., 2021)</td><td>11.4</td><td>37.3</td><td>37.2</td><td>37.2</td></tr><tr><td>MGR (Liu et al., 2023b)</td><td>11.8</td><td>45.0</td><td>46.4</td><td>45.7</td></tr><tr><td colspan=\"2\">AGR(Ours)</td><td>12.3</td><td>45.9</td><td>49.3</td><td>47.6</td></tr><tr><td colspan=\"2\"></td><td>S</td><td>P</td><td>R</td><td>F1</td></tr><tr><td rowspan=\"5\">Cleanliness</td><td>RNP (Lei et al., 2016)</td><td>10.6</td><td>30.5</td><td>36.0</td><td>33.0</td></tr><tr><td>CAR (Chang et al., 2019)</td><td>9.9</td><td>32.3</td><td>35.7</td><td>33.9</td></tr><tr><td>DMR (Huang et al., 2021)</td><td>10.3</td><td>31.4</td><td>36.4</td><td>33.7</td></tr><tr><td>A2R (Yu et al., 2021)</td><td>8.9</td><td>33.2</td><td>33.3</td><td>33.3</td></tr><tr><td>MGR (Liu et al., 2023b)</td><td>10.5</td><td>37.6</td><td>44.5</td><td>40.7</td></tr><tr><td colspan=\"2\">AGR(Ours)</td><td>10.3</td><td>39.0</td><td>45.5</td><td>42.0</td></tr></table>",
|
| 960 |
+
"bbox": [
|
| 961 |
+
519,
|
| 962 |
+
267,
|
| 963 |
+
878,
|
| 964 |
+
498
|
| 965 |
+
],
|
| 966 |
+
"page_idx": 3
|
| 967 |
+
},
|
| 968 |
+
{
|
| 969 |
+
"type": "text",
|
| 970 |
+
"text": "Table 3: The experimental results on HotelReview.",
|
| 971 |
+
"bbox": [
|
| 972 |
+
522,
|
| 973 |
+
507,
|
| 974 |
+
867,
|
| 975 |
+
521
|
| 976 |
+
],
|
| 977 |
+
"page_idx": 3
|
| 978 |
+
},
|
| 979 |
+
{
|
| 980 |
+
"type": "text",
|
| 981 |
+
"text": "Inter-RAT (Yue et al., 2023). For HotelReview dataset, we compare with five models, including RNP (Lei et al., 2016), CAR (Chang et al., 2019), DMR (Huang et al., 2021), A2R (Yu et al., 2021), and MGR (Liu et al., 2023b).",
|
| 982 |
+
"bbox": [
|
| 983 |
+
507,
|
| 984 |
+
542,
|
| 985 |
+
882,
|
| 986 |
+
621
|
| 987 |
+
],
|
| 988 |
+
"page_idx": 3
|
| 989 |
+
},
|
| 990 |
+
{
|
| 991 |
+
"type": "text",
|
| 992 |
+
"text": "Evaluation Metrics. Following (Huang et al., 2021; Yu et al., 2021; Yue et al., 2023; Liu et al., 2023b), we focus on the quality of rationales, and adopt Precision (P), Recall (R), and F1-score (F1) as metrics. We perform the best results on the validation set before testing on the test set. The Appendix B provides further details in this section.",
|
| 993 |
+
"bbox": [
|
| 994 |
+
507,
|
| 995 |
+
624,
|
| 996 |
+
882,
|
| 997 |
+
736
|
| 998 |
+
],
|
| 999 |
+
"page_idx": 3
|
| 1000 |
+
},
|
| 1001 |
+
{
|
| 1002 |
+
"type": "text",
|
| 1003 |
+
"text": "5.2 Performance Comparison",
|
| 1004 |
+
"text_level": 1,
|
| 1005 |
+
"bbox": [
|
| 1006 |
+
507,
|
| 1007 |
+
752,
|
| 1008 |
+
757,
|
| 1009 |
+
768
|
| 1010 |
+
],
|
| 1011 |
+
"page_idx": 3
|
| 1012 |
+
},
|
| 1013 |
+
{
|
| 1014 |
+
"type": "text",
|
| 1015 |
+
"text": "Results on BeerAdvocate. As shown in Table 1, our proposed method AGR outperforms all the eight baselines in terms of three aspects for BeerAdvocate dataset. Furthermore, in sparsity experiments (Table 2), AGR consistently outperforms the latest state-of-the-art results, affirming its effectiveness for selective rationalization.",
|
| 1016 |
+
"bbox": [
|
| 1017 |
+
507,
|
| 1018 |
+
775,
|
| 1019 |
+
882,
|
| 1020 |
+
885
|
| 1021 |
+
],
|
| 1022 |
+
"page_idx": 3
|
| 1023 |
+
},
|
| 1024 |
+
{
|
| 1025 |
+
"type": "text",
|
| 1026 |
+
"text": "Results on HotelReview. Table 3 shows that our model once again obtains the best performance",
|
| 1027 |
+
"bbox": [
|
| 1028 |
+
507,
|
| 1029 |
+
889,
|
| 1030 |
+
882,
|
| 1031 |
+
921
|
| 1032 |
+
],
|
| 1033 |
+
"page_idx": 3
|
| 1034 |
+
},
|
| 1035 |
+
{
|
| 1036 |
+
"type": "page_number",
|
| 1037 |
+
"text": "513",
|
| 1038 |
+
"bbox": [
|
| 1039 |
+
485,
|
| 1040 |
+
928,
|
| 1041 |
+
515,
|
| 1042 |
+
940
|
| 1043 |
+
],
|
| 1044 |
+
"page_idx": 3
|
| 1045 |
+
},
|
| 1046 |
+
{
|
| 1047 |
+
"type": "table",
|
| 1048 |
+
"img_path": "images/cec704e6375e298ff4dc245c9e3a491c95c6aec60fc572efada250f1f7a1b0a9.jpg",
|
| 1049 |
+
"table_caption": [
|
| 1050 |
+
"Table 4: Examples of generated rationales. Human-annotated rationales are underlined. Rationales from three models are highlighted in blue and are denoted as ${Z}_{1},{Z}_{2}$ and ${Z}_{3}$ respectively."
|
| 1051 |
+
],
|
| 1052 |
+
"table_footnote": [],
|
| 1053 |
+
"table_body": "<table><tr><td>FR (2022)</td><td>MGR (2023b)</td><td>AGR (Ours)</td></tr><tr><td>Aspect: Beer-Appearance\nLabel: Positive, Pred: Positive\nText: i picked this beer up on a whim as i was in the mood for a good coffee stout and the siren-like figure somehow told me this is the beer for you . a bit freaky , but i went with it . i was impressed from the very first pour . like any stout , the color is a dark molasses black . but … the head was thick and dense with good retention . the coffee aroma was intense ! the roasted goodness almost overwhelms my sense of smell .the roasted coffee flavors are the first things that i could taste along with hints of chocolate . however , i can tell there ’s more complexity here than my palette can decipher . the coffee flavors bring bitterness but it ’s not over powering as the sweetness of the malt cuts the bitterness quite nicely the beer has carbonation but once the bubbles have escaped the beer gives a creamy , velvety feel and finish . the alcohol was very well hidden in this beer which is scary …</td><td>Aspect: Beer-Appearance\nLabel: Positive, Pred: Positive\nText: i picked this beer up on a whim as i was in the mood for a good coffee stout and the siren-like figure somehow told me this is the beer for you . a bit freaky , but i went with it . i was impressed from the very first pour . like any stout , the color is a dark molasses black . but … the head was thick and dense with good retention . the caffeine aroma was intense ! the roasted goodness almost overwhelms my sense of smell .the roasted coffee flavors are the first things that i could taste along with hints of chocolate . however , i can tell there ’s more complexity here than my palette can decipher . the coffee flavors bring bitterness but it ’s not over powering as the sweetness of the malt cuts the bitterness quite nicely the beer has carbonation but once the bubbles have escaped the beer gives a creamy , velvety feel and finish . the alcoholwas very well hidden in this beer which is scary …</td><td>Aspect: Beer-Appearance\nLabel: Positive, Pred: Positive\nText: i picked this beer up on a whim as i was in the mood for a good coffee stout and the siren-like figure somehow told me this is the beer for you . a bit freaky , but i went with it . i was impressed from the very first pour . like any stout , the color is a dark molasses black . but … theHead was thick and dense with good retention . the caffeine aroma was intense ! the roasted goodness almost overwhelms my sense of smell .the roasted coffee flavors are the first things that i could taste along with hints of chocolate . however , i can tell there ’s more complexity here than my palette can decipher . the coffee flavors bring bitterness but it ’s not over powering as the sweetness of the malt cuts the bitterness quite nicely the beer has carbonation but once the bubbles have escaped the beer gives a creamy . velvety feel and finish . the alcohol was very well hidden in this beer which is scary …</td></tr></table>",
|
| 1054 |
+
"bbox": [
|
| 1055 |
+
119,
|
| 1056 |
+
122,
|
| 1057 |
+
878,
|
| 1058 |
+
464
|
| 1059 |
+
],
|
| 1060 |
+
"page_idx": 4
|
| 1061 |
+
},
|
| 1062 |
+
{
|
| 1063 |
+
"type": "table",
|
| 1064 |
+
"img_path": "images/78250e65fbaacfcd0fcca3177bc62dfbb259c25aa9f49becebbba08f064401e8.jpg",
|
| 1065 |
+
"table_caption": [],
|
| 1066 |
+
"table_footnote": [],
|
| 1067 |
+
"table_body": "<table><tr><td rowspan=\"2\">Methods</td><td colspan=\"4\">Appearance</td></tr><tr><td>S</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>AGR</td><td>20</td><td>83.7</td><td>87.5</td><td>85.6</td></tr><tr><td>-w/o causal.</td><td>20</td><td>81.5</td><td>87.8</td><td>84.5</td></tr><tr><td>-w/o embedd.</td><td>20</td><td>81.9</td><td>86.9</td><td>84.3</td></tr><tr><td>-w/o both</td><td>20</td><td>74.3</td><td>85.2</td><td>79.4</td></tr></table>",
|
| 1068 |
+
"bbox": [
|
| 1069 |
+
164,
|
| 1070 |
+
476,
|
| 1071 |
+
442,
|
| 1072 |
+
552
|
| 1073 |
+
],
|
| 1074 |
+
"page_idx": 4
|
| 1075 |
+
},
|
| 1076 |
+
{
|
| 1077 |
+
"type": "text",
|
| 1078 |
+
"text": "across all multi-aspects datasets consistently.",
|
| 1079 |
+
"bbox": [
|
| 1080 |
+
112,
|
| 1081 |
+
595,
|
| 1082 |
+
447,
|
| 1083 |
+
609
|
| 1084 |
+
],
|
| 1085 |
+
"page_idx": 4
|
| 1086 |
+
},
|
| 1087 |
+
{
|
| 1088 |
+
"type": "text",
|
| 1089 |
+
"text": "Ablation Studies. To further verify the effectiveness of AGR, we conduct the ablation experiments. As depicted in Table 5, removing either the optimized objective of causal effectiveness (referred to as causal.), the pretrained agent embedding (referred to as embedd.), or both, results in a notable decline in AGR's performance, underscoring the critical roles played by our proposed key components in AGR method.",
|
| 1090 |
+
"bbox": [
|
| 1091 |
+
112,
|
| 1092 |
+
613,
|
| 1093 |
+
487,
|
| 1094 |
+
755
|
| 1095 |
+
],
|
| 1096 |
+
"page_idx": 4
|
| 1097 |
+
},
|
| 1098 |
+
{
|
| 1099 |
+
"type": "text",
|
| 1100 |
+
"text": "Further Analyses. Firstly, we compare AGR with FR and MGR, providing the visualized examples. For example, we can observe from Table 4 that although all three methods are able to focus on the appearance aspect, FR and MGR still exhibit some degeneration (since the selective rationale still has some distance from the target prediction). However, AGR utilizes causal calculus to capture the causal variations between $Z_{1}$ and $Z_{2}$ , as well as between $Z_{2}$ and $Z_{3}$ , regarding the target prediction,",
|
| 1101 |
+
"bbox": [
|
| 1102 |
+
112,
|
| 1103 |
+
760,
|
| 1104 |
+
489,
|
| 1105 |
+
921
|
| 1106 |
+
],
|
| 1107 |
+
"page_idx": 4
|
| 1108 |
+
},
|
| 1109 |
+
{
|
| 1110 |
+
"type": "text",
|
| 1111 |
+
"text": "thereby gradually mitigating this degeneration during the training process. The Appendix C presents more visualized examples. Secondly, similar to (Liu et al., 2023b), we also compare the complexity of AGR with other models. As shown in Table 6, we can see that the complexity of AGR has been somewhat improved compared to latest work; however, there is still room for further improvement. This will be a key focus of future research.",
|
| 1112 |
+
"bbox": [
|
| 1113 |
+
507,
|
| 1114 |
+
479,
|
| 1115 |
+
884,
|
| 1116 |
+
624
|
| 1117 |
+
],
|
| 1118 |
+
"page_idx": 4
|
| 1119 |
+
},
|
| 1120 |
+
{
|
| 1121 |
+
"type": "table",
|
| 1122 |
+
"img_path": "images/1f6b4b427251cd18b1ab488628ab9bd25978cdb7a2e34e81aa9beec79b4851a8.jpg",
|
| 1123 |
+
"table_caption": [
|
| 1124 |
+
"Table 5: Ablation studies on the BeerAdvocate."
|
| 1125 |
+
],
|
| 1126 |
+
"table_footnote": [],
|
| 1127 |
+
"table_body": "<table><tr><td></td><td>RNP</td><td>FR</td><td>AGR</td><td>CAR</td></tr><tr><td>modules parameters</td><td>1gen+1pred 2×</td><td>1gen+1pred 2×</td><td>1gen+1pred+1agent 3×</td><td>1gen+2pred 3×</td></tr><tr><td></td><td>DARE</td><td>CAR</td><td>DMR</td><td>MGR</td></tr><tr><td>modules parameters</td><td>1gen+1pred+guider 3×</td><td>1gen+2pred 3×</td><td>1gen+3pred 4×</td><td>3gen+1pred 4×</td></tr></table>",
|
| 1128 |
+
"bbox": [
|
| 1129 |
+
515,
|
| 1130 |
+
634,
|
| 1131 |
+
882,
|
| 1132 |
+
697
|
| 1133 |
+
],
|
| 1134 |
+
"page_idx": 4
|
| 1135 |
+
},
|
| 1136 |
+
{
|
| 1137 |
+
"type": "text",
|
| 1138 |
+
"text": "Table 6: The complexity of different models. \"gen\": generator. \"pred\": predictor.",
|
| 1139 |
+
"bbox": [
|
| 1140 |
+
507,
|
| 1141 |
+
706,
|
| 1142 |
+
882,
|
| 1143 |
+
736
|
| 1144 |
+
],
|
| 1145 |
+
"page_idx": 4
|
| 1146 |
+
},
|
| 1147 |
+
{
|
| 1148 |
+
"type": "text",
|
| 1149 |
+
"text": "6 Conclusion",
|
| 1150 |
+
"text_level": 1,
|
| 1151 |
+
"bbox": [
|
| 1152 |
+
507,
|
| 1153 |
+
751,
|
| 1154 |
+
640,
|
| 1155 |
+
766
|
| 1156 |
+
],
|
| 1157 |
+
"page_idx": 4
|
| 1158 |
+
},
|
| 1159 |
+
{
|
| 1160 |
+
"type": "text",
|
| 1161 |
+
"text": "In this paper, we propose AGR, a reinforced causal agent-based rationalization approach to guide the cooperative game optimization during rationale training. Our theoretical insights underscore the necessity of this guidance signal for accurate predictions. Empirical evaluations on two widely-used benchmarks indicate the effectiveness of our proposed approach, surpassing existing state-of-the-art methods for selective rationalization.",
|
| 1162 |
+
"bbox": [
|
| 1163 |
+
507,
|
| 1164 |
+
776,
|
| 1165 |
+
884,
|
| 1166 |
+
919
|
| 1167 |
+
],
|
| 1168 |
+
"page_idx": 4
|
| 1169 |
+
},
|
| 1170 |
+
{
|
| 1171 |
+
"type": "page_number",
|
| 1172 |
+
"text": "514",
|
| 1173 |
+
"bbox": [
|
| 1174 |
+
485,
|
| 1175 |
+
928,
|
| 1176 |
+
515,
|
| 1177 |
+
940
|
| 1178 |
+
],
|
| 1179 |
+
"page_idx": 4
|
| 1180 |
+
},
|
| 1181 |
+
{
|
| 1182 |
+
"type": "text",
|
| 1183 |
+
"text": "Limitations",
|
| 1184 |
+
"text_level": 1,
|
| 1185 |
+
"bbox": [
|
| 1186 |
+
114,
|
| 1187 |
+
84,
|
| 1188 |
+
220,
|
| 1189 |
+
98
|
| 1190 |
+
],
|
| 1191 |
+
"page_idx": 5
|
| 1192 |
+
},
|
| 1193 |
+
{
|
| 1194 |
+
"type": "text",
|
| 1195 |
+
"text": "There are still some limitations that need further improvement in the future. Firstly, optimizing cooperative game of rationalization during training brings great significance to the model performance, but how to more efficiently search for meaningful actions within a larger search space for good rationales remains the next direction to explore. Nextly, this work does not involve the debiasing techniques of data-level. Considering the debiasing technique may be a good way to further improve the results. In addition, as the latest research (Chen et al., 2022; Liu et al., 2023a,b) has shown that it is still a challenging task to finetune pretrained language models on the cooperative game framework. Therefore, how to incorporate the cooperative framework and (large) language models is a research interest.",
|
| 1196 |
+
"bbox": [
|
| 1197 |
+
112,
|
| 1198 |
+
109,
|
| 1199 |
+
492,
|
| 1200 |
+
367
|
| 1201 |
+
],
|
| 1202 |
+
"page_idx": 5
|
| 1203 |
+
},
|
| 1204 |
+
{
|
| 1205 |
+
"type": "text",
|
| 1206 |
+
"text": "Ethics Statement",
|
| 1207 |
+
"text_level": 1,
|
| 1208 |
+
"bbox": [
|
| 1209 |
+
114,
|
| 1210 |
+
378,
|
| 1211 |
+
265,
|
| 1212 |
+
394
|
| 1213 |
+
],
|
| 1214 |
+
"page_idx": 5
|
| 1215 |
+
},
|
| 1216 |
+
{
|
| 1217 |
+
"type": "text",
|
| 1218 |
+
"text": "This paper does not involve the presentation of a new dataset and the utilization of demographic or identity characteristics information.",
|
| 1219 |
+
"bbox": [
|
| 1220 |
+
112,
|
| 1221 |
+
404,
|
| 1222 |
+
489,
|
| 1223 |
+
453
|
| 1224 |
+
],
|
| 1225 |
+
"page_idx": 5
|
| 1226 |
+
},
|
| 1227 |
+
{
|
| 1228 |
+
"type": "text",
|
| 1229 |
+
"text": "Acknowledgements",
|
| 1230 |
+
"text_level": 1,
|
| 1231 |
+
"bbox": [
|
| 1232 |
+
114,
|
| 1233 |
+
464,
|
| 1234 |
+
285,
|
| 1235 |
+
481
|
| 1236 |
+
],
|
| 1237 |
+
"page_idx": 5
|
| 1238 |
+
},
|
| 1239 |
+
{
|
| 1240 |
+
"type": "text",
|
| 1241 |
+
"text": "We would like to thank all the anonymous reviewers for their valuable feedback. This work was supported by the National Natural Science Foundation of China (Nos.62376144, 62272285, 62076155) and the Science and Technology Cooperation and Exchange Special Project of Shanxi Province (No.202204041101016).",
|
| 1242 |
+
"bbox": [
|
| 1243 |
+
112,
|
| 1244 |
+
489,
|
| 1245 |
+
489,
|
| 1246 |
+
602
|
| 1247 |
+
],
|
| 1248 |
+
"page_idx": 5
|
| 1249 |
+
},
|
| 1250 |
+
{
|
| 1251 |
+
"type": "text",
|
| 1252 |
+
"text": "References",
|
| 1253 |
+
"text_level": 1,
|
| 1254 |
+
"bbox": [
|
| 1255 |
+
114,
|
| 1256 |
+
629,
|
| 1257 |
+
213,
|
| 1258 |
+
644
|
| 1259 |
+
],
|
| 1260 |
+
"page_idx": 5
|
| 1261 |
+
},
|
| 1262 |
+
{
|
| 1263 |
+
"type": "list",
|
| 1264 |
+
"sub_type": "ref_text",
|
| 1265 |
+
"list_items": [
|
| 1266 |
+
"Yujia Bao, Shiyu Chang, Mo Yu, and Regina Barzilay. 2018. Deriving machine attention from human rationales. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1903-1913, Brussels, Belgium. Association for Computational Linguistics.",
|
| 1267 |
+
"Jasmijn Bastings, Wilker Aziz, and Ivan Titov. 2019. Interpretable neural predictions with differentiable binary variables. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2963-2977, Florence, Italy. Association for Computational Linguistics.",
|
| 1268 |
+
"Shiyu Chang, Yang Zhang, Mo Yu, and Tommi Jaakkola. 2019. A game theoretic approach to class-wise selective rationalization. Advances in neural information processing systems, 32.",
|
| 1269 |
+
"Shiyu Chang, Yang Zhang, Mo Yu, and Tommi Jaakkola. 2020. Invariant rationalization. In International"
|
| 1270 |
+
],
|
| 1271 |
+
"bbox": [
|
| 1272 |
+
115,
|
| 1273 |
+
652,
|
| 1274 |
+
489,
|
| 1275 |
+
920
|
| 1276 |
+
],
|
| 1277 |
+
"page_idx": 5
|
| 1278 |
+
},
|
| 1279 |
+
{
|
| 1280 |
+
"type": "list",
|
| 1281 |
+
"sub_type": "ref_text",
|
| 1282 |
+
"list_items": [
|
| 1283 |
+
"Conference on Machine Learning, pages 1448-1458. PMLR.",
|
| 1284 |
+
"Howard Chen, Jacqueline He, Karthik Narasimhan, and Danqi Chen. 2022. Can rationalization improve robustness? In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3792-3805, Seattle, United States. Association for Computational Linguistics.",
|
| 1285 |
+
"Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar. Association for Computational Linguistics.",
|
| 1286 |
+
"Nicola De Cao, Michael Sejr Schlichtkrull, Wilker Aziz, and Ivan Titov. 2020. How do decisions emerge across layers in neural models? interpretation with differentiable masking. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3243-3255, Online. Association for Computational Linguistics.",
|
| 1287 |
+
"Zhiying Deng, Jianjun Li, Zhiqiang Guo, and Guohui Li. 2023. Multi-aspect interest neighbor-augmented network for next-basket recommendation. ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5.",
|
| 1288 |
+
"Yongfeng Huang, Yujun Chen, Yulun Du, and Zhilin Yang. 2021. Distribution matching for rationalization. In AAAI Conference on Artificial Intelligence.",
|
| 1289 |
+
"Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), San Diego, CA, USA.",
|
| 1290 |
+
"Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107-117, Austin, Texas. Association for Computational Linguistics.",
|
| 1291 |
+
"Wei Liu, Haozhao Wang, Jun Wang, Zhiying Deng, YuanKai Zhang, Cheng Wang, and Ruixuan Li. 2023a. Enhancing the rationale-input alignment for self-explaining rationalization. arXiv preprint arXiv:2312.04103.",
|
| 1292 |
+
"Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Xinyang Li, YuanKai Zhang, and Yang Qiu. 2023b. MGR: Multi-generator based rationalization. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12771-12787, Toronto, Canada. Association for Computational Linguistics."
|
| 1293 |
+
],
|
| 1294 |
+
"bbox": [
|
| 1295 |
+
510,
|
| 1296 |
+
85,
|
| 1297 |
+
884,
|
| 1298 |
+
920
|
| 1299 |
+
],
|
| 1300 |
+
"page_idx": 5
|
| 1301 |
+
},
|
| 1302 |
+
{
|
| 1303 |
+
"type": "page_number",
|
| 1304 |
+
"text": "515",
|
| 1305 |
+
"bbox": [
|
| 1306 |
+
485,
|
| 1307 |
+
928,
|
| 1308 |
+
515,
|
| 1309 |
+
940
|
| 1310 |
+
],
|
| 1311 |
+
"page_idx": 5
|
| 1312 |
+
},
|
| 1313 |
+
{
|
| 1314 |
+
"type": "list",
|
| 1315 |
+
"sub_type": "ref_text",
|
| 1316 |
+
"list_items": [
|
| 1317 |
+
"Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Chao Yue, and YuanKai Zhang. 2022. Fr: Folded rationalization with a unified encoder. Advances in Neural Information Processing Systems, 35:6954-6966.",
|
| 1318 |
+
"Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Zhiying Deng, YuanKai Zhang, and Yang Qiu. 2023c. D-separation for causal self-explanation. In Thirty-seventh Conference on Neural Information Processing Systems.",
|
| 1319 |
+
"Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Yang Qiu, Yuankai Zhang, Jie Han, and Yixiong Zou. 2023d. Decoupled rationalization with asymmetric learning rates: A flexible lipschitz restraint. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 1535-1547.",
|
| 1320 |
+
"Julian McAuley, Jure Leskovec, and Dan Jurafsky. 2012. Learning attitudes and attributes from multi-aspect reviews. 2012 IEEE 12th International Conference on Data Mining, pages 1020-1025.",
|
| 1321 |
+
"Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. An information bottleneck approach for controlling conciseness in rationale extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 1938-1952, Online. Association for Computational Linguistics.",
|
| 1322 |
+
"Judea Pearl. 2009. Causality. Cambridge university press.",
|
| 1323 |
+
"Judea Pearl, Madelyn Glymour, and Nicholas P Jewell. 2016. Causal inference in statistics: A primer. John Wiley & Sons.",
|
| 1324 |
+
"Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
|
| 1325 |
+
"Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour. 1999. Policy gradient methods for reinforcement learning with function approximation. Advances in neural information processing systems, 12.",
|
| 1326 |
+
"Hongning Wang, Yue Lu, and Chengxiang Zhai. 2010. Latent aspect rating analysis on review text data: A rating regression approach. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '10, page 783-792, New York, NY, USA. Association for Computing Machinery.",
|
| 1327 |
+
"Mo Yu, Shiyu Chang, Yang Zhang, and Tommi S Jaakkola. 2019. Rethinking cooperative rationalization: Introspective extraction and complement control. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing."
|
| 1328 |
+
],
|
| 1329 |
+
"bbox": [
|
| 1330 |
+
115,
|
| 1331 |
+
85,
|
| 1332 |
+
485,
|
| 1333 |
+
920
|
| 1334 |
+
],
|
| 1335 |
+
"page_idx": 6
|
| 1336 |
+
},
|
| 1337 |
+
{
|
| 1338 |
+
"type": "list",
|
| 1339 |
+
"sub_type": "ref_text",
|
| 1340 |
+
"list_items": [
|
| 1341 |
+
"Mo Yu, Yang Zhang, Shiyu Chang, and Tommi Jaakkola. 2021. Understanding interlocking dynamics of cooperative rationalization. Advances in Neural Information Processing Systems, 34:12822-12835.",
|
| 1342 |
+
"Hao Yuan, Lei Cai, Xia Hu, Jie Wang, and Shuiwang Ji. 2020. Interpreting image classifiers by generating discrete masks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(4).",
|
| 1343 |
+
"Linan Yue, Qi Liu, Yichao Du, Yanqing An, Li Wang, and Enhong Chen. 2022. Dare: Disentanglement-augmented rationale extraction. Advances in Neural Information Processing Systems, 35:26603-26617.",
|
| 1344 |
+
"Linan Yue, Qi Liu, Li Wang, Yanqing An, Yichao Du, and Zhenya Huang. 2023. Interventional rationalization. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 11404-11418, Singapore. Association for Computational Linguistics.",
|
| 1345 |
+
"Wenbo Zhang, Tong Wu, Yunlong Wang, Yong Cai, and Hengrui Cai. 2023. Towards trustworthy explanation: on causal rationalization. In Proceedings of the 40th International Conference on Machine Learning. JMLR.org."
|
| 1346 |
+
],
|
| 1347 |
+
"bbox": [
|
| 1348 |
+
510,
|
| 1349 |
+
85,
|
| 1350 |
+
880,
|
| 1351 |
+
428
|
| 1352 |
+
],
|
| 1353 |
+
"page_idx": 6
|
| 1354 |
+
},
|
| 1355 |
+
{
|
| 1356 |
+
"type": "page_number",
|
| 1357 |
+
"text": "516",
|
| 1358 |
+
"bbox": [
|
| 1359 |
+
485,
|
| 1360 |
+
928,
|
| 1361 |
+
515,
|
| 1362 |
+
939
|
| 1363 |
+
],
|
| 1364 |
+
"page_idx": 6
|
| 1365 |
+
},
|
| 1366 |
+
{
|
| 1367 |
+
"type": "text",
|
| 1368 |
+
"text": "A Proof of Lemma 1",
|
| 1369 |
+
"text_level": 1,
|
| 1370 |
+
"bbox": [
|
| 1371 |
+
114,
|
| 1372 |
+
84,
|
| 1373 |
+
309,
|
| 1374 |
+
99
|
| 1375 |
+
],
|
| 1376 |
+
"page_idx": 7
|
| 1377 |
+
},
|
| 1378 |
+
{
|
| 1379 |
+
"type": "text",
|
| 1380 |
+
"text": "Given random variables $X, Z, Y$ , and $\\mathcal{A}$ , where $\\mathcal{A}$ is drawn from the distribution of $X$ . According to Section 2, to obtain a good predictor, we have",
|
| 1381 |
+
"bbox": [
|
| 1382 |
+
112,
|
| 1383 |
+
105,
|
| 1384 |
+
487,
|
| 1385 |
+
154
|
| 1386 |
+
],
|
| 1387 |
+
"page_idx": 7
|
| 1388 |
+
},
|
| 1389 |
+
{
|
| 1390 |
+
"type": "equation",
|
| 1391 |
+
"text": "\n$$\n\\min _ {\\theta_ {g}, \\theta_ {p}} \\mathcal {H} (Y, \\hat {Y}) = \\min _ {\\theta_ {g}, \\theta_ {p}} \\mathcal {H} (Y, f _ {P} (Z)), \\tag {15}\n$$\n",
|
| 1392 |
+
"text_format": "latex",
|
| 1393 |
+
"bbox": [
|
| 1394 |
+
151,
|
| 1395 |
+
164,
|
| 1396 |
+
487,
|
| 1397 |
+
191
|
| 1398 |
+
],
|
| 1399 |
+
"page_idx": 7
|
| 1400 |
+
},
|
| 1401 |
+
{
|
| 1402 |
+
"type": "text",
|
| 1403 |
+
"text": "where $Z = f_{G}(X)$ . It means that we need to minimize $H(Y,Z)$ (Liu et al., 2023b), i.e., to reduce more uncertainty and indicate the label $Y$ . We assume that exist variable $\\mathcal{A}$ could make to reduce the uncertainty of learning $Y$ , then our goal is to make $H(Y,\\mathcal{A})\\leq H(Y,Z)$ .",
|
| 1404 |
+
"bbox": [
|
| 1405 |
+
112,
|
| 1406 |
+
193,
|
| 1407 |
+
487,
|
| 1408 |
+
290
|
| 1409 |
+
],
|
| 1410 |
+
"page_idx": 7
|
| 1411 |
+
},
|
| 1412 |
+
{
|
| 1413 |
+
"type": "text",
|
| 1414 |
+
"text": "According to the mutual information formula, we can obtain:",
|
| 1415 |
+
"bbox": [
|
| 1416 |
+
112,
|
| 1417 |
+
291,
|
| 1418 |
+
487,
|
| 1419 |
+
321
|
| 1420 |
+
],
|
| 1421 |
+
"page_idx": 7
|
| 1422 |
+
},
|
| 1423 |
+
{
|
| 1424 |
+
"type": "equation",
|
| 1425 |
+
"text": "\n$$\nH (Y) - H (Y, \\mathcal {A}) \\geq H (Y) - H (Y, Z), \\tag {16}\n$$\n",
|
| 1426 |
+
"text_format": "latex",
|
| 1427 |
+
"bbox": [
|
| 1428 |
+
139,
|
| 1429 |
+
334,
|
| 1430 |
+
487,
|
| 1431 |
+
351
|
| 1432 |
+
],
|
| 1433 |
+
"page_idx": 7
|
| 1434 |
+
},
|
| 1435 |
+
{
|
| 1436 |
+
"type": "text",
|
| 1437 |
+
"text": "so,",
|
| 1438 |
+
"bbox": [
|
| 1439 |
+
114,
|
| 1440 |
+
357,
|
| 1441 |
+
139,
|
| 1442 |
+
369
|
| 1443 |
+
],
|
| 1444 |
+
"page_idx": 7
|
| 1445 |
+
},
|
| 1446 |
+
{
|
| 1447 |
+
"type": "equation",
|
| 1448 |
+
"text": "\n$$\nI (Y, \\mathcal {A}) \\geq I (Y, Z). \\tag {17}\n$$\n",
|
| 1449 |
+
"text_format": "latex",
|
| 1450 |
+
"bbox": [
|
| 1451 |
+
226,
|
| 1452 |
+
370,
|
| 1453 |
+
485,
|
| 1454 |
+
387
|
| 1455 |
+
],
|
| 1456 |
+
"page_idx": 7
|
| 1457 |
+
},
|
| 1458 |
+
{
|
| 1459 |
+
"type": "text",
|
| 1460 |
+
"text": "Next, since we have $X = \\{Z, X \\backslash Z\\}$ where $X \\backslash Z$ denotes the text derived from $X$ and unrelated to the rationale, so we can obtain mutual information between $X$ and $Y$ ,",
|
| 1461 |
+
"bbox": [
|
| 1462 |
+
112,
|
| 1463 |
+
388,
|
| 1464 |
+
485,
|
| 1465 |
+
450
|
| 1466 |
+
],
|
| 1467 |
+
"page_idx": 7
|
| 1468 |
+
},
|
| 1469 |
+
{
|
| 1470 |
+
"type": "equation",
|
| 1471 |
+
"text": "\n$$\n\\begin{array}{l} I (Y; X) = I (Y; \\{Z, X \\backslash Z \\}) \\\\ = I (Y; Z) + I (Y; X \\backslash Z | Z) \\tag {18} \\\\ \\end{array}\n$$\n",
|
| 1472 |
+
"text_format": "latex",
|
| 1473 |
+
"bbox": [
|
| 1474 |
+
191,
|
| 1475 |
+
461,
|
| 1476 |
+
485,
|
| 1477 |
+
497
|
| 1478 |
+
],
|
| 1479 |
+
"page_idx": 7
|
| 1480 |
+
},
|
| 1481 |
+
{
|
| 1482 |
+
"type": "text",
|
| 1483 |
+
"text": "According to the non-negativity of mutual information, we have $I(Y;X\\backslash Z|Z)\\geq 0$ , so",
|
| 1484 |
+
"bbox": [
|
| 1485 |
+
114,
|
| 1486 |
+
500,
|
| 1487 |
+
487,
|
| 1488 |
+
532
|
| 1489 |
+
],
|
| 1490 |
+
"page_idx": 7
|
| 1491 |
+
},
|
| 1492 |
+
{
|
| 1493 |
+
"type": "equation",
|
| 1494 |
+
"text": "\n$$\nI (Y, X) \\geq I (Y, Z) \\tag {19}\n$$\n",
|
| 1495 |
+
"text_format": "latex",
|
| 1496 |
+
"bbox": [
|
| 1497 |
+
228,
|
| 1498 |
+
543,
|
| 1499 |
+
485,
|
| 1500 |
+
560
|
| 1501 |
+
],
|
| 1502 |
+
"page_idx": 7
|
| 1503 |
+
},
|
| 1504 |
+
{
|
| 1505 |
+
"type": "text",
|
| 1506 |
+
"text": "Further, we denote $I(Y,X) = \\varepsilon_0 \\geq \\varepsilon_1 \\geq I(Y,Z) \\geq \\varepsilon_2$ , where $\\varepsilon_1$ and $\\varepsilon_2$ indicate the upper and lower bounds of $I(Y,Z)$ , respectively.",
|
| 1507 |
+
"bbox": [
|
| 1508 |
+
114,
|
| 1509 |
+
565,
|
| 1510 |
+
487,
|
| 1511 |
+
612
|
| 1512 |
+
],
|
| 1513 |
+
"page_idx": 7
|
| 1514 |
+
},
|
| 1515 |
+
{
|
| 1516 |
+
"type": "text",
|
| 1517 |
+
"text": "Therefore, we can obtain that when $\\mathcal{A} = X$ , the equation $I(Y,\\mathcal{A}) = \\varepsilon_0 \\geq \\varepsilon_1 \\geq I(Y,Z)$ is satisfied. That is to say, a solution for $\\mathcal{A}$ exists, and $X$ is a solution of $\\mathcal{A}$ .",
|
| 1518 |
+
"bbox": [
|
| 1519 |
+
112,
|
| 1520 |
+
614,
|
| 1521 |
+
487,
|
| 1522 |
+
675
|
| 1523 |
+
],
|
| 1524 |
+
"page_idx": 7
|
| 1525 |
+
},
|
| 1526 |
+
{
|
| 1527 |
+
"type": "text",
|
| 1528 |
+
"text": "The proof of Lemma 1 is completed.",
|
| 1529 |
+
"bbox": [
|
| 1530 |
+
132,
|
| 1531 |
+
678,
|
| 1532 |
+
406,
|
| 1533 |
+
694
|
| 1534 |
+
],
|
| 1535 |
+
"page_idx": 7
|
| 1536 |
+
},
|
| 1537 |
+
{
|
| 1538 |
+
"type": "text",
|
| 1539 |
+
"text": "B Experiment Details",
|
| 1540 |
+
"text_level": 1,
|
| 1541 |
+
"bbox": [
|
| 1542 |
+
114,
|
| 1543 |
+
705,
|
| 1544 |
+
319,
|
| 1545 |
+
721
|
| 1546 |
+
],
|
| 1547 |
+
"page_idx": 7
|
| 1548 |
+
},
|
| 1549 |
+
{
|
| 1550 |
+
"type": "text",
|
| 1551 |
+
"text": "B.1Baselines",
|
| 1552 |
+
"text_level": 1,
|
| 1553 |
+
"bbox": [
|
| 1554 |
+
114,
|
| 1555 |
+
727,
|
| 1556 |
+
236,
|
| 1557 |
+
740
|
| 1558 |
+
],
|
| 1559 |
+
"page_idx": 7
|
| 1560 |
+
},
|
| 1561 |
+
{
|
| 1562 |
+
"type": "text",
|
| 1563 |
+
"text": "We compare AGR with the following baselines: RNP (2016), a original RNP sampling method.",
|
| 1564 |
+
"bbox": [
|
| 1565 |
+
112,
|
| 1566 |
+
744,
|
| 1567 |
+
468,
|
| 1568 |
+
775
|
| 1569 |
+
],
|
| 1570 |
+
"page_idx": 7
|
| 1571 |
+
},
|
| 1572 |
+
{
|
| 1573 |
+
"type": "text",
|
| 1574 |
+
"text": "HardKuma (2019), a kumaraswamy-distribution-based sampling method.",
|
| 1575 |
+
"bbox": [
|
| 1576 |
+
112,
|
| 1577 |
+
776,
|
| 1578 |
+
487,
|
| 1579 |
+
807
|
| 1580 |
+
],
|
| 1581 |
+
"page_idx": 7
|
| 1582 |
+
},
|
| 1583 |
+
{
|
| 1584 |
+
"type": "text",
|
| 1585 |
+
"text": "CAR (2019), a game theoretic-based approach to class-dependent rationalization.",
|
| 1586 |
+
"bbox": [
|
| 1587 |
+
112,
|
| 1588 |
+
809,
|
| 1589 |
+
485,
|
| 1590 |
+
840
|
| 1591 |
+
],
|
| 1592 |
+
"page_idx": 7
|
| 1593 |
+
},
|
| 1594 |
+
{
|
| 1595 |
+
"type": "text",
|
| 1596 |
+
"text": "Information Bottleneck (IB) (2020), a model utilizing IB objective for balancing performance and rationale length.",
|
| 1597 |
+
"bbox": [
|
| 1598 |
+
112,
|
| 1599 |
+
841,
|
| 1600 |
+
487,
|
| 1601 |
+
888
|
| 1602 |
+
],
|
| 1603 |
+
"page_idx": 7
|
| 1604 |
+
},
|
| 1605 |
+
{
|
| 1606 |
+
"type": "text",
|
| 1607 |
+
"text": "INVRAT (2020), a method that introduces an environment-agnostic predictor.",
|
| 1608 |
+
"bbox": [
|
| 1609 |
+
112,
|
| 1610 |
+
889,
|
| 1611 |
+
485,
|
| 1612 |
+
921
|
| 1613 |
+
],
|
| 1614 |
+
"page_idx": 7
|
| 1615 |
+
},
|
| 1616 |
+
{
|
| 1617 |
+
"type": "table",
|
| 1618 |
+
"img_path": "images/8cd097cc081126349f0facf20c37ddafccd1c9b6cecc4f178d4b6d3b693ef2aa.jpg",
|
| 1619 |
+
"table_caption": [],
|
| 1620 |
+
"table_footnote": [],
|
| 1621 |
+
"table_body": "<table><tr><td rowspan=\"2\" colspan=\"2\">Datasets</td><td colspan=\"2\">Train</td><td colspan=\"2\">Dev</td><td colspan=\"2\">Annotation</td></tr><tr><td>Pos</td><td>Neg</td><td>Pos</td><td>Neg</td><td>Pos</td><td>Neg</td></tr><tr><td rowspan=\"3\">BeerAdvocate</td><td>Appearance</td><td>202385</td><td>12897</td><td>28488</td><td>1318</td><td>923</td><td>13</td></tr><tr><td>Aroma</td><td>172299</td><td>30564</td><td>24494</td><td>3396</td><td>848</td><td>29</td></tr><tr><td>Palate</td><td>176038</td><td>27639</td><td>24837</td><td>3203</td><td>785</td><td>20</td></tr><tr><td rowspan=\"3\">HotelReview</td><td>Location</td><td>7236</td><td>7236</td><td>906</td><td>906</td><td>104</td><td>96</td></tr><tr><td>Service</td><td>50742</td><td>50742</td><td>6344</td><td>6344</td><td>101</td><td>99</td></tr><tr><td>Cleanliness</td><td>75049</td><td>75049</td><td>9382</td><td>9382</td><td>99</td><td>101</td></tr></table>",
|
| 1622 |
+
"bbox": [
|
| 1623 |
+
515,
|
| 1624 |
+
80,
|
| 1625 |
+
878,
|
| 1626 |
+
166
|
| 1627 |
+
],
|
| 1628 |
+
"page_idx": 7
|
| 1629 |
+
},
|
| 1630 |
+
{
|
| 1631 |
+
"type": "text",
|
| 1632 |
+
"text": "Table 7: Statistics of datasets used in this paper.",
|
| 1633 |
+
"bbox": [
|
| 1634 |
+
531,
|
| 1635 |
+
175,
|
| 1636 |
+
855,
|
| 1637 |
+
190
|
| 1638 |
+
],
|
| 1639 |
+
"page_idx": 7
|
| 1640 |
+
},
|
| 1641 |
+
{
|
| 1642 |
+
"type": "text",
|
| 1643 |
+
"text": "DMR (2021), which proposes a teacher-student distillation framework to align input distribution.",
|
| 1644 |
+
"bbox": [
|
| 1645 |
+
507,
|
| 1646 |
+
199,
|
| 1647 |
+
880,
|
| 1648 |
+
231
|
| 1649 |
+
],
|
| 1650 |
+
"page_idx": 7
|
| 1651 |
+
},
|
| 1652 |
+
{
|
| 1653 |
+
"type": "text",
|
| 1654 |
+
"text": "A2R (2021), a method that introducing a soft rationale to predictor.",
|
| 1655 |
+
"bbox": [
|
| 1656 |
+
507,
|
| 1657 |
+
231,
|
| 1658 |
+
882,
|
| 1659 |
+
263
|
| 1660 |
+
],
|
| 1661 |
+
"page_idx": 7
|
| 1662 |
+
},
|
| 1663 |
+
{
|
| 1664 |
+
"type": "text",
|
| 1665 |
+
"text": "DARE (2022), which introduces a guider into predictor to encapsulate more information from the input.",
|
| 1666 |
+
"bbox": [
|
| 1667 |
+
507,
|
| 1668 |
+
263,
|
| 1669 |
+
882,
|
| 1670 |
+
311
|
| 1671 |
+
],
|
| 1672 |
+
"page_idx": 7
|
| 1673 |
+
},
|
| 1674 |
+
{
|
| 1675 |
+
"type": "text",
|
| 1676 |
+
"text": "FR (2022), a method using a unified encoder for generator and predictor.",
|
| 1677 |
+
"bbox": [
|
| 1678 |
+
507,
|
| 1679 |
+
312,
|
| 1680 |
+
880,
|
| 1681 |
+
344
|
| 1682 |
+
],
|
| 1683 |
+
"page_idx": 7
|
| 1684 |
+
},
|
| 1685 |
+
{
|
| 1686 |
+
"type": "text",
|
| 1687 |
+
"text": "Inter-RAT (2023), which develops an interventional rationalization to discover the causal rationales.",
|
| 1688 |
+
"bbox": [
|
| 1689 |
+
507,
|
| 1690 |
+
344,
|
| 1691 |
+
882,
|
| 1692 |
+
391
|
| 1693 |
+
],
|
| 1694 |
+
"page_idx": 7
|
| 1695 |
+
},
|
| 1696 |
+
{
|
| 1697 |
+
"type": "text",
|
| 1698 |
+
"text": "MGR (2023b), a method leveraging multiple generators to select rationales.",
|
| 1699 |
+
"bbox": [
|
| 1700 |
+
507,
|
| 1701 |
+
393,
|
| 1702 |
+
882,
|
| 1703 |
+
423
|
| 1704 |
+
],
|
| 1705 |
+
"page_idx": 7
|
| 1706 |
+
},
|
| 1707 |
+
{
|
| 1708 |
+
"type": "text",
|
| 1709 |
+
"text": "B.2 Datasets",
|
| 1710 |
+
"text_level": 1,
|
| 1711 |
+
"bbox": [
|
| 1712 |
+
509,
|
| 1713 |
+
436,
|
| 1714 |
+
626,
|
| 1715 |
+
449
|
| 1716 |
+
],
|
| 1717 |
+
"page_idx": 7
|
| 1718 |
+
},
|
| 1719 |
+
{
|
| 1720 |
+
"type": "text",
|
| 1721 |
+
"text": "Following previous research (Huang et al., 2021; Yue et al., 2023; Liu et al., 2023b), we obtain BeerAdvocate and HotelReview datasets. BeerAdvocate (McAuley et al., 2012) and HotelReview (Wang et al., 2010) are publicly available from existing work. As shown in Table 7, the specific splitting details of the two datasets are presented.",
|
| 1722 |
+
"bbox": [
|
| 1723 |
+
507,
|
| 1724 |
+
454,
|
| 1725 |
+
882,
|
| 1726 |
+
565
|
| 1727 |
+
],
|
| 1728 |
+
"page_idx": 7
|
| 1729 |
+
},
|
| 1730 |
+
{
|
| 1731 |
+
"type": "text",
|
| 1732 |
+
"text": "B.3 Implementation",
|
| 1733 |
+
"text_level": 1,
|
| 1734 |
+
"bbox": [
|
| 1735 |
+
509,
|
| 1736 |
+
577,
|
| 1737 |
+
685,
|
| 1738 |
+
590
|
| 1739 |
+
],
|
| 1740 |
+
"page_idx": 7
|
| 1741 |
+
},
|
| 1742 |
+
{
|
| 1743 |
+
"type": "text",
|
| 1744 |
+
"text": "To fairly compare with previous works and validate the effectiveness of the approach proposed, we utilize the 100-dimension Glove (Pennington et al., 2014) as the word embedding and the 200-dimension GRUs (Cho et al., 2014) encoder to build the generator $f_{G}(\\cdot)$ in the AGR architecture. Further generator $f_{G}(\\cdot)$ follows Equation 1 for cooperative optimization with predictor $f_{P}(\\cdot)$ . Meanwhile, we construct the policy network $q_{\\phi}(\\cdot)$ to collaborate with the generator $f_{G}(\\cdot)$ and predictor $f_{P}(\\cdot)$ to learn candidate actions in different training states, including the representation learning of action candidates and the sampling of actions. We use Adam (Kingma and Ba, 2015) as the optimizer.",
|
| 1745 |
+
"bbox": [
|
| 1746 |
+
507,
|
| 1747 |
+
596,
|
| 1748 |
+
884,
|
| 1749 |
+
820
|
| 1750 |
+
],
|
| 1751 |
+
"page_idx": 7
|
| 1752 |
+
},
|
| 1753 |
+
{
|
| 1754 |
+
"type": "text",
|
| 1755 |
+
"text": "C Additional Examples",
|
| 1756 |
+
"text_level": 1,
|
| 1757 |
+
"bbox": [
|
| 1758 |
+
507,
|
| 1759 |
+
832,
|
| 1760 |
+
729,
|
| 1761 |
+
848
|
| 1762 |
+
],
|
| 1763 |
+
"page_idx": 7
|
| 1764 |
+
},
|
| 1765 |
+
{
|
| 1766 |
+
"type": "text",
|
| 1767 |
+
"text": "As shown in Table 8, we provide more examples of selected rationale from the Beer-Aroma and Hotel-Location two aspects, where their sparsity is set to be about $20\\%$ and $10\\%$ , respectively.",
|
| 1768 |
+
"bbox": [
|
| 1769 |
+
507,
|
| 1770 |
+
854,
|
| 1771 |
+
882,
|
| 1772 |
+
919
|
| 1773 |
+
],
|
| 1774 |
+
"page_idx": 7
|
| 1775 |
+
},
|
| 1776 |
+
{
|
| 1777 |
+
"type": "page_number",
|
| 1778 |
+
"text": "517",
|
| 1779 |
+
"bbox": [
|
| 1780 |
+
485,
|
| 1781 |
+
928,
|
| 1782 |
+
515,
|
| 1783 |
+
940
|
| 1784 |
+
],
|
| 1785 |
+
"page_idx": 7
|
| 1786 |
+
},
|
| 1787 |
+
{
|
| 1788 |
+
"type": "table",
|
| 1789 |
+
"img_path": "images/43c1f97f18bb6b6b1b72c893988edc0312206b9521590d7ffd121d2cf9eff526.jpg",
|
| 1790 |
+
"table_caption": [
|
| 1791 |
+
"Table 8: Examples of generated rationales. Human-annotated rationales are underlined. Rationales from three models are highlighted in blue, respectively."
|
| 1792 |
+
],
|
| 1793 |
+
"table_footnote": [],
|
| 1794 |
+
"table_body": "<table><tr><td>FR (2022)</td><td>MGR (2023b)</td><td>AGR (Ours)</td></tr><tr><td>Aspect: Beer-Aroma\nLabel: Positive, Pred: Positive\nText: had this at bocktown with wvbeergeek and jasonm , came in a 750ml caged and corked the corked banged out of sight as soon as the cage was undone .seved into a tulip glass between the 3 of us hazy , deep copper , mahagony , hard to get a really good look at the color at bocktown . off white head hard to pour without a glass full of fluffy everlasting head . left lot of thick webbing all over the inside of the glass , sticky looking . great aroma can't seem to keep it away from the nose\n. sweet , dark , tart fruit notes , some sour cherry , earthy , spicy , with hints of currants , clove , allspice also nutty , with some belgium yeast . lots of sweet booziness from the start , vinious , dark fruityness with plum notes . the fruityness was remisent of dried fruit.lots of spicyness lots of clove.also nutty and earthy . finished clean , spicy and very sugary . syrupy , big full mouthfeel , smooth and very creamy with lots of jucyness . a beer to sip , but very enjoyable , wish i had the whole bottle to drink would be no problem . a must try beer if you like this style . seems like a beer that would age very well .</td><td>Aspect: Beer-Aroma\nLabel: Positive, Pred: Positive\nText: had this at bocktown with wvbeergeek and jasonm , came in a 750ml caged and corked the corked banged out of sight as soon as the cage was undone . .seved into a tulip glass between the 3 of us hazy , deep copper , mahagony , hard to get a really good look at the color at bocktown . off white head hard to pour without a glass full of fluffy everlasting head . left lot of thick webbing all over the inside of the glass , sticky looking . great aroma can't seem to keep it away from the nose\n. sweet , dark , tart fruit notes , some sour cherry , earthy . spicy , with hints of currants , clove , allspice also nutty , with some belgium yeast . lots of sweet booziness from the start , vinious , dark fruityness with plum notes . the fruityness was remisent of dried fruit.lots of spicyness lots of clove.also nutty and earthy . finished clean , spicy and very sugary . syrupy , big full mouthfeel , smooth and very creamy with lots ofjucyness . a beer to sip , but very enjoyable , wish i had the whole bottle to drink would be no problem . a must try beer if you like this style . seems like a beer that would age very well .</td><td>Aspect: Beer-Aroma\nLabel: Positive, Pred: Positive\nText: had this at bocktown with wvbeergeek and jasonm , came in a 750ml caged and corked the corked banged out of light as soon as the cage was undone . .seved into a tulip glass between the 3 of us hazy , deep copper , mahagony , hard to get a really good look at the color at bocktown . off white head hard to pour without a glass full of fluffy everlasting head . left lot of thick webbing all over the inside of the glass , sticky looking . great aroma can't seem to keep it away from the nose\n. sweet , dark , tart fruitNotes , some sour cherry , earthy , spicy , with hints of currants , clove , allspice also nutty , with some belgium yeast . lots of sweet booziness from the start , vinious , dark fruityness with plum notes . the fruityness was remisent of dried fruit.lots of spicyness lots of clove.also nutty and earthy . finished clean , spicy and very sugary . syrupy , big full mouthfeel, smooth and very creamy with lots ofjucyness . 
a beer to sip , but very enjoyable , wish i had the whole bottle to drink would be no problem . a must try beer if you like this style . seems like a beer that would age very well .</td></tr><tr><td>Aspect: Hotel-Location\nLabel: Negative, Pred: Negative\nText: we stayed at the dona palace for 3 nights and while the location is central , it is also more crowded and noisy . the windows of the room we stayed in did not have adequate sound proofing , noise from the canal and outside would wake us up early in the morning . the breakfast was a nice bonus though , the two waitresses serving the room were always gracious and helpful . the front desk personnel however were rude and abrupt , so that was n't pleasant to deal with . the rooms are dated and had a musty smell . the bed was uncomfortable , blankets were rough , and the shower drain did not work very well . overall , i probably wound not stay here again .</td><td>Aspect: Hotel-Location\nLabel: Negative, Pred: Negative\nText: we stayed at the dona palace for 3 nights and while the location is central , it is also more crowded and noisy . the windows of the room we stayed in did not have adequate sound proofing , noise from the canal and outside would wake us up early in the morning . the breakfast was a nice bonus though , the two waitresses serving the room were always gracious and helpful . the front desk personnel However were rude and abrupt , so that was n't pleasant to deal with . the rooms are dated and had a musty smell . the bed was uncomfortable , blankets were rough , and the shower drain did not work very well . overall , i probably wound not stay here again .</td><td>Aspect: Hotel-Location\nLabel: Negative, Pred: Negative\nText: we stayed at the dona palace for 3 nights and while the location is central , it is also more crowded and noisy . the windowsof the room we stayed in did not have adequate sound proofing , noise from the canal and outside would wake us up early in the morning . the breakfast was a nice bonus though , the two waitresses serving the room were always gracious and helpful . the front desk personnel however were rude and abrupt , so that was n't pleasant to deal with . the rooms are dated and had a musty smell . the bed was uncomfortable , blankets were rough , and the shower drain did not work very well . Overall , i probably wound not stay here again .</td></tr></table>",
|
| 1795 |
+
"bbox": [
|
| 1796 |
+
119,
|
| 1797 |
+
191,
|
| 1798 |
+
878,
|
| 1799 |
+
846
|
| 1800 |
+
],
|
| 1801 |
+
"page_idx": 8
|
| 1802 |
+
},
|
| 1803 |
+
{
|
| 1804 |
+
"type": "page_number",
|
| 1805 |
+
"text": "518",
|
| 1806 |
+
"bbox": [
|
| 1807 |
+
485,
|
| 1808 |
+
928,
|
| 1809 |
+
515,
|
| 1810 |
+
940
|
| 1811 |
+
],
|
| 1812 |
+
"page_idx": 8
|
| 1813 |
+
}
|
| 1814 |
+
]
|
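The records above share a small, regular schema: each block carries a "type" (text, table, image, page_number), an optional "text" or "table_body", a "text_level" flag for headings, a "bbox", and a "page_idx". Below is a minimal Python sketch, not part of the committed data, showing one way such a content-list file might be consumed; the file path, page size, and helper names are illustrative assumptions, and the bbox handling assumes content-list coordinates are absolute page coordinates while the companion model.json files added below store them as 0-1 fractions.

```python
# Illustrative sketch only: field names (type, text, text_level, bbox, page_idx)
# come from the records above; the path and page size are assumptions.
import json

def load_blocks(path):
    # A content-list file is a flat JSON array of layout blocks.
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def headings(blocks):
    # Heading blocks are flagged with "text_level": 1 in the data above.
    return [b["text"] for b in blocks if b.get("text_level") == 1]

def page_text(blocks, page_idx):
    # Concatenate the text blocks (including headings) of one page, in order.
    return "\n".join(
        b["text"] for b in blocks
        if b.get("type") == "text" and b.get("page_idx") == page_idx
    )

def scale_bbox(norm_bbox, page_w, page_h):
    # The *_model.json files added below store bboxes as 0-1 fractions of the
    # page; this converts them to absolute coordinates (assumed convention).
    x0, y0, x1, y1 = norm_bbox
    return [x0 * page_w, y0 * page_h, x1 * page_w, y1 * page_h]

if __name__ == "__main__":
    # Hypothetical path; point it at any *_content_list.json from this commit.
    blocks = load_blocks("path/to/paper_content_list.json")
    print(headings(blocks))        # e.g. ["B Experiment Details", "B.1 Baselines", ...]
    print(page_text(blocks, 7)[:300])
```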
2024/AGR_ Reinforced Causal Agent-Guided Self-explaining Rationalization/19a5d2a6-e0ae-4231-a1c8-5819b81e5755_model.json
ADDED
|
@@ -0,0 +1,2143 @@
| 1 |
+
[
|
| 2 |
+
[
|
| 3 |
+
{
|
| 4 |
+
"type": "title",
|
| 5 |
+
"bbox": [
|
| 6 |
+
0.13,
|
| 7 |
+
0.091,
|
| 8 |
+
0.868,
|
| 9 |
+
0.112
|
| 10 |
+
],
|
| 11 |
+
"angle": 0,
|
| 12 |
+
"content": "AGR: Reinforced Causal Agent-Guided Self-explaining Rationalization"
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"bbox": [
|
| 17 |
+
0.2,
|
| 18 |
+
0.129,
|
| 19 |
+
0.796,
|
| 20 |
+
0.146
|
| 21 |
+
],
|
| 22 |
+
"angle": 0,
|
| 23 |
+
"content": "Yunxiao Zhao\\(^{1}\\), Zhiqiang Wang\\(^{1,2*}\\), Xiaoli Li\\(^{3}\\), Jiye Liang\\(^{1,2}\\), Ru Li\\(^{1,2*}\\)"
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"bbox": [
|
| 28 |
+
0.145,
|
| 29 |
+
0.147,
|
| 30 |
+
0.852,
|
| 31 |
+
0.162
|
| 32 |
+
],
|
| 33 |
+
"angle": 0,
|
| 34 |
+
"content": "1. School of Computer and Information Technology, Shanxi University, Taiyuan, China"
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"bbox": [
|
| 39 |
+
0.149,
|
| 40 |
+
0.163,
|
| 41 |
+
0.846,
|
| 42 |
+
0.195
|
| 43 |
+
],
|
| 44 |
+
"angle": 0,
|
| 45 |
+
"content": "2. Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, Shanxi University, Taiyuan, China"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"bbox": [
|
| 50 |
+
0.186,
|
| 51 |
+
0.196,
|
| 52 |
+
0.81,
|
| 53 |
+
0.23
|
| 54 |
+
],
|
| 55 |
+
"angle": 0,
|
| 56 |
+
"content": "3. Institute for Infocomm Research, A*Star, Singapore yunxiaomr@163.com, {wangzq,ljy,liru}@sxu.edu.cn, xlli@ntu.edu.sg"
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "list",
|
| 60 |
+
"bbox": [
|
| 61 |
+
0.145,
|
| 62 |
+
0.147,
|
| 63 |
+
0.852,
|
| 64 |
+
0.23
|
| 65 |
+
],
|
| 66 |
+
"angle": 0,
|
| 67 |
+
"content": null
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "title",
|
| 71 |
+
"bbox": [
|
| 72 |
+
0.261,
|
| 73 |
+
0.261,
|
| 74 |
+
0.341,
|
| 75 |
+
0.277
|
| 76 |
+
],
|
| 77 |
+
"angle": 0,
|
| 78 |
+
"content": "Abstract"
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"bbox": [
|
| 83 |
+
0.142,
|
| 84 |
+
0.287,
|
| 85 |
+
0.461,
|
| 86 |
+
0.557
|
| 87 |
+
],
|
| 88 |
+
"angle": 0,
|
| 89 |
+
"content": "Most existing rationalization approaches are susceptible to degeneration accumulation due to a lack of effective control over the learning direction of the model during training. To address this issue, we propose a novel approach AGR (Agent-Guided Rationalization), guiding the next action of the model based on its current training state. Specifically, we introduce causal intervention calculus to quantify the causal effects inherent during rationale training, and utilize reinforcement learning process to refine the learning bias of them. Furthermore, we pretrain an agent within this reinforced causal environment to guide the next step of the model. We theoretically demonstrate that a good model needs the desired guidance, and empirically show the effectiveness of our approach, outperforming existing state-of-the-art methods on BeerAdvocate and HotelReview datasets."
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "title",
|
| 93 |
+
"bbox": [
|
| 94 |
+
0.115,
|
| 95 |
+
0.568,
|
| 96 |
+
0.26,
|
| 97 |
+
0.584
|
| 98 |
+
],
|
| 99 |
+
"angle": 0,
|
| 100 |
+
"content": "1 Introduction"
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"type": "text",
|
| 104 |
+
"bbox": [
|
| 105 |
+
0.113,
|
| 106 |
+
0.594,
|
| 107 |
+
0.49,
|
| 108 |
+
0.786
|
| 109 |
+
],
|
| 110 |
+
"angle": 0,
|
| 111 |
+
"content": "To explain the prediction of neural networks, selective rationalization task (Lei et al., 2016; Yu et al., 2019, 2021) has been studied in recent years. As shown in Figure 1, it aims to select a small and human-intelligible subset (i.e., rationale) from the input to support and explain the prediction results when yielding them. As an interpretable diagram, rationalization holds significant potential for elucidating the decision-making process of predictive models, building trust, and deriving insightful and pertinent insights (Yuan et al., 2020; Zhang et al., 2023; Deng et al., 2023)."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"type": "text",
|
| 115 |
+
"bbox": [
|
| 116 |
+
0.113,
|
| 117 |
+
0.788,
|
| 118 |
+
0.49,
|
| 119 |
+
0.9
|
| 120 |
+
],
|
| 121 |
+
"angle": 0,
|
| 122 |
+
"content": "Various approaches have been proposed for rationalization, spanning from early rationale sampling-based methods (Bao et al., 2018; Bastings et al., 2019; Paranjape et al., 2020) to the extra-component-based methods (De Cao et al., 2020; Huang et al., 2021; Yu et al., 2021; Liu et al., 2022; Yue et al., 2022; Liu et al., 2023a). These"
|
| 123 |
+
},
|
| 124 |
+
{
|
| 125 |
+
"type": "image",
|
| 126 |
+
"bbox": [
|
| 127 |
+
0.513,
|
| 128 |
+
0.259,
|
| 129 |
+
0.88,
|
| 130 |
+
0.318
|
| 131 |
+
],
|
| 132 |
+
"angle": 0,
|
| 133 |
+
"content": null
|
| 134 |
+
},
|
| 135 |
+
{
|
| 136 |
+
"type": "image_caption",
|
| 137 |
+
"bbox": [
|
| 138 |
+
0.509,
|
| 139 |
+
0.326,
|
| 140 |
+
0.884,
|
| 141 |
+
0.385
|
| 142 |
+
],
|
| 143 |
+
"angle": 0,
|
| 144 |
+
"content": "Figure 1: The standard selective rationalization, where \\( X, Z, \\hat{Y}, Y \\) represent the input text, rationale, prediction and the groundtruth label, respectively. The red text indicates the small and human-intelligible subset."
|
| 145 |
+
},
|
| 146 |
+
{
|
| 147 |
+
"type": "text",
|
| 148 |
+
"bbox": [
|
| 149 |
+
0.508,
|
| 150 |
+
0.395,
|
| 151 |
+
0.885,
|
| 152 |
+
0.572
|
| 153 |
+
],
|
| 154 |
+
"angle": 0,
|
| 155 |
+
"content": "methods predominantly concentrate on improving the performance of rationalization models by either refining the sampling directly or aligning additional information beyond the rationale, resulting in impressive results. However, to the best of our knowledge, the current methods are prone to degeneration accumulation since they usually do not discern whether the generator during training has produced unmeaningful or flawed rationales; instead, they directly pass them to the predictor even if generated rationales are degraded."
|
| 156 |
+
},
|
| 157 |
+
{
|
| 158 |
+
"type": "text",
|
| 159 |
+
"bbox": [
|
| 160 |
+
0.508,
|
| 161 |
+
0.572,
|
| 162 |
+
0.884,
|
| 163 |
+
0.749
|
| 164 |
+
],
|
| 165 |
+
"angle": 0,
|
| 166 |
+
"content": "For instance, the underlined rationale in Figure 1 is degraded, as the word appearance alone does not reliably determine the sentiment polarity of input \\( X \\). But the predictor overfits to this uninformative rationale and classifies the sentiment according to whether \"appearance\" is included in the rationale. Consequently, when the predictor receives degraded rationales, it steers the model towards an undesirable direction (aka., learning bias). Thus, optimizing this bias during training is crucial for ensuring the model's generalization performance."
|
| 167 |
+
},
|
| 168 |
+
{
|
| 169 |
+
"type": "text",
|
| 170 |
+
"bbox": [
|
| 171 |
+
0.508,
|
| 172 |
+
0.75,
|
| 173 |
+
0.885,
|
| 174 |
+
0.863
|
| 175 |
+
],
|
| 176 |
+
"angle": 0,
|
| 177 |
+
"content": "The proposed methods (Chang et al., 2020; Zhang et al., 2023; Yue et al., 2023) fall short in considering rationalization optimization comprehensively, neglecting existing causality during rationale learning. Although they often employ causal theory to uncover relationships between rationale pieces, they struggle to directly optimize"
|
| 178 |
+
},
|
| 179 |
+
{
|
| 180 |
+
"type": "page_footnote",
|
| 181 |
+
"bbox": [
|
| 182 |
+
0.508,
|
| 183 |
+
0.872,
|
| 184 |
+
0.887,
|
| 185 |
+
0.922
|
| 186 |
+
],
|
| 187 |
+
"angle": 0,
|
| 188 |
+
"content": "Degeneration over rationalization is a highly challenging problem, which means the predictor may overfit to meaningless rationales generated by the not yet well-trained generator (Yu et al., 2019; Liu et al., 2023b,d)."
|
| 189 |
+
},
|
| 190 |
+
{
|
| 191 |
+
"type": "page_footnote",
|
| 192 |
+
"bbox": [
|
| 193 |
+
0.137,
|
| 194 |
+
0.906,
|
| 195 |
+
0.28,
|
| 196 |
+
0.921
|
| 197 |
+
],
|
| 198 |
+
"angle": 0,
|
| 199 |
+
"content": "* Corresponding author"
|
| 200 |
+
},
|
| 201 |
+
{
|
| 202 |
+
"type": "page_number",
|
| 203 |
+
"bbox": [
|
| 204 |
+
0.486,
|
| 205 |
+
0.928,
|
| 206 |
+
0.517,
|
| 207 |
+
0.941
|
| 208 |
+
],
|
| 209 |
+
"angle": 0,
|
| 210 |
+
"content": "510"
|
| 211 |
+
},
|
| 212 |
+
{
|
| 213 |
+
"type": "footer",
|
| 214 |
+
"bbox": [
|
| 215 |
+
0.097,
|
| 216 |
+
0.946,
|
| 217 |
+
0.902,
|
| 218 |
+
0.974
|
| 219 |
+
],
|
| 220 |
+
"angle": 0,
|
| 221 |
+
"content": "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 510-518 August 11-16, 2024 ©2024 Association for Computational Linguistics"
|
| 222 |
+
}
|
| 223 |
+
],
|
| 224 |
+
[
|
| 225 |
+
{
|
| 226 |
+
"type": "text",
|
| 227 |
+
"bbox": [
|
| 228 |
+
0.113,
|
| 229 |
+
0.085,
|
| 230 |
+
0.49,
|
| 231 |
+
0.261
|
| 232 |
+
],
|
| 233 |
+
"angle": 0,
|
| 234 |
+
"content": "the cooperative game dynamics between the generator and predictor during training. As shown in Figure 1, optimizing rationale from \"appearance\" to \"appearance: light yellow to almost clear\" necessitates evaluating the causal impact on target prediction, guiding the model's subsequent optimization. Thus, if we could construct a guiding signal to reward or penalize the learning behavior of the model, this would significantly reduce the model's learning bias during training, alleviating the problem of degeneration accumulation."
|
| 235 |
+
},
|
| 236 |
+
{
|
| 237 |
+
"type": "text",
|
| 238 |
+
"bbox": [
|
| 239 |
+
0.117,
|
| 240 |
+
0.262,
|
| 241 |
+
0.49,
|
| 242 |
+
0.518
|
| 243 |
+
],
|
| 244 |
+
"angle": 0,
|
| 245 |
+
"content": "To address the above problems, we propose a novel rationalization method named AGR (Agent-Guided Rationalization), which leverages a reinforced causal agent to guide the cooperative game optimization during rationale training, as shown in Figure 2. In particular, 1) we quantify the causal effects in the rationale optimization process, and design a reinforcement learning (RL) process (e.g., Markov decision) to refine the learning bias during training. 2) We further pretrain an agent within reinforced causal environment to guide next actions by a system of rewards. We also theoretically illustrate that a robust model needs the desired guidance. 3) Experimental results demonstrate the effectiveness of our approach, surpassing state-of-the-art methods on BeerAdvocate and HotelReview datasets."
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"type": "title",
|
| 249 |
+
"bbox": [
|
| 250 |
+
0.114,
|
| 251 |
+
0.531,
|
| 252 |
+
0.338,
|
| 253 |
+
0.545
|
| 254 |
+
],
|
| 255 |
+
"angle": 0,
|
| 256 |
+
"content": "2 Problem Formulation"
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"type": "text",
|
| 260 |
+
"bbox": [
|
| 261 |
+
0.113,
|
| 262 |
+
0.555,
|
| 263 |
+
0.49,
|
| 264 |
+
0.683
|
| 265 |
+
],
|
| 266 |
+
"angle": 0,
|
| 267 |
+
"content": "Notation. Following previous research (Liu et al., 2023b,c,d), we consider the classification problem and denote the generator and predictor as \\( f_{G}(\\cdot) \\) and \\( f_{P}(\\cdot) \\), with \\( \\theta_{g} \\) and \\( \\theta_{p} \\) representing their parameters. The input text \\( X = [x_{1}, x_{2}, \\dots, x_{l}] (1 \\leq i \\leq l) \\) consists of tokens \\( x_{i} \\), where \\( l \\) is the number of tokens. The label of \\( X \\) is a one-hot vector \\( Y \\in \\{0,1\\}^{c} \\), where \\( c \\) is the number of categories."
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"type": "text",
|
| 271 |
+
"bbox": [
|
| 272 |
+
0.114,
|
| 273 |
+
0.684,
|
| 274 |
+
0.49,
|
| 275 |
+
0.813
|
| 276 |
+
],
|
| 277 |
+
"angle": 0,
|
| 278 |
+
"content": "Cooperative game for rationalization. The \\( f_{G}(\\cdot) \\) selects the most informative pieces from \\( X \\) by a sequence of binary mask \\( M = [m_1, \\dots, m_l] \\in \\{0, 1\\}^l \\). Then, it forms the rationale \\( Z = M \\odot X = [m_1 x_1, m_2 x_2, \\dots, m_l x_l] \\), where the informativeness of \\( Z \\) is measured by the negative cross entropy \\( -H(Y, \\hat{Y}) \\). Consequently, the \\( f_{G}(\\cdot) \\) and \\( f_{P}(\\cdot) \\) are optimized cooperatively by"
|
| 279 |
+
},
|
| 280 |
+
{
|
| 281 |
+
"type": "equation",
|
| 282 |
+
"bbox": [
|
| 283 |
+
0.122,
|
| 284 |
+
0.823,
|
| 285 |
+
0.488,
|
| 286 |
+
0.85
|
| 287 |
+
],
|
| 288 |
+
"angle": 0,
|
| 289 |
+
"content": "\\[\n\\min _ {\\theta_ {g}, \\theta_ {p}} \\mathcal {H} (Y, \\hat {Y} \\mid f _ {G} (X)), s. t. \\hat {Y} = f _ {P} \\left(f _ {G} (X)\\right). \\tag {1}\n\\]"
|
| 290 |
+
},
|
| 291 |
+
{
|
| 292 |
+
"type": "text",
|
| 293 |
+
"bbox": [
|
| 294 |
+
0.113,
|
| 295 |
+
0.854,
|
| 296 |
+
0.49,
|
| 297 |
+
0.922
|
| 298 |
+
],
|
| 299 |
+
"angle": 0,
|
| 300 |
+
"content": "In addition, rationales are usually constrained by compact and coherent regularization terms \\(\\Omega(M) = \\lambda_1\\left|\\frac{\\|M\\|_1}{l} - s\\right| + \\lambda_2\\sum_t|m_t - m_{t-1}|\\) (Chang et al., 2020), where \\(s\\) is a pre-defined sparsity level."
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"type": "image",
|
| 304 |
+
"bbox": [
|
| 305 |
+
0.516,
|
| 306 |
+
0.081,
|
| 307 |
+
0.88,
|
| 308 |
+
0.25
|
| 309 |
+
],
|
| 310 |
+
"angle": 0,
|
| 311 |
+
"content": null
|
| 312 |
+
},
|
| 313 |
+
{
|
| 314 |
+
"type": "image_caption",
|
| 315 |
+
"bbox": [
|
| 316 |
+
0.508,
|
| 317 |
+
0.26,
|
| 318 |
+
0.884,
|
| 319 |
+
0.305
|
| 320 |
+
],
|
| 321 |
+
"angle": 0,
|
| 322 |
+
"content": "Figure 2: The architecture of AGR. \\( X \\) and \\( \\hat{Y} \\) are the input and output. \\( S_{i} \\) is the \\( i \\)-th update state of rationale, while \\( \\widetilde{S}_{i} \\) is the state after guidance by the agent."
|
| 323 |
+
},
|
| 324 |
+
{
|
| 325 |
+
"type": "title",
|
| 326 |
+
"bbox": [
|
| 327 |
+
0.509,
|
| 328 |
+
0.315,
|
| 329 |
+
0.761,
|
| 330 |
+
0.332
|
| 331 |
+
],
|
| 332 |
+
"angle": 0,
|
| 333 |
+
"content": "3 Reinforced Causal Agent"
|
| 334 |
+
},
|
| 335 |
+
{
|
| 336 |
+
"type": "text",
|
| 337 |
+
"bbox": [
|
| 338 |
+
0.508,
|
| 339 |
+
0.34,
|
| 340 |
+
0.884,
|
| 341 |
+
0.388
|
| 342 |
+
],
|
| 343 |
+
"angle": 0,
|
| 344 |
+
"content": "In this section, we present our reinforced causal agent, considering both causal effect and learning bias of degeneration during rationale training."
|
| 345 |
+
},
|
| 346 |
+
{
|
| 347 |
+
"type": "title",
|
| 348 |
+
"bbox": [
|
| 349 |
+
0.509,
|
| 350 |
+
0.399,
|
| 351 |
+
0.785,
|
| 352 |
+
0.414
|
| 353 |
+
],
|
| 354 |
+
"angle": 0,
|
| 355 |
+
"content": "3.1 Rationale Causal Attribution"
|
| 356 |
+
},
|
| 357 |
+
{
|
| 358 |
+
"type": "text",
|
| 359 |
+
"bbox": [
|
| 360 |
+
0.508,
|
| 361 |
+
0.42,
|
| 362 |
+
0.884,
|
| 363 |
+
0.451
|
| 364 |
+
],
|
| 365 |
+
"angle": 0,
|
| 366 |
+
"content": "Formally, we construct a rationale \\(\\mathcal{Z}_k^*\\) by maximizing an attribution metric \\(A(\\cdot)\\) in rationalization"
|
| 367 |
+
},
|
| 368 |
+
{
|
| 369 |
+
"type": "equation",
|
| 370 |
+
"bbox": [
|
| 371 |
+
0.575,
|
| 372 |
+
0.468,
|
| 373 |
+
0.884,
|
| 374 |
+
0.486
|
| 375 |
+
],
|
| 376 |
+
"angle": 0,
|
| 377 |
+
"content": "\\[\n\\mathcal {Z} _ {K} ^ {*} = \\arg \\max _ {\\mathcal {Z} _ {K} \\subseteq X} A \\left(\\mathcal {Z} _ {K} | \\hat {y} _ {c}\\right), \\tag {2}\n\\]"
|
| 378 |
+
},
|
| 379 |
+
{
|
| 380 |
+
"type": "text",
|
| 381 |
+
"bbox": [
|
| 382 |
+
0.508,
|
| 383 |
+
0.492,
|
| 384 |
+
0.884,
|
| 385 |
+
0.525
|
| 386 |
+
],
|
| 387 |
+
"angle": 0,
|
| 388 |
+
"content": "where \\(A(\\cdot)\\) measures the contribution of each candidate \\(\\mathcal{Z}_K\\) to the target prediction \\(\\hat{y}_c\\)."
|
| 389 |
+
},
|
| 390 |
+
{
|
| 391 |
+
"type": "text",
|
| 392 |
+
"bbox": [
|
| 393 |
+
0.508,
|
| 394 |
+
0.525,
|
| 395 |
+
0.884,
|
| 396 |
+
0.605
|
| 397 |
+
],
|
| 398 |
+
"angle": 0,
|
| 399 |
+
"content": "However, \\(A(\\mathcal{Z}_K|\\hat{y}_c)\\) needs to be quantified. To this end, we introduce causal intervention calculus \\(do(\\cdot)\\), including \\(do(Z = Z_K)\\) and \\(do(Z = \\emptyset)\\) (Pearl, 2009; Pearl et al., 2016), and reformulate the causal contribution from \\(\\varnothing\\) to \\(\\mathcal{Z}_K\\) by mutual information,"
|
| 400 |
+
},
|
| 401 |
+
{
|
| 402 |
+
"type": "equation",
|
| 403 |
+
"bbox": [
|
| 404 |
+
0.52,
|
| 405 |
+
0.618,
|
| 406 |
+
0.882,
|
| 407 |
+
0.635
|
| 408 |
+
],
|
| 409 |
+
"angle": 0,
|
| 410 |
+
"content": "\\[\nA \\left(\\mathcal {Z} _ {K} \\left| \\hat {y} _ {c}\\right) = I \\left(\\hat {y} _ {c}, d o \\left(\\mathcal {Z} _ {K}\\right)\\right) - I \\left(\\hat {y} _ {c}, d o (\\emptyset)\\right). \\right. \\tag {3}\n\\]"
|
| 411 |
+
},
|
| 412 |
+
{
|
| 413 |
+
"type": "title",
|
| 414 |
+
"bbox": [
|
| 415 |
+
0.509,
|
| 416 |
+
0.648,
|
| 417 |
+
0.804,
|
| 418 |
+
0.662
|
| 419 |
+
],
|
| 420 |
+
"angle": 0,
|
| 421 |
+
"content": "3.2 Markov Decision Process as RL"
|
| 422 |
+
},
|
| 423 |
+
{
|
| 424 |
+
"type": "text",
|
| 425 |
+
"bbox": [
|
| 426 |
+
0.508,
|
| 427 |
+
0.669,
|
| 428 |
+
0.884,
|
| 429 |
+
0.749
|
| 430 |
+
],
|
| 431 |
+
"angle": 0,
|
| 432 |
+
"content": "Equation 3 illustrates the procedure for deriving \\(\\mathcal{Z}_K\\) from an initial state of zero training. However, it may generate degraded rationales at step \\(i\\), where \\(0 < i < K\\). Thus we need to seek for quantifiable objectives between \\(\\mathcal{Z}_i\\) and \\(\\mathcal{Z}_{i+1}\\),"
|
| 433 |
+
},
|
| 434 |
+
{
|
| 435 |
+
"type": "equation",
|
| 436 |
+
"bbox": [
|
| 437 |
+
0.525,
|
| 438 |
+
0.762,
|
| 439 |
+
0.882,
|
| 440 |
+
0.78
|
| 441 |
+
],
|
| 442 |
+
"angle": 0,
|
| 443 |
+
"content": "\\[\n\\mathcal {Z} _ {i + 1} = \\arg \\max _ {\\mathcal {Z} _ {i + 1} \\in \\{X \\backslash \\mathcal {Z} _ {i} \\}} A \\left(\\mathcal {Z} _ {i + 1} \\mid \\mathcal {Z} _ {i}, \\hat {y} _ {c}\\right). \\tag {4}\n\\]"
|
| 444 |
+
},
|
| 445 |
+
{
|
| 446 |
+
"type": "text",
|
| 447 |
+
"bbox": [
|
| 448 |
+
0.509,
|
| 449 |
+
0.791,
|
| 450 |
+
0.884,
|
| 451 |
+
0.841
|
| 452 |
+
],
|
| 453 |
+
"angle": 0,
|
| 454 |
+
"content": "According to Equation 3, we have the causal contribution between \\(\\mathcal{Z}_i\\) and \\(\\mathcal{Z}_{i + 1}\\) .. \\(A(\\mathcal{Z}_{i + 1}|\\mathcal{Z}_i,\\hat{y}_c) =\\) \\(I(\\hat{y}_c,do(\\mathcal{Z}_{i + 1})) - I(\\hat{y}_c,do(\\mathcal{Z}_i))\\) . So,"
|
| 455 |
+
},
|
| 456 |
+
{
|
| 457 |
+
"type": "equation",
|
| 458 |
+
"bbox": [
|
| 459 |
+
0.523,
|
| 460 |
+
0.851,
|
| 461 |
+
0.882,
|
| 462 |
+
0.926
|
| 463 |
+
],
|
| 464 |
+
"angle": 0,
|
| 465 |
+
"content": "\\[\n\\begin{array}{l} A \\left(\\mathcal {Z} _ {i + 1} \\mid \\mathcal {Z} _ {i}, \\hat {y} _ {c}\\right) = - H \\left(\\hat {y} _ {c} \\mid \\mathcal {Z} _ {i + 1}\\right) + H \\left(\\hat {y} _ {c} \\mid \\mathcal {Z} _ {i}\\right) \\\\ = - H \\left(\\hat {y} _ {c} \\mid \\left\\{\\mathcal {Z} _ {i} \\cup \\left\\{z _ {i + 1} \\right\\} \\right\\}\\right) + H \\left(\\hat {y} _ {c} \\mid \\mathcal {Z} _ {i}\\right) \\tag {5} \\\\ = - p _ {\\theta} \\left(\\hat {y} _ {c} \\mid \\mathcal {Z}\\right) \\log \\frac {p _ {\\theta} \\left(\\hat {y} _ {c} \\mid \\mathcal {Z} _ {i}\\right)}{p _ {\\theta} \\left(\\hat {y} _ {c} \\mid \\{\\mathcal {Z} _ {i} \\cup \\left\\{z _ {i + 1} \\right\\} \\}\\right)}, \\\\ \\end{array}\n\\]"
|
| 466 |
+
},
|
| 467 |
+
{
|
| 468 |
+
"type": "page_number",
|
| 469 |
+
"bbox": [
|
| 470 |
+
0.486,
|
| 471 |
+
0.929,
|
| 472 |
+
0.514,
|
| 473 |
+
0.941
|
| 474 |
+
],
|
| 475 |
+
"angle": 0,
|
| 476 |
+
"content": "511"
|
| 477 |
+
}
|
| 478 |
+
],
|
| 479 |
+
[
|
| 480 |
+
{
|
| 481 |
+
"type": "text",
|
| 482 |
+
"bbox": [
|
| 483 |
+
0.113,
|
| 484 |
+
0.085,
|
| 485 |
+
0.489,
|
| 486 |
+
0.132
|
| 487 |
+
],
|
| 488 |
+
"angle": 0,
|
| 489 |
+
"content": "where \\( H(\\hat{y}_c|\\mathcal{Z}_i) \\) is the term of conditional entropy. As a result, Equation 5 explicitly quantifies \\( \\mathcal{Z}_{i + 1} \\)'s effect with previously obtained rationale \\( \\mathcal{Z}_i \\)."
|
| 490 |
+
},
|
| 491 |
+
{
|
| 492 |
+
"type": "text",
|
| 493 |
+
"bbox": [
|
| 494 |
+
0.113,
|
| 495 |
+
0.134,
|
| 496 |
+
0.489,
|
| 497 |
+
0.375
|
| 498 |
+
],
|
| 499 |
+
"angle": 0,
|
| 500 |
+
"content": "To further promote the cooperative game, we model the training process of rationale as a Markov decision process \\(\\mathbb{M} = \\{\\mathbb{S},\\mathbb{A},\\mathbb{P},\\mathbb{R}\\}\\), where \\(\\mathbb{S} = \\{s_i\\}\\) represents set of states abstracting the process of optimizing rationale during training, and \\(\\mathbb{A} = \\{a_i\\}\\) indicates the set of actions. In particular, The transition dynamics \\(\\mathbb{P}(s_{i + 1}|s_i,a_{i + 1})\\) specify how the state \\(s_{i + 1}\\) is updated from the prior state \\(s_i\\) by taking action \\(a_{i + 1}\\). Besides, \\(\\mathbb{R}(s_i,a_{i + 1})\\) quantifies the reward obtained after taking action \\(a_{i + 1}\\) based on the prior state \\(s_i\\). Therefore, cooperative training for rationale can be depicted as the sequence process \\((s_0,a_1,r_1,s_1,\\dots,a_K,r_K,s_K)\\), where the state \\(s_i\\) can be formulated by \\(s_i = Z_i\\) in the \\(i\\)-th update; \\(s_0 = Z_0\\) can be initiated by generator \\(f_{G}(\\cdot)\\)."
|
| 501 |
+
},
|
| 502 |
+
{
|
| 503 |
+
"type": "text",
|
| 504 |
+
"bbox": [
|
| 505 |
+
0.113,
|
| 506 |
+
0.376,
|
| 507 |
+
0.489,
|
| 508 |
+
0.487
|
| 509 |
+
],
|
| 510 |
+
"angle": 0,
|
| 511 |
+
"content": "Nevertheless, the above process exhibits a limitation in its inability to detect learning bias at any given state \\( s_i \\). To address this, we reformulate the sequence process as \\( (<s_0, \\widetilde{a}_0, \\widetilde{r}_0, \\widetilde{s}_0>, a_1, r_1, <s_1, \\widetilde{a}_1, \\widetilde{r}_1, \\widetilde{s}_1>, \\dots, a_K, r_K, <s_K, \\widetilde{a}_K, \\widetilde{r}_K, \\widetilde{s}_K>) \\), where \\( <s_i, \\widetilde{a}_i, \\widetilde{r}_i, \\widetilde{s}_i> \\) indicates process of transitioning from state \\( s_i \\) to \\( \\widetilde{s}_i \\) in the \\( i \\)-th update."
|
| 512 |
+
},
|
| 513 |
+
{
|
| 514 |
+
"type": "text",
|
| 515 |
+
"bbox": [
|
| 516 |
+
0.113,
|
| 517 |
+
0.488,
|
| 518 |
+
0.489,
|
| 519 |
+
0.535
|
| 520 |
+
],
|
| 521 |
+
"angle": 0,
|
| 522 |
+
"content": "Given the state \\( s_i = Z_i \\), we derive the available action space: \\( \\widetilde{\\mathbb{A}}_i = \\{X\\backslash Z_i\\} \\). The searched action can be represented as"
|
| 523 |
+
},
|
| 524 |
+
{
|
| 525 |
+
"type": "equation",
|
| 526 |
+
"bbox": [
|
| 527 |
+
0.27,
|
| 528 |
+
0.542,
|
| 529 |
+
0.488,
|
| 530 |
+
0.559
|
| 531 |
+
],
|
| 532 |
+
"angle": 0,
|
| 533 |
+
"content": "\\[\n\\widetilde {a} _ {i} = \\widetilde {z} _ {i}, \\tag {6}\n\\]"
|
| 534 |
+
},
|
| 535 |
+
{
|
| 536 |
+
"type": "text",
|
| 537 |
+
"bbox": [
|
| 538 |
+
0.113,
|
| 539 |
+
0.563,
|
| 540 |
+
0.489,
|
| 541 |
+
0.611
|
| 542 |
+
],
|
| 543 |
+
"angle": 0,
|
| 544 |
+
"content": "where \\(\\widetilde{z}_i\\in \\{X\\backslash Z_i\\}\\) indicates candidate rationale in action space. Having made the action \\(\\widetilde{a}_i\\) , the state transition is to merge \\(\\widetilde{z}_i\\) into \\(Z_{i}\\) , i.e., \\(\\widetilde{Z}_i = Z_i\\cup \\{\\widetilde{z}_i\\}\\)"
|
| 545 |
+
},
|
| 546 |
+
{
|
| 547 |
+
"type": "text",
|
| 548 |
+
"bbox": [
|
| 549 |
+
0.113,
|
| 550 |
+
0.611,
|
| 551 |
+
0.489,
|
| 552 |
+
0.653
|
| 553 |
+
],
|
| 554 |
+
"angle": 0,
|
| 555 |
+
"content": "To assess the effectiveness of the action \\(\\widetilde{a}_i\\) in mitigating the learning bias of the model, the reward \\(\\widetilde{\\mathbb{R}}_i(\\widetilde{s}_i,\\widetilde{a}_i)\\) at state \\(s_i\\) can be formulated as follows:"
|
| 556 |
+
},
|
| 557 |
+
{
|
| 558 |
+
"type": "equation",
|
| 559 |
+
"bbox": [
|
| 560 |
+
0.133,
|
| 561 |
+
0.657,
|
| 562 |
+
0.488,
|
| 563 |
+
0.684
|
| 564 |
+
],
|
| 565 |
+
"angle": 0,
|
| 566 |
+
"content": "\\[\n\\widetilde {\\mathbb {R}} _ {i} = \\left\\{ \\begin{array}{l l} A \\left(\\widetilde {z _ {i}} \\mid Z _ {i}, \\hat {y} _ {c} ^ {*}\\right) + 1, & i f f _ {P} \\left(Z _ {i} \\cup \\{\\widetilde {z _ {i}} \\}\\right) = \\hat {y} _ {c} ^ {*} \\\\ A \\left(\\widetilde {z _ {i}} \\mid Z _ {i}, \\hat {y} _ {c} ^ {*}\\right) - 1, & o t h e r w i s e. \\end{array} \\right. \\tag {7}\n\\]"
|
| 567 |
+
},
|
| 568 |
+
{
|
| 569 |
+
"type": "text",
|
| 570 |
+
"bbox": [
|
| 571 |
+
0.113,
|
| 572 |
+
0.688,
|
| 573 |
+
0.489,
|
| 574 |
+
0.737
|
| 575 |
+
],
|
| 576 |
+
"angle": 0,
|
| 577 |
+
"content": "According to Equation 5, although we can quantify the probabilities at states \\(\\widetilde{s}_i\\) and \\(s_i\\), and present the relevant reward \\(\\widetilde{\\mathbb{R}}_i\\), obtaining \\(y_c^*\\) poses a challenge."
|
| 578 |
+
},
|
| 579 |
+
{
|
| 580 |
+
"type": "title",
|
| 581 |
+
"bbox": [
|
| 582 |
+
0.114,
|
| 583 |
+
0.745,
|
| 584 |
+
0.298,
|
| 585 |
+
0.76
|
| 586 |
+
],
|
| 587 |
+
"angle": 0,
|
| 588 |
+
"content": "3.3 Pretrained Agent"
|
| 589 |
+
},
|
| 590 |
+
{
|
| 591 |
+
"type": "text",
|
| 592 |
+
"bbox": [
|
| 593 |
+
0.113,
|
| 594 |
+
0.766,
|
| 595 |
+
0.489,
|
| 596 |
+
0.862
|
| 597 |
+
],
|
| 598 |
+
"angle": 0,
|
| 599 |
+
"content": "To address the limitation, we propose a reinforced causal agent in the aforementioned causal and reinforcement learning framework to better align the probability distribution of the target prediction and theoretically justify the creation of an auxiliary agent targeting \\(\\hat{y}_c\\)."
|
| 600 |
+
},
|
| 601 |
+
{
|
| 602 |
+
"type": "text",
|
| 603 |
+
"bbox": [
|
| 604 |
+
0.113,
|
| 605 |
+
0.863,
|
| 606 |
+
0.489,
|
| 607 |
+
0.895
|
| 608 |
+
],
|
| 609 |
+
"angle": 0,
|
| 610 |
+
"content": "Pretrained Embedding. We pretrain the auxiliary agent, denoted as \\( f_{A}(\\cdot) \\), with"
|
| 611 |
+
},
|
| 612 |
+
{
|
| 613 |
+
"type": "equation",
|
| 614 |
+
"bbox": [
|
| 615 |
+
0.13,
|
| 616 |
+
0.899,
|
| 617 |
+
0.488,
|
| 618 |
+
0.925
|
| 619 |
+
],
|
| 620 |
+
"angle": 0,
|
| 621 |
+
"content": "\\[\n\\theta_ {A} ^ {*} = \\arg \\min _ {\\theta_ {A}} \\mathcal {H} (Y, \\hat {Y} | X), s. t. \\hat {Y} = f _ {A} (X), \\tag {8}\n\\]"
|
| 622 |
+
},
|
| 623 |
+
{
|
| 624 |
+
"type": "text",
|
| 625 |
+
"bbox": [
|
| 626 |
+
0.508,
|
| 627 |
+
0.085,
|
| 628 |
+
0.883,
|
| 629 |
+
0.116
|
| 630 |
+
],
|
| 631 |
+
"angle": 0,
|
| 632 |
+
"content": "where \\(\\theta_{A}\\) represents the parameters of the agent, and \\(\\theta_{A}^{*}\\) denotes the optimal solution."
|
| 633 |
+
},
|
| 634 |
+
{
|
| 635 |
+
"type": "text",
|
| 636 |
+
"bbox": [
|
| 637 |
+
0.508,
|
| 638 |
+
0.117,
|
| 639 |
+
0.883,
|
| 640 |
+
0.182
|
| 641 |
+
],
|
| 642 |
+
"angle": 0,
|
| 643 |
+
"content": "Theorem Analysis. Assuming \\(X, Z, Y\\), and \\(\\mathcal{A}\\) as random variables in rationalization representing the input, rationale, label, and auxiliary variable, respectively, we propose:"
|
| 644 |
+
},
|
| 645 |
+
{
|
| 646 |
+
"type": "text",
|
| 647 |
+
"bbox": [
|
| 648 |
+
0.508,
|
| 649 |
+
0.184,
|
| 650 |
+
0.884,
|
| 651 |
+
0.248
|
| 652 |
+
],
|
| 653 |
+
"angle": 0,
|
| 654 |
+
"content": "Lemma 1. Given \\(X\\), \\(Z\\), \\(Y\\), \\(\\hat{Y} = f_{P}(f_{G}(X))\\). Existing a guiding variable \\(\\mathcal{A}\\) could enable the predictor \\(f_{P}(\\cdot)\\) to achieve good predictions. That is, a solution for \\(\\mathcal{A}\\) exists, and \\(X\\) is a solution of \\(\\mathcal{A}\\)."
|
| 655 |
+
},
|
| 656 |
+
{
|
| 657 |
+
"type": "text",
|
| 658 |
+
"bbox": [
|
| 659 |
+
0.508,
|
| 660 |
+
0.252,
|
| 661 |
+
0.883,
|
| 662 |
+
0.316
|
| 663 |
+
],
|
| 664 |
+
"angle": 0,
|
| 665 |
+
"content": "The proof is provided in Appendix A. Lemma 1 suggests that constructing an auxiliary variable \\(\\mathcal{A}\\) aligned with \\(X\\) for rationalization contributes to the learning of a good prediction."
|
| 666 |
+
},
|
| 667 |
+
{
|
| 668 |
+
"type": "title",
|
| 669 |
+
"bbox": [
|
| 670 |
+
0.509,
|
| 671 |
+
0.326,
|
| 672 |
+
0.805,
|
| 673 |
+
0.342
|
| 674 |
+
],
|
| 675 |
+
"angle": 0,
|
| 676 |
+
"content": "4 Agent-Guided Rationalization"
|
| 677 |
+
},
|
| 678 |
+
{
|
| 679 |
+
"type": "text",
|
| 680 |
+
"bbox": [
|
| 681 |
+
0.508,
|
| 682 |
+
0.351,
|
| 683 |
+
0.884,
|
| 684 |
+
0.416
|
| 685 |
+
],
|
| 686 |
+
"angle": 0,
|
| 687 |
+
"content": "As depicted in Figure 2, following the establishment of the environment for the reinforced causal agent, we delineate the construction and training of the policy network \\( q_{\\phi} \\)."
|
| 688 |
+
},
|
| 689 |
+
{
|
| 690 |
+
"type": "title",
|
| 691 |
+
"bbox": [
|
| 692 |
+
0.509,
|
| 693 |
+
0.425,
|
| 694 |
+
0.783,
|
| 695 |
+
0.44
|
| 696 |
+
],
|
| 697 |
+
"angle": 0,
|
| 698 |
+
"content": "4.1 Policy Network Architecture"
|
| 699 |
+
},
|
| 700 |
+
{
|
| 701 |
+
"type": "text",
|
| 702 |
+
"bbox": [
|
| 703 |
+
0.508,
|
| 704 |
+
0.445,
|
| 705 |
+
0.882,
|
| 706 |
+
0.478
|
| 707 |
+
],
|
| 708 |
+
"angle": 0,
|
| 709 |
+
"content": "It takes the pair of intermediate state \\(\\mathcal{Z}_i\\) and \\(\\hat{y}_c\\) provided by \\(f_{A}(\\cdot)\\) as input. Formally,"
|
| 710 |
+
},
|
| 711 |
+
{
|
| 712 |
+
"type": "equation",
|
| 713 |
+
"bbox": [
|
| 714 |
+
0.635,
|
| 715 |
+
0.484,
|
| 716 |
+
0.883,
|
| 717 |
+
0.502
|
| 718 |
+
],
|
| 719 |
+
"angle": 0,
|
| 720 |
+
"content": "\\[\n\\widetilde {z} _ {i} \\sim q _ {\\phi} \\left(\\mathcal {Z} _ {i}, \\hat {y} _ {c}\\right), \\tag {9}\n\\]"
|
| 721 |
+
},
|
| 722 |
+
{
|
| 723 |
+
"type": "text",
|
| 724 |
+
"bbox": [
|
| 725 |
+
0.508,
|
| 726 |
+
0.508,
|
| 727 |
+
0.884,
|
| 728 |
+
0.556
|
| 729 |
+
],
|
| 730 |
+
"angle": 0,
|
| 731 |
+
"content": "where \\(\\theta_{\\phi}\\) is the trainable parameters of the policy network, and \\(\\widetilde{z}_i\\) is generated according to the probability of next action \\(\\mathbb{P}_{\\phi}(\\widetilde{z}_i|\\mathcal{Z}_i,\\hat{y}_c)\\)."
|
| 732 |
+
},
|
| 733 |
+
{
|
| 734 |
+
"type": "text",
|
| 735 |
+
"bbox": [
|
| 736 |
+
0.508,
|
| 737 |
+
0.557,
|
| 738 |
+
0.884,
|
| 739 |
+
0.636
|
| 740 |
+
],
|
| 741 |
+
"angle": 0,
|
| 742 |
+
"content": "Representation learning of action candidates. With the space of action candidates \\(\\widetilde{\\mathbb{A}}_i = X\\backslash \\mathcal{Z}_i\\) our policy network first learns the representation for each action candidate \\(\\widetilde{a}_{i}^{(j)}(0 < j < N)\\), where \\(N\\) is the number of candidates."
|
| 743 |
+
},
|
| 744 |
+
{
|
| 745 |
+
"type": "text",
|
| 746 |
+
"bbox": [
|
| 747 |
+
0.508,
|
| 748 |
+
0.639,
|
| 749 |
+
0.882,
|
| 750 |
+
0.671
|
| 751 |
+
],
|
| 752 |
+
"angle": 0,
|
| 753 |
+
"content": "Then, we employ the encoder to encode \\( X \\backslash \\mathcal{Z}_i \\) for obtaining the action representation of \\( \\widetilde{z}_i \\) by"
|
| 754 |
+
},
|
| 755 |
+
{
|
| 756 |
+
"type": "equation",
|
| 757 |
+
"bbox": [
|
| 758 |
+
0.608,
|
| 759 |
+
0.677,
|
| 760 |
+
0.883,
|
| 761 |
+
0.695
|
| 762 |
+
],
|
| 763 |
+
"angle": 0,
|
| 764 |
+
"content": "\\[\ne _ {\\widetilde {z} _ {i}} = \\operatorname {e n c o d e r} \\left(X \\backslash \\mathcal {Z} _ {i}\\right), \\tag {10}\n\\]"
|
| 765 |
+
},
|
| 766 |
+
{
|
| 767 |
+
"type": "text",
|
| 768 |
+
"bbox": [
|
| 769 |
+
0.508,
|
| 770 |
+
0.701,
|
| 771 |
+
0.882,
|
| 772 |
+
0.732
|
| 773 |
+
],
|
| 774 |
+
"angle": 0,
|
| 775 |
+
"content": "utilizing bidirectional Gated Recurrent Units (GRUs) (Cho et al., 2014) as the encoder."
|
| 776 |
+
},
|
| 777 |
+
{
|
| 778 |
+
"type": "text",
|
| 779 |
+
"bbox": [
|
| 780 |
+
0.508,
|
| 781 |
+
0.733,
|
| 782 |
+
0.882,
|
| 783 |
+
0.797
|
| 784 |
+
],
|
| 785 |
+
"angle": 0,
|
| 786 |
+
"content": "Sampling of action. The policy network aims to select a singular action \\(\\widetilde{a}_i = \\widetilde{z}_i\\) from the search space, prioritizing its relevance to the current state \\(s_i = \\mathcal{Z}_i\\). This selection process is modeled as:"
|
| 787 |
+
},
|
| 788 |
+
{
|
| 789 |
+
"type": "equation",
|
| 790 |
+
"bbox": [
|
| 791 |
+
0.601,
|
| 792 |
+
0.804,
|
| 793 |
+
0.883,
|
| 794 |
+
0.822
|
| 795 |
+
],
|
| 796 |
+
"angle": 0,
|
| 797 |
+
"content": "\\[\np _ {\\widetilde {z} _ {i}} = M L P \\left(\\left[ \\boldsymbol {e} _ {\\widetilde {z} _ {i}}; \\boldsymbol {e} _ {\\mathcal {Z} _ {i}} \\right]\\right), \\tag {11}\n\\]"
|
| 798 |
+
},
|
| 799 |
+
{
|
| 800 |
+
"type": "text",
|
| 801 |
+
"bbox": [
|
| 802 |
+
0.508,
|
| 803 |
+
0.828,
|
| 804 |
+
0.883,
|
| 805 |
+
0.876
|
| 806 |
+
],
|
| 807 |
+
"angle": 0,
|
| 808 |
+
"content": "where \\(e_{\\mathcal{Z}_i}\\) indicates the current rationale's representation. The selection probability for each action candidate within \\(\\widetilde{\\mathbb{A}}_i\\) is computed using"
|
| 809 |
+
},
|
| 810 |
+
{
|
| 811 |
+
"type": "equation",
|
| 812 |
+
"bbox": [
|
| 813 |
+
0.57,
|
| 814 |
+
0.882,
|
| 815 |
+
0.883,
|
| 816 |
+
0.901
|
| 817 |
+
],
|
| 818 |
+
"angle": 0,
|
| 819 |
+
"content": "\\[\n\\mathbb {P} _ {\\phi} \\left(\\widetilde {z} _ {i} \\mid \\mathcal {Z} _ {i}, \\hat {y} _ {c}\\right) = \\operatorname {s o f t m a x} _ {\\widetilde {\\mathbb {A}} _ {i}} \\left(p _ {\\widetilde {z} _ {i}}\\right), \\tag {12}\n\\]"
|
| 820 |
+
},
|
| 821 |
+
{
|
| 822 |
+
"type": "text",
|
| 823 |
+
"bbox": [
|
| 824 |
+
0.509,
|
| 825 |
+
0.906,
|
| 826 |
+
0.839,
|
| 827 |
+
0.921
|
| 828 |
+
],
|
| 829 |
+
"angle": 0,
|
| 830 |
+
"content": "where \\(\\phi\\) is the parameters collected of MLP."
|
| 831 |
+
},
|
| 832 |
+
{
|
| 833 |
+
"type": "page_number",
|
| 834 |
+
"bbox": [
|
| 835 |
+
0.486,
|
| 836 |
+
0.928,
|
| 837 |
+
0.517,
|
| 838 |
+
0.941
|
| 839 |
+
],
|
| 840 |
+
"angle": 0,
|
| 841 |
+
"content": "512"
|
| 842 |
+
}
|
| 843 |
+
],
|
| 844 |
+
[
|
| 845 |
+
{
|
| 846 |
+
"type": "table",
|
| 847 |
+
"bbox": [
|
| 848 |
+
0.137,
|
| 849 |
+
0.082,
|
| 850 |
+
0.865,
|
| 851 |
+
0.228
|
| 852 |
+
],
|
| 853 |
+
"angle": 0,
|
| 854 |
+
"content": "<table><tr><td rowspan=\"2\">Methods</td><td rowspan=\"2\">S</td><td colspan=\"3\">Appearance</td><td colspan=\"3\">Aroma</td><td colspan=\"3\">Palate</td></tr><tr><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>RNP (Lei et al., 2016)</td><td>20</td><td>39.4</td><td>44.9</td><td>42.0</td><td>37.5</td><td>51.9</td><td>43.5</td><td>21.6</td><td>38.9</td><td>27.8</td></tr><tr><td>HardKuma (Bastings et al., 2019)</td><td>20</td><td>64.9</td><td>69.2</td><td>67.0</td><td>37.0</td><td>55.8</td><td>44.5</td><td>14.6</td><td>22.3</td><td>17.7</td></tr><tr><td>IB (Paranjape et al., 2020)</td><td>20</td><td>59.3</td><td>69.0</td><td>63.8</td><td>38.6</td><td>55.5</td><td>45.6</td><td>21.6</td><td>48.5</td><td>29.9</td></tr><tr><td>INVRAT (Chang et al., 2020)</td><td>20</td><td>58.9</td><td>67.2</td><td>62.8</td><td>29.3</td><td>52.1</td><td>37.5</td><td>24.0</td><td>55.2</td><td>33.5</td></tr><tr><td>DARE (Yue et al., 2022)</td><td>20</td><td>63.7</td><td>71.8</td><td>67.5</td><td>41.0</td><td>61.5</td><td>49.3</td><td>24.4</td><td>54.9</td><td>33.8</td></tr><tr><td>FR (Liu et al., 2022)</td><td>20</td><td>74.9</td><td>84.9</td><td>79.6</td><td>58.7</td><td>73.3</td><td>65.2</td><td>36.6</td><td>59.4</td><td>45.3</td></tr><tr><td>Inter-RAT (Yue et al., 2023)</td><td>20</td><td>62.0</td><td>76.7</td><td>68.6</td><td>44.2</td><td>65.4</td><td>52.8</td><td>26.3</td><td>59.1</td><td>36.4</td></tr><tr><td>MGR (Liu et al., 2023b)</td><td>20</td><td>76.3</td><td>83.6</td><td>79.8</td><td>64.4</td><td>81.3</td><td>71.9</td><td>47.1</td><td>73.1</td><td>57.3</td></tr><tr><td>AGR(Ours)</td><td>20</td><td>83.7</td><td>87.5</td><td>85.6</td><td>67.5</td><td>81.4</td><td>73.8</td><td>47.6</td><td>77.7</td><td>59.0</td></tr></table>"
|
| 855 |
+
},
|
| 856 |
+
{
|
| 857 |
+
"type": "table_caption",
|
| 858 |
+
"bbox": [
|
| 859 |
+
0.114,
|
| 860 |
+
0.237,
|
| 861 |
+
0.882,
|
| 862 |
+
0.251
|
| 863 |
+
],
|
| 864 |
+
"angle": 0,
|
| 865 |
+
"content": "Table 1: Results on BeerAdvocate, where Bold text indicates the best experimental results across different methods."
|
| 866 |
+
},
|
| 867 |
+
{
|
| 868 |
+
"type": "table",
|
| 869 |
+
"bbox": [
|
| 870 |
+
0.123,
|
| 871 |
+
0.268,
|
| 872 |
+
0.486,
|
| 873 |
+
0.338
|
| 874 |
+
],
|
| 875 |
+
"angle": 0,
|
| 876 |
+
"content": "<table><tr><td rowspan=\"2\">Methods</td><td colspan=\"4\">Appearance</td><td colspan=\"4\">Appearance</td><td colspan=\"4\">Appearance</td></tr><tr><td>S</td><td>P</td><td>R</td><td>F1</td><td>S</td><td>P</td><td>R</td><td>F1</td><td>S</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>RNP</td><td>10</td><td>32.4</td><td>18.6</td><td>23.6</td><td>20</td><td>39.4</td><td>44.9</td><td>42.0</td><td>30</td><td>24.2</td><td>41.2</td><td>30.5</td></tr><tr><td>DARE</td><td>10</td><td>63.9</td><td>42.8</td><td>51.3</td><td>20</td><td>63.7</td><td>71.8</td><td>67.5</td><td>30</td><td>45.5</td><td>80.6</td><td>58.1</td></tr><tr><td>FR</td><td>10</td><td>70.4</td><td>42.0</td><td>52.6</td><td>20</td><td>74.9</td><td>84.9</td><td>79.6</td><td>30</td><td>50.6</td><td>81.4</td><td>62.3</td></tr><tr><td>Inter-RAT</td><td>10</td><td>66.0</td><td>46.5</td><td>54.6</td><td>20</td><td>62.0</td><td>76.7</td><td>68.6</td><td>30</td><td>48.1</td><td>82.7</td><td>60.8</td></tr><tr><td>MGR</td><td>10</td><td>87.5</td><td>51.7</td><td>65.0</td><td>20</td><td>76.3</td><td>83.6</td><td>79.8</td><td>30</td><td>57.2</td><td>93.9</td><td>71.1</td></tr><tr><td>AGR</td><td>10</td><td>83.5</td><td>54.9</td><td>66.2</td><td>20</td><td>83.7</td><td>87.5</td><td>85.6</td><td>30</td><td>59.7</td><td>94.3</td><td>73.1</td></tr></table>"
|
| 877 |
+
},
|
| 878 |
+
{
|
| 879 |
+
"type": "table_caption",
|
| 880 |
+
"bbox": [
|
| 881 |
+
0.114,
|
| 882 |
+
0.348,
|
| 883 |
+
0.486,
|
| 884 |
+
0.362
|
| 885 |
+
],
|
| 886 |
+
"angle": 0,
|
| 887 |
+
"content": "Table 2: The different sparsity results on BeerAdvocate."
|
| 888 |
+
},
|
| 889 |
+
{
|
| 890 |
+
"type": "title",
|
| 891 |
+
"bbox": [
|
| 892 |
+
0.114,
|
| 893 |
+
0.381,
|
| 894 |
+
0.358,
|
| 895 |
+
0.397
|
| 896 |
+
],
|
| 897 |
+
"angle": 0,
|
| 898 |
+
"content": "4.2 Policy Gradient Training"
|
| 899 |
+
},
|
| 900 |
+
{
|
| 901 |
+
"type": "text",
|
| 902 |
+
"bbox": [
|
| 903 |
+
0.114,
|
| 904 |
+
0.402,
|
| 905 |
+
0.489,
|
| 906 |
+
0.466
|
| 907 |
+
],
|
| 908 |
+
"angle": 0,
|
| 909 |
+
"content": "Since discrete sampling within the policy network blocks gradients, we adopt policy gradient-based training framework REINFORCE (Sutton et al., 1999). The objective \\(\\max_{\\Omega}(\\mathbb{L})\\) is as follows:"
|
| 910 |
+
},
|
| 911 |
+
{
|
| 912 |
+
"type": "equation",
|
| 913 |
+
"bbox": [
|
| 914 |
+
0.124,
|
| 915 |
+
0.475,
|
| 916 |
+
0.488,
|
| 917 |
+
0.5
|
| 918 |
+
],
|
| 919 |
+
"angle": 0,
|
| 920 |
+
"content": "\\[\n\\max _ {\\phi} \\mathbb {E} _ {\\mathcal {Z} _ {i} \\in \\widetilde {\\mathbb {A}} _ {i}} \\mathbb {E} _ {i} \\left[ \\widetilde {\\mathbb {R}} \\left(\\mathcal {Z} _ {i}, \\bar {z} _ {i}\\right) \\log \\mathcal {P} _ {\\phi} \\left(\\bar {z} _ {i} \\mid \\mathcal {Z} _ {i}, \\hat {y} _ {c}\\right) \\right]. \\tag {13}\n\\]"
|
| 921 |
+
},
|
| 922 |
+
{
|
| 923 |
+
"type": "text",
|
| 924 |
+
"bbox": [
|
| 925 |
+
0.114,
|
| 926 |
+
0.51,
|
| 927 |
+
0.488,
|
| 928 |
+
0.526
|
| 929 |
+
],
|
| 930 |
+
"angle": 0,
|
| 931 |
+
"content": "The final task loss is a jointly optimized objective:"
|
| 932 |
+
},
|
| 933 |
+
{
|
| 934 |
+
"type": "equation",
|
| 935 |
+
"bbox": [
|
| 936 |
+
0.114,
|
| 937 |
+
0.535,
|
| 938 |
+
0.49,
|
| 939 |
+
0.574
|
| 940 |
+
],
|
| 941 |
+
"angle": 0,
|
| 942 |
+
"content": "\\[\n\\min _ {\\theta_ {g}, \\theta_ {p}} \\mathcal {H} (Y, \\hat {Y}) + \\Omega (M) - \\Omega (\\mathbb {L}), s. t. \\hat {Y} = f _ {P} \\left(f _ {G} (X)\\right) \\tag {14}\n\\]"
|
| 943 |
+
},
|
| 944 |
+
{
|
| 945 |
+
"type": "title",
|
| 946 |
+
"bbox": [
|
| 947 |
+
0.114,
|
| 948 |
+
0.586,
|
| 949 |
+
0.26,
|
| 950 |
+
0.603
|
| 951 |
+
],
|
| 952 |
+
"angle": 0,
|
| 953 |
+
"content": "5 Experiments"
|
| 954 |
+
},
|
| 955 |
+
{
|
| 956 |
+
"type": "title",
|
| 957 |
+
"bbox": [
|
| 958 |
+
0.114,
|
| 959 |
+
0.612,
|
| 960 |
+
0.433,
|
| 961 |
+
0.641
|
| 962 |
+
],
|
| 963 |
+
"angle": 0,
|
| 964 |
+
"content": "5.1 Datasets, Baselines and Evaluation Metrics"
|
| 965 |
+
},
|
| 966 |
+
{
|
| 967 |
+
"type": "text",
|
| 968 |
+
"bbox": [
|
| 969 |
+
0.113,
|
| 970 |
+
0.648,
|
| 971 |
+
0.489,
|
| 972 |
+
0.791
|
| 973 |
+
],
|
| 974 |
+
"angle": 0,
|
| 975 |
+
"content": "Datasets. We compare AGR using BeerAdvocate (McAuley et al., 2012) and HotelReview (Wang et al., 2010) datasets, which are two multi-aspect sentiment classification datasets widely used in rationalization. Following existing work, we obtain the data in the same way as Yue et al. (2023) for BeerAdvocate, and we preprocess HotelReview dataset in the same way as Huang et al. (2021) and Liu et al. (2023b)."
|
| 976 |
+
},
|
| 977 |
+
{
|
| 978 |
+
"type": "text",
|
| 979 |
+
"bbox": [
|
| 980 |
+
0.113,
|
| 981 |
+
0.794,
|
| 982 |
+
0.49,
|
| 983 |
+
0.922
|
| 984 |
+
],
|
| 985 |
+
"angle": 0,
|
| 986 |
+
"content": "Baselines. We compare with eight models for Beer-Advocate, including three sampling-based methods: RNP (Lei et al., 2016), HardKuma (Bastings et al., 2019), Information Bottleneck (IB) (Paranjape et al., 2020), and three extra-component-based methods: DARE (Yue et al., 2022), FR (Liu et al., 2022), MGR (Liu et al., 2023b), and two causal-based methods: INVRAT (Chang et al., 2020),"
|
| 987 |
+
},
|
| 988 |
+
{
|
| 989 |
+
"type": "table",
|
| 990 |
+
"bbox": [
|
| 991 |
+
0.521,
|
| 992 |
+
0.268,
|
| 993 |
+
0.879,
|
| 994 |
+
0.499
|
| 995 |
+
],
|
| 996 |
+
"angle": 0,
|
| 997 |
+
"content": "<table><tr><td colspan=\"2\">Methods</td><td>S</td><td>P</td><td>R</td><td>F1</td></tr><tr><td rowspan=\"5\">Location</td><td>RNP (Lei et al., 2016)</td><td>10.9</td><td>43.3</td><td>55.5</td><td>48.6</td></tr><tr><td>CAR (Chang et al., 2019)</td><td>10.6</td><td>46.6</td><td>58.1</td><td>51.7</td></tr><tr><td>DMR (Huang et al., 2021)</td><td>10.7</td><td>47.5</td><td>60.1</td><td>53.1</td></tr><tr><td>A2R (Yu et al., 2021)</td><td>8.5</td><td>43.1</td><td>43.2</td><td>43.1</td></tr><tr><td>MGR (Liu et al., 2023b)</td><td>9.7</td><td>52.5</td><td>60.5</td><td>56.2</td></tr><tr><td colspan=\"2\">AGR(Ours)</td><td>9.3</td><td>54.9</td><td>60.5</td><td>57.6</td></tr><tr><td colspan=\"2\"></td><td>S</td><td>P</td><td>R</td><td>F1</td></tr><tr><td rowspan=\"5\">Service</td><td>RNP (Lei et al., 2016)</td><td>11.0</td><td>40.0</td><td>38.2</td><td>39.1</td></tr><tr><td>CAR (Chang et al., 2019)</td><td>11.7</td><td>40.7</td><td>41.4</td><td>41.1</td></tr><tr><td>DMR (Huang et al., 2021)</td><td>11.6</td><td>43.0</td><td>43.6</td><td>43.3</td></tr><tr><td>A2R (Yu et al., 2021)</td><td>11.4</td><td>37.3</td><td>37.2</td><td>37.2</td></tr><tr><td>MGR (Liu et al., 2023b)</td><td>11.8</td><td>45.0</td><td>46.4</td><td>45.7</td></tr><tr><td colspan=\"2\">AGR(Ours)</td><td>12.3</td><td>45.9</td><td>49.3</td><td>47.6</td></tr><tr><td colspan=\"2\"></td><td>S</td><td>P</td><td>R</td><td>F1</td></tr><tr><td rowspan=\"5\">Cleanliness</td><td>RNP (Lei et al., 2016)</td><td>10.6</td><td>30.5</td><td>36.0</td><td>33.0</td></tr><tr><td>CAR (Chang et al., 2019)</td><td>9.9</td><td>32.3</td><td>35.7</td><td>33.9</td></tr><tr><td>DMR (Huang et al., 2021)</td><td>10.3</td><td>31.4</td><td>36.4</td><td>33.7</td></tr><tr><td>A2R (Yu et al., 2021)</td><td>8.9</td><td>33.2</td><td>33.3</td><td>33.3</td></tr><tr><td>MGR (Liu et al., 2023b)</td><td>10.5</td><td>37.6</td><td>44.5</td><td>40.7</td></tr><tr><td colspan=\"2\">AGR(Ours)</td><td>10.3</td><td>39.0</td><td>45.5</td><td>42.0</td></tr></table>"
|
| 998 |
+
},
|
| 999 |
+
{
|
| 1000 |
+
"type": "table_caption",
|
| 1001 |
+
"bbox": [
|
| 1002 |
+
0.523,
|
| 1003 |
+
0.508,
|
| 1004 |
+
0.868,
|
| 1005 |
+
0.523
|
| 1006 |
+
],
|
| 1007 |
+
"angle": 0,
|
| 1008 |
+
"content": "Table 3: The experimental results on HotelReview."
|
| 1009 |
+
},
|
| 1010 |
+
{
|
| 1011 |
+
"type": "text",
|
| 1012 |
+
"bbox": [
|
| 1013 |
+
0.508,
|
| 1014 |
+
0.543,
|
| 1015 |
+
0.884,
|
| 1016 |
+
0.623
|
| 1017 |
+
],
|
| 1018 |
+
"angle": 0,
|
| 1019 |
+
"content": "Inter-RAT (Yue et al., 2023). For HotelReview dataset, we compare with five models, including RNP (Lei et al., 2016), CAR (Chang et al., 2019), DMR (Huang et al., 2021), A2R (Yu et al., 2021), and MGR (Liu et al., 2023b)."
|
| 1020 |
+
},
|
| 1021 |
+
{
|
| 1022 |
+
"type": "text",
|
| 1023 |
+
"bbox": [
|
| 1024 |
+
0.508,
|
| 1025 |
+
0.625,
|
| 1026 |
+
0.884,
|
| 1027 |
+
0.737
|
| 1028 |
+
],
|
| 1029 |
+
"angle": 0,
|
| 1030 |
+
"content": "Evaluation Metrics. Following (Huang et al., 2021; Yu et al., 2021; Yue et al., 2023; Liu et al., 2023b), we focus on the quality of rationales, and adopt Precision (P), Recall (R), and F1-score (F1) as metrics. We perform the best results on the validation set before testing on the test set. The Appendix B provides further details in this section."
|
| 1031 |
+
},
|
| 1032 |
+
{
|
| 1033 |
+
"type": "title",
|
| 1034 |
+
"bbox": [
|
| 1035 |
+
0.509,
|
| 1036 |
+
0.753,
|
| 1037 |
+
0.759,
|
| 1038 |
+
0.769
|
| 1039 |
+
],
|
| 1040 |
+
"angle": 0,
|
| 1041 |
+
"content": "5.2 Performance Comparison"
|
| 1042 |
+
},
|
| 1043 |
+
{
|
| 1044 |
+
"type": "text",
|
| 1045 |
+
"bbox": [
|
| 1046 |
+
0.508,
|
| 1047 |
+
0.776,
|
| 1048 |
+
0.884,
|
| 1049 |
+
0.887
|
| 1050 |
+
],
|
| 1051 |
+
"angle": 0,
|
| 1052 |
+
"content": "Results on BeerAdvocate. As shown in Table 1, our proposed method AGR outperforms all the eight baselines in terms of three aspects for BeerAdvocate dataset. Furthermore, in sparsity experiments (Table 2), AGR consistently outperforms the latest state-of-the-art results, affirming its effectiveness for selective rationalization."
|
| 1053 |
+
},
|
| 1054 |
+
{
|
| 1055 |
+
"type": "text",
|
| 1056 |
+
"bbox": [
|
| 1057 |
+
0.509,
|
| 1058 |
+
0.89,
|
| 1059 |
+
0.884,
|
| 1060 |
+
0.922
|
| 1061 |
+
],
|
| 1062 |
+
"angle": 0,
|
| 1063 |
+
"content": "Results on HotelReview. Table 3 shows that our model once again obtains the best performance"
|
| 1064 |
+
},
|
| 1065 |
+
{
|
| 1066 |
+
"type": "page_number",
|
| 1067 |
+
"bbox": [
|
| 1068 |
+
0.486,
|
| 1069 |
+
0.929,
|
| 1070 |
+
0.516,
|
| 1071 |
+
0.941
|
| 1072 |
+
],
|
| 1073 |
+
"angle": 0,
|
| 1074 |
+
"content": "513"
|
| 1075 |
+
}
|
| 1076 |
+
],
|
| 1077 |
+
[
|
| 1078 |
+
{
|
| 1079 |
+
"type": "table_caption",
|
| 1080 |
+
"bbox": [
|
| 1081 |
+
0.114,
|
| 1082 |
+
0.083,
|
| 1083 |
+
0.883,
|
| 1084 |
+
0.111
|
| 1085 |
+
],
|
| 1086 |
+
"angle": 0,
|
| 1087 |
+
"content": "Table 4: Examples of generated rationales. Human-annotated rationales are underlined. Rationales from three models are highlighted in blue and are denoted as \\( {Z}_{1},{Z}_{2} \\) and \\( {Z}_{3} \\) respectively."
|
| 1088 |
+
},
|
| 1089 |
+
{
|
| 1090 |
+
"type": "table",
|
| 1091 |
+
"bbox": [
|
| 1092 |
+
0.12,
|
| 1093 |
+
0.123,
|
| 1094 |
+
0.879,
|
| 1095 |
+
0.466
|
| 1096 |
+
],
|
| 1097 |
+
"angle": 0,
|
| 1098 |
+
"content": "<table><tr><td>FR (2022)</td><td>MGR (2023b)</td><td>AGR (Ours)</td></tr><tr><td>Aspect: Beer-Appearance\nLabel: Positive, Pred: Positive\nText: i picked this beer up on a whim as i was in the mood for a good coffee stout and the siren-like figure somehow told me this is the beer for you . a bit freaky , but i went with it . i was impressed from the very first pour . like any stout , the color is a dark molasses black . but … the head was thick and dense with good retention . the coffee aroma was intense ! the roasted goodness almost overwhelms my sense of smell .the roasted coffee flavors are the first things that i could taste along with hints of chocolate . however , i can tell there ’s more complexity here than my palette can decipher . the coffee flavors bring bitterness but it ’s not over powering as the sweetness of the malt cuts the bitterness quite nicely the beer has carbonation but once the bubbles have escaped the beer gives a creamy , velvety feel and finish . the alcohol was very well hidden in this beer which is scary …</td><td>Aspect: Beer-Appearance\nLabel: Positive, Pred: Positive\nText: i picked this beer up on a whim as i was in the mood for a good coffee stout and the siren-like figure somehow told me this is the beer for you . a bit freaky , but i went with it . i was impressed from the very first pour . like any stout , the color is a dark molasses black . but … the head was thick and dense with good retention . the caffeine aroma was intense ! the roasted goodness almost overwhelms my sense of smell .the roasted coffee flavors are the first things that i could taste along with hints of chocolate . however , i can tell there ’s more complexity here than my palette can decipher . the coffee flavors bring bitterness but it ’s not over powering as the sweetness of the malt cuts the bitterness quite nicely the beer has carbonation but once the bubbles have escaped the beer gives a creamy , velvety feel and finish . the alcoholwas very well hidden in this beer which is scary …</td><td>Aspect: Beer-Appearance\nLabel: Positive, Pred: Positive\nText: i picked this beer up on a whim as i was in the mood for a good coffee stout and the siren-like figure somehow told me this is the beer for you . a bit freaky , but i went with it . i was impressed from the very first pour . like any stout , the color is a dark molasses black . but … theHead was thick and dense with good retention . the caffeine aroma was intense ! the roasted goodness almost overwhelms my sense of smell .the roasted coffee flavors are the first things that i could taste along with hints of chocolate . however , i can tell there ’s more complexity here than my palette can decipher . the coffee flavors bring bitterness but it ’s not over powering as the sweetness of the malt cuts the bitterness quite nicely the beer has carbonation but once the bubbles have escaped the beer gives a creamy . velvety feel and finish . the alcohol was very well hidden in this beer which is scary …</td></tr></table>"
|
| 1099 |
+
},
|
| 1100 |
+
{
|
| 1101 |
+
"type": "table",
|
| 1102 |
+
"bbox": [
|
| 1103 |
+
0.165,
|
| 1104 |
+
0.478,
|
| 1105 |
+
0.443,
|
| 1106 |
+
0.553
|
| 1107 |
+
],
|
| 1108 |
+
"angle": 0,
|
| 1109 |
+
"content": "<table><tr><td rowspan=\"2\">Methods</td><td colspan=\"4\">Appearance</td></tr><tr><td>S</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>AGR</td><td>20</td><td>83.7</td><td>87.5</td><td>85.6</td></tr><tr><td>-w/o causal.</td><td>20</td><td>81.5</td><td>87.8</td><td>84.5</td></tr><tr><td>-w/o embedd.</td><td>20</td><td>81.9</td><td>86.9</td><td>84.3</td></tr><tr><td>-w/o both</td><td>20</td><td>74.3</td><td>85.2</td><td>79.4</td></tr></table>"
|
| 1110 |
+
},
|
| 1111 |
+
{
|
| 1112 |
+
"type": "table_caption",
|
| 1113 |
+
"bbox": [
|
| 1114 |
+
0.14,
|
| 1115 |
+
0.564,
|
| 1116 |
+
0.461,
|
| 1117 |
+
0.577
|
| 1118 |
+
],
|
| 1119 |
+
"angle": 0,
|
| 1120 |
+
"content": "Table 5: Ablation studies on the BeerAdvocate."
|
| 1121 |
+
},
|
| 1122 |
+
{
|
| 1123 |
+
"type": "text",
|
| 1124 |
+
"bbox": [
|
| 1125 |
+
0.114,
|
| 1126 |
+
0.596,
|
| 1127 |
+
0.448,
|
| 1128 |
+
0.611
|
| 1129 |
+
],
|
| 1130 |
+
"angle": 0,
|
| 1131 |
+
"content": "across all multi-aspects datasets consistently."
|
| 1132 |
+
},
|
| 1133 |
+
{
|
| 1134 |
+
"type": "text",
|
| 1135 |
+
"bbox": [
|
| 1136 |
+
0.113,
|
| 1137 |
+
0.614,
|
| 1138 |
+
0.489,
|
| 1139 |
+
0.756
|
| 1140 |
+
],
|
| 1141 |
+
"angle": 0,
|
| 1142 |
+
"content": "Ablation Studies. To further verify the effectiveness of AGR, we conduct the ablation experiments. As depicted in Table 5, removing either the optimized objective of causal effectiveness (referred to as causal.), the pretrained agent embedding (referred to as embedd.), or both, results in a notable decline in AGR's performance, underscoring the critical roles played by our proposed key components in AGR method."
|
| 1143 |
+
},
|
| 1144 |
+
{
|
| 1145 |
+
"type": "text",
|
| 1146 |
+
"bbox": [
|
| 1147 |
+
0.113,
|
| 1148 |
+
0.761,
|
| 1149 |
+
0.49,
|
| 1150 |
+
0.922
|
| 1151 |
+
],
|
| 1152 |
+
"angle": 0,
|
| 1153 |
+
"content": "Further Analyses. Firstly, we compare AGR with FR and MGR, providing the visualized examples. For example, we can observe from Table 4 that although all three methods are able to focus on the appearance aspect, FR and MGR still exhibit some degeneration (since the selective rationale still has some distance from the target prediction). However, AGR utilizes causal calculus to capture the causal variations between \\( Z_{1} \\) and \\( Z_{2} \\), as well as between \\( Z_{2} \\) and \\( Z_{3} \\), regarding the target prediction,"
|
| 1154 |
+
},
|
| 1155 |
+
{
|
| 1156 |
+
"type": "text",
|
| 1157 |
+
"bbox": [
|
| 1158 |
+
0.508,
|
| 1159 |
+
0.48,
|
| 1160 |
+
0.885,
|
| 1161 |
+
0.625
|
| 1162 |
+
],
|
| 1163 |
+
"angle": 0,
|
| 1164 |
+
"content": "thereby gradually mitigating this degeneration during the training process. The Appendix C presents more visualized examples. Secondly, similar to (Liu et al., 2023b), we also compare the complexity of AGR with other models. As shown in Table 6, we can see that the complexity of AGR has been somewhat improved compared to latest work; however, there is still room for further improvement. This will be a key focus of future research."
|
| 1165 |
+
},
|
| 1166 |
+
{
|
| 1167 |
+
"type": "table",
|
| 1168 |
+
"bbox": [
|
| 1169 |
+
0.516,
|
| 1170 |
+
0.636,
|
| 1171 |
+
0.883,
|
| 1172 |
+
0.698
|
| 1173 |
+
],
|
| 1174 |
+
"angle": 0,
|
| 1175 |
+
"content": "<table><tr><td></td><td>RNP</td><td>FR</td><td>AGR</td><td>CAR</td></tr><tr><td>modules parameters</td><td>1gen+1pred 2×</td><td>1gen+1pred 2×</td><td>1gen+1pred+1agent 3×</td><td>1gen+2pred 3×</td></tr><tr><td></td><td>DARE</td><td>CAR</td><td>DMR</td><td>MGR</td></tr><tr><td>modules parameters</td><td>1gen+1pred+guider 3×</td><td>1gen+2pred 3×</td><td>1gen+3pred 4×</td><td>3gen+1pred 4×</td></tr></table>"
|
| 1176 |
+
},
|
| 1177 |
+
{
|
| 1178 |
+
"type": "table_caption",
|
| 1179 |
+
"bbox": [
|
| 1180 |
+
0.508,
|
| 1181 |
+
0.707,
|
| 1182 |
+
0.884,
|
| 1183 |
+
0.737
|
| 1184 |
+
],
|
| 1185 |
+
"angle": 0,
|
| 1186 |
+
"content": "Table 6: The complexity of different models. \"gen\": generator. \"pred\": predictor."
|
| 1187 |
+
},
|
| 1188 |
+
{
|
| 1189 |
+
"type": "title",
|
| 1190 |
+
"bbox": [
|
| 1191 |
+
0.509,
|
| 1192 |
+
0.752,
|
| 1193 |
+
0.642,
|
| 1194 |
+
0.767
|
| 1195 |
+
],
|
| 1196 |
+
"angle": 0,
|
| 1197 |
+
"content": "6 Conclusion"
|
| 1198 |
+
},
|
| 1199 |
+
{
|
| 1200 |
+
"type": "text",
|
| 1201 |
+
"bbox": [
|
| 1202 |
+
0.508,
|
| 1203 |
+
0.777,
|
| 1204 |
+
0.885,
|
| 1205 |
+
0.92
|
| 1206 |
+
],
|
| 1207 |
+
"angle": 0,
|
| 1208 |
+
"content": "In this paper, we propose AGR, a reinforced causal agent-based rationalization approach to guide the cooperative game optimization during rationale training. Our theoretical insights underscore the necessity of this guidance signal for accurate predictions. Empirical evaluations on two widely-used benchmarks indicate the effectiveness of our proposed approach, surpassing existing state-of-the-art methods for selective rationalization."
|
| 1209 |
+
},
|
| 1210 |
+
{
|
| 1211 |
+
"type": "page_number",
|
| 1212 |
+
"bbox": [
|
| 1213 |
+
0.486,
|
| 1214 |
+
0.929,
|
| 1215 |
+
0.516,
|
| 1216 |
+
0.941
|
| 1217 |
+
],
|
| 1218 |
+
"angle": 0,
|
| 1219 |
+
"content": "514"
|
| 1220 |
+
}
|
| 1221 |
+
],
|
| 1222 |
+
[
|
| 1223 |
+
{
|
| 1224 |
+
"type": "title",
|
| 1225 |
+
"bbox": [
|
| 1226 |
+
0.115,
|
| 1227 |
+
0.085,
|
| 1228 |
+
0.221,
|
| 1229 |
+
0.099
|
| 1230 |
+
],
|
| 1231 |
+
"angle": 0,
|
| 1232 |
+
"content": "Limitations"
|
| 1233 |
+
},
|
| 1234 |
+
{
|
| 1235 |
+
"type": "text",
|
| 1236 |
+
"bbox": [
|
| 1237 |
+
0.113,
|
| 1238 |
+
0.11,
|
| 1239 |
+
0.493,
|
| 1240 |
+
0.368
|
| 1241 |
+
],
|
| 1242 |
+
"angle": 0,
|
| 1243 |
+
"content": "There are still some limitations that need further improvement in the future. Firstly, optimizing cooperative game of rationalization during training brings great significance to the model performance, but how to more efficiently search for meaningful actions within a larger search space for good rationales remains the next direction to explore. Nextly, this work does not involve the debiasing techniques of data-level. Considering the debiasing technique may be a good way to further improve the results. In addition, as the latest research (Chen et al., 2022; Liu et al., 2023a,b) has shown that it is still a challenging task to finetune pretrained language models on the cooperative game framework. Therefore, how to incorporate the cooperative framework and (large) language models is a research interest."
|
| 1244 |
+
},
|
| 1245 |
+
{
|
| 1246 |
+
"type": "title",
|
| 1247 |
+
"bbox": [
|
| 1248 |
+
0.115,
|
| 1249 |
+
0.379,
|
| 1250 |
+
0.267,
|
| 1251 |
+
0.395
|
| 1252 |
+
],
|
| 1253 |
+
"angle": 0,
|
| 1254 |
+
"content": "Ethics Statement"
|
| 1255 |
+
},
|
| 1256 |
+
{
|
| 1257 |
+
"type": "text",
|
| 1258 |
+
"bbox": [
|
| 1259 |
+
0.114,
|
| 1260 |
+
0.405,
|
| 1261 |
+
0.49,
|
| 1262 |
+
0.454
|
| 1263 |
+
],
|
| 1264 |
+
"angle": 0,
|
| 1265 |
+
"content": "This paper does not involve the presentation of a new dataset and the utilization of demographic or identity characteristics information."
|
| 1266 |
+
},
|
| 1267 |
+
{
|
| 1268 |
+
"type": "title",
|
| 1269 |
+
"bbox": [
|
| 1270 |
+
0.115,
|
| 1271 |
+
0.465,
|
| 1272 |
+
0.287,
|
| 1273 |
+
0.482
|
| 1274 |
+
],
|
| 1275 |
+
"angle": 0,
|
| 1276 |
+
"content": "Acknowledgements"
|
| 1277 |
+
},
|
| 1278 |
+
{
|
| 1279 |
+
"type": "text",
|
| 1280 |
+
"bbox": [
|
| 1281 |
+
0.113,
|
| 1282 |
+
0.491,
|
| 1283 |
+
0.49,
|
| 1284 |
+
0.603
|
| 1285 |
+
],
|
| 1286 |
+
"angle": 0,
|
| 1287 |
+
"content": "We would like to thank all the anonymous reviewers for their valuable feedback. This work was supported by the National Natural Science Foundation of China (Nos.62376144, 62272285, 62076155) and the Science and Technology Cooperation and Exchange Special Project of Shanxi Province (No.202204041101016)."
|
| 1288 |
+
},
|
| 1289 |
+
{
|
| 1290 |
+
"type": "title",
|
| 1291 |
+
"bbox": [
|
| 1292 |
+
0.115,
|
| 1293 |
+
0.63,
|
| 1294 |
+
0.215,
|
| 1295 |
+
0.645
|
| 1296 |
+
],
|
| 1297 |
+
"angle": 0,
|
| 1298 |
+
"content": "References"
|
| 1299 |
+
},
|
| 1300 |
+
{
|
| 1301 |
+
"type": "ref_text",
|
| 1302 |
+
"bbox": [
|
| 1303 |
+
0.117,
|
| 1304 |
+
0.653,
|
| 1305 |
+
0.49,
|
| 1306 |
+
0.733
|
| 1307 |
+
],
|
| 1308 |
+
"angle": 0,
|
| 1309 |
+
"content": "Yujia Bao, Shiyu Chang, Mo Yu, and Regina Barzilay. 2018. Deriving machine attention from human rationales. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1903-1913, Brussels, Belgium. Association for Computational Linguistics."
|
| 1310 |
+
},
|
| 1311 |
+
{
|
| 1312 |
+
"type": "ref_text",
|
| 1313 |
+
"bbox": [
|
| 1314 |
+
0.117,
|
| 1315 |
+
0.742,
|
| 1316 |
+
0.49,
|
| 1317 |
+
0.821
|
| 1318 |
+
],
|
| 1319 |
+
"angle": 0,
|
| 1320 |
+
"content": "Jasmijn Bastings, Wilker Aziz, and Ivan Titov. 2019. Interpretable neural predictions with differentiable binary variables. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2963-2977, Florence, Italy. Association for Computational Linguistics."
|
| 1321 |
+
},
|
| 1322 |
+
{
|
| 1323 |
+
"type": "ref_text",
|
| 1324 |
+
"bbox": [
|
| 1325 |
+
0.117,
|
| 1326 |
+
0.831,
|
| 1327 |
+
0.49,
|
| 1328 |
+
0.884
|
| 1329 |
+
],
|
| 1330 |
+
"angle": 0,
|
| 1331 |
+
"content": "Shiyu Chang, Yang Zhang, Mo Yu, and Tommi Jaakkola. 2019. A game theoretic approach to class-wise selective rationalization. Advances in neural information processing systems, 32."
|
| 1332 |
+
},
|
| 1333 |
+
{
|
| 1334 |
+
"type": "ref_text",
|
| 1335 |
+
"bbox": [
|
| 1336 |
+
0.117,
|
| 1337 |
+
0.894,
|
| 1338 |
+
0.49,
|
| 1339 |
+
0.921
|
| 1340 |
+
],
|
| 1341 |
+
"angle": 0,
|
| 1342 |
+
"content": "Shiyu Chang, Yang Zhang, Mo Yu, and Tommi Jaakkola. 2020. Invariant rationalization. In International"
|
| 1343 |
+
},
|
| 1344 |
+
{
|
| 1345 |
+
"type": "list",
|
| 1346 |
+
"bbox": [
|
| 1347 |
+
0.117,
|
| 1348 |
+
0.653,
|
| 1349 |
+
0.49,
|
| 1350 |
+
0.921
|
| 1351 |
+
],
|
| 1352 |
+
"angle": 0,
|
| 1353 |
+
"content": null
|
| 1354 |
+
},
|
| 1355 |
+
{
|
| 1356 |
+
"type": "ref_text",
|
| 1357 |
+
"bbox": [
|
| 1358 |
+
0.529,
|
| 1359 |
+
0.086,
|
| 1360 |
+
0.884,
|
| 1361 |
+
0.113
|
| 1362 |
+
],
|
| 1363 |
+
"angle": 0,
|
| 1364 |
+
"content": "Conference on Machine Learning, pages 1448-1458. PMLR."
|
| 1365 |
+
},
|
| 1366 |
+
{
|
| 1367 |
+
"type": "ref_text",
|
| 1368 |
+
"bbox": [
|
| 1369 |
+
0.512,
|
| 1370 |
+
0.127,
|
| 1371 |
+
0.885,
|
| 1372 |
+
0.22
|
| 1373 |
+
],
|
| 1374 |
+
"angle": 0,
|
| 1375 |
+
"content": "Howard Chen, Jacqueline He, Karthik Narasimhan, and Danqi Chen. 2022. Can rationalization improve robustness? In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3792-3805, Seattle, United States. Association for Computational Linguistics."
|
| 1376 |
+
},
|
| 1377 |
+
{
|
| 1378 |
+
"type": "ref_text",
|
| 1379 |
+
"bbox": [
|
| 1380 |
+
0.512,
|
| 1381 |
+
0.232,
|
| 1382 |
+
0.885,
|
| 1383 |
+
0.351
|
| 1384 |
+
],
|
| 1385 |
+
"angle": 0,
|
| 1386 |
+
"content": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar. Association for Computational Linguistics."
|
| 1387 |
+
},
|
| 1388 |
+
{
|
| 1389 |
+
"type": "ref_text",
|
| 1390 |
+
"bbox": [
|
| 1391 |
+
0.512,
|
| 1392 |
+
0.364,
|
| 1393 |
+
0.885,
|
| 1394 |
+
0.457
|
| 1395 |
+
],
|
| 1396 |
+
"angle": 0,
|
| 1397 |
+
"content": "Nicola De Cao, Michael Sejr Schlichtkrull, Wilker Aziz, and Ivan Titov. 2020. How do decisions emerge across layers in neural models? interpretation with differentiable masking. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3243-3255, Online. Association for Computational Linguistics."
|
| 1398 |
+
},
|
| 1399 |
+
{
|
| 1400 |
+
"type": "ref_text",
|
| 1401 |
+
"bbox": [
|
| 1402 |
+
0.512,
|
| 1403 |
+
0.47,
|
| 1404 |
+
0.885,
|
| 1405 |
+
0.537
|
| 1406 |
+
],
|
| 1407 |
+
"angle": 0,
|
| 1408 |
+
"content": "Zhiying Deng, Jianjun Li, Zhiqiang Guo, and Guohui Li. 2023. Multi-aspect interest neighbor-augmented network for next-basket recommendation. ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5."
|
| 1409 |
+
},
|
| 1410 |
+
{
|
| 1411 |
+
"type": "ref_text",
|
| 1412 |
+
"bbox": [
|
| 1413 |
+
0.512,
|
| 1414 |
+
0.549,
|
| 1415 |
+
0.884,
|
| 1416 |
+
0.591
|
| 1417 |
+
],
|
| 1418 |
+
"angle": 0,
|
| 1419 |
+
"content": "Yongfeng Huang, Yujun Chen, Yulun Du, and Zhilin Yang. 2021. Distribution matching for rationalization. In AAAI Conference on Artificial Intelligence."
|
| 1420 |
+
},
|
| 1421 |
+
{
|
| 1422 |
+
"type": "ref_text",
|
| 1423 |
+
"bbox": [
|
| 1424 |
+
0.512,
|
| 1425 |
+
0.603,
|
| 1426 |
+
0.884,
|
| 1427 |
+
0.657
|
| 1428 |
+
],
|
| 1429 |
+
"angle": 0,
|
| 1430 |
+
"content": "Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), San Diego, CA, USA."
|
| 1431 |
+
},
|
| 1432 |
+
{
|
| 1433 |
+
"type": "ref_text",
|
| 1434 |
+
"bbox": [
|
| 1435 |
+
0.512,
|
| 1436 |
+
0.67,
|
| 1437 |
+
0.885,
|
| 1438 |
+
0.737
|
| 1439 |
+
],
|
| 1440 |
+
"angle": 0,
|
| 1441 |
+
"content": "Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107-117, Austin, Texas. Association for Computational Linguistics."
|
| 1442 |
+
},
|
| 1443 |
+
{
|
| 1444 |
+
"type": "ref_text",
|
| 1445 |
+
"bbox": [
|
| 1446 |
+
0.512,
|
| 1447 |
+
0.749,
|
| 1448 |
+
0.885,
|
| 1449 |
+
0.815
|
| 1450 |
+
],
|
| 1451 |
+
"angle": 0,
|
| 1452 |
+
"content": "Wei Liu, Haozhao Wang, Jun Wang, Zhiying Deng, YuanKai Zhang, Cheng Wang, and Ruixuan Li. 2023a. Enhancing the rationale-input alignment for self-explaining rationalization. arXiv preprint arXiv:2312.04103."
|
| 1453 |
+
},
|
| 1454 |
+
{
|
| 1455 |
+
"type": "ref_text",
|
| 1456 |
+
"bbox": [
|
| 1457 |
+
0.512,
|
| 1458 |
+
0.828,
|
| 1459 |
+
0.885,
|
| 1460 |
+
0.921
|
| 1461 |
+
],
|
| 1462 |
+
"angle": 0,
|
| 1463 |
+
"content": "Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Xinyang Li, YuanKai Zhang, and Yang Qiu. 2023b. MGR: Multi-generator based rationalization. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12771-12787, Toronto, Canada. Association for Computational Linguistics."
|
| 1464 |
+
},
|
| 1465 |
+
{
|
| 1466 |
+
"type": "list",
|
| 1467 |
+
"bbox": [
|
| 1468 |
+
0.512,
|
| 1469 |
+
0.086,
|
| 1470 |
+
0.885,
|
| 1471 |
+
0.921
|
| 1472 |
+
],
|
| 1473 |
+
"angle": 0,
|
| 1474 |
+
"content": null
|
| 1475 |
+
},
|
| 1476 |
+
{
|
| 1477 |
+
"type": "page_number",
|
| 1478 |
+
"bbox": [
|
| 1479 |
+
0.486,
|
| 1480 |
+
0.929,
|
| 1481 |
+
0.516,
|
| 1482 |
+
0.941
|
| 1483 |
+
],
|
| 1484 |
+
"angle": 0,
|
| 1485 |
+
"content": "515"
|
| 1486 |
+
}
|
| 1487 |
+
],
|
| 1488 |
+
[
|
| 1489 |
+
{
|
| 1490 |
+
"type": "ref_text",
|
| 1491 |
+
"bbox": [
|
| 1492 |
+
0.117,
|
| 1493 |
+
0.086,
|
| 1494 |
+
0.487,
|
| 1495 |
+
0.139
|
| 1496 |
+
],
|
| 1497 |
+
"angle": 0,
|
| 1498 |
+
"content": "Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Chao Yue, and YuanKai Zhang. 2022. Fr: Folded rationalization with a unified encoder. Advances in Neural Information Processing Systems, 35:6954-6966."
|
| 1499 |
+
},
|
| 1500 |
+
{
|
| 1501 |
+
"type": "ref_text",
|
| 1502 |
+
"bbox": [
|
| 1503 |
+
0.117,
|
| 1504 |
+
0.15,
|
| 1505 |
+
0.487,
|
| 1506 |
+
0.215
|
| 1507 |
+
],
|
| 1508 |
+
"angle": 0,
|
| 1509 |
+
"content": "Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Zhiying Deng, YuanKai Zhang, and Yang Qiu. 2023c. D-separation for causal self-explanation. In Thirty-seventh Conference on Neural Information Processing Systems."
|
| 1510 |
+
},
|
| 1511 |
+
{
|
| 1512 |
+
"type": "ref_text",
|
| 1513 |
+
"bbox": [
|
| 1514 |
+
0.117,
|
| 1515 |
+
0.227,
|
| 1516 |
+
0.486,
|
| 1517 |
+
0.317
|
| 1518 |
+
],
|
| 1519 |
+
"angle": 0,
|
| 1520 |
+
"content": "Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Yang Qiu, Yuankai Zhang, Jie Han, and Yixiong Zou. 2023d. Decoupled rationalization with asymmetric learning rates: A flexible lipschitz restraint. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 1535-1547."
|
| 1521 |
+
},
|
| 1522 |
+
{
|
| 1523 |
+
"type": "ref_text",
|
| 1524 |
+
"bbox": [
|
| 1525 |
+
0.117,
|
| 1526 |
+
0.33,
|
| 1527 |
+
0.487,
|
| 1528 |
+
0.383
|
| 1529 |
+
],
|
| 1530 |
+
"angle": 0,
|
| 1531 |
+
"content": "Julian McAuley, Jure Leskovec, and Dan Jurafsky. 2012. Learning attitudes and attributes from multi-aspect reviews. 2012 IEEE 12th International Conference on Data Mining, pages 1020-1025."
|
| 1532 |
+
},
|
| 1533 |
+
{
|
| 1534 |
+
"type": "ref_text",
|
| 1535 |
+
"bbox": [
|
| 1536 |
+
0.117,
|
| 1537 |
+
0.394,
|
| 1538 |
+
0.487,
|
| 1539 |
+
0.485
|
| 1540 |
+
],
|
| 1541 |
+
"angle": 0,
|
| 1542 |
+
"content": "Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. An information bottleneck approach for controlling conciseness in rationale extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 1938-1952, Online. Association for Computational Linguistics."
|
| 1543 |
+
},
|
| 1544 |
+
{
|
| 1545 |
+
"type": "ref_text",
|
| 1546 |
+
"bbox": [
|
| 1547 |
+
0.117,
|
| 1548 |
+
0.497,
|
| 1549 |
+
0.486,
|
| 1550 |
+
0.524
|
| 1551 |
+
],
|
| 1552 |
+
"angle": 0,
|
| 1553 |
+
"content": "Judea Pearl. 2009. Causality. Cambridge university press."
|
| 1554 |
+
},
|
| 1555 |
+
{
|
| 1556 |
+
"type": "ref_text",
|
| 1557 |
+
"bbox": [
|
| 1558 |
+
0.117,
|
| 1559 |
+
0.535,
|
| 1560 |
+
0.487,
|
| 1561 |
+
0.574
|
| 1562 |
+
],
|
| 1563 |
+
"angle": 0,
|
| 1564 |
+
"content": "Judea Pearl, Madelyn Glymour, and Nicholas P Jewell. 2016. Causal inference in statistics: A primer. John Wiley & Sons."
|
| 1565 |
+
},
|
| 1566 |
+
{
|
| 1567 |
+
"type": "ref_text",
|
| 1568 |
+
"bbox": [
|
| 1569 |
+
0.117,
|
| 1570 |
+
0.586,
|
| 1571 |
+
0.487,
|
| 1572 |
+
0.664
|
| 1573 |
+
],
|
| 1574 |
+
"angle": 0,
|
| 1575 |
+
"content": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics."
|
| 1576 |
+
},
|
| 1577 |
+
{
|
| 1578 |
+
"type": "ref_text",
|
| 1579 |
+
"bbox": [
|
| 1580 |
+
0.117,
|
| 1581 |
+
0.675,
|
| 1582 |
+
0.487,
|
| 1583 |
+
0.741
|
| 1584 |
+
],
|
| 1585 |
+
"angle": 0,
|
| 1586 |
+
"content": "Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour. 1999. Policy gradient methods for reinforcement learning with function approximation. Advances in neural information processing systems, 12."
|
| 1587 |
+
},
|
| 1588 |
+
{
|
| 1589 |
+
"type": "ref_text",
|
| 1590 |
+
"bbox": [
|
| 1591 |
+
0.117,
|
| 1592 |
+
0.752,
|
| 1593 |
+
0.487,
|
| 1594 |
+
0.844
|
| 1595 |
+
],
|
| 1596 |
+
"angle": 0,
|
| 1597 |
+
"content": "Hongning Wang, Yue Lu, and Chengxiang Zhai. 2010. Latent aspect rating analysis on review text data: A rating regression approach. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '10, page 783-792, New York, NY, USA. Association for Computing Machinery."
|
| 1598 |
+
},
|
| 1599 |
+
{
|
| 1600 |
+
"type": "ref_text",
|
| 1601 |
+
"bbox": [
|
| 1602 |
+
0.117,
|
| 1603 |
+
0.855,
|
| 1604 |
+
0.487,
|
| 1605 |
+
0.921
|
| 1606 |
+
],
|
| 1607 |
+
"angle": 0,
|
| 1608 |
+
"content": "Mo Yu, Shiyu Chang, Yang Zhang, and Tommi S Jaakkola. 2019. Rethinking cooperative rationalization: Introspective extraction and complement control. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing."
|
| 1609 |
+
},
|
| 1610 |
+
{
|
| 1611 |
+
"type": "list",
|
| 1612 |
+
"bbox": [
|
| 1613 |
+
0.117,
|
| 1614 |
+
0.086,
|
| 1615 |
+
0.487,
|
| 1616 |
+
0.921
|
| 1617 |
+
],
|
| 1618 |
+
"angle": 0,
|
| 1619 |
+
"content": null
|
| 1620 |
+
},
|
| 1621 |
+
{
|
| 1622 |
+
"type": "ref_text",
|
| 1623 |
+
"bbox": [
|
| 1624 |
+
0.511,
|
| 1625 |
+
0.086,
|
| 1626 |
+
0.882,
|
| 1627 |
+
0.139
|
| 1628 |
+
],
|
| 1629 |
+
"angle": 0,
|
| 1630 |
+
"content": "Mo Yu, Yang Zhang, Shiyu Chang, and Tommi Jaakkola. 2021. Understanding interlocking dynamics of cooperative rationalization. Advances in Neural Information Processing Systems, 34:12822-12835."
|
| 1631 |
+
},
|
| 1632 |
+
{
|
| 1633 |
+
"type": "ref_text",
|
| 1634 |
+
"bbox": [
|
| 1635 |
+
0.511,
|
| 1636 |
+
0.149,
|
| 1637 |
+
0.882,
|
| 1638 |
+
0.201
|
| 1639 |
+
],
|
| 1640 |
+
"angle": 0,
|
| 1641 |
+
"content": "Hao Yuan, Lei Cai, Xia Hu, Jie Wang, and Shuiwang Ji. 2020. Interpreting image classifiers by generating discrete masks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(4)."
|
| 1642 |
+
},
|
| 1643 |
+
{
|
| 1644 |
+
"type": "ref_text",
|
| 1645 |
+
"bbox": [
|
| 1646 |
+
0.511,
|
| 1647 |
+
0.212,
|
| 1648 |
+
0.882,
|
| 1649 |
+
0.264
|
| 1650 |
+
],
|
| 1651 |
+
"angle": 0,
|
| 1652 |
+
"content": "Linan Yue, Qi Liu, Yichao Du, Yanqing An, Li Wang, and Enhong Chen. 2022. Dare: Disentanglement-augmented rationale extraction. Advances in Neural Information Processing Systems, 35:26603-26617."
|
| 1653 |
+
},
|
| 1654 |
+
{
|
| 1655 |
+
"type": "ref_text",
|
| 1656 |
+
"bbox": [
|
| 1657 |
+
0.511,
|
| 1658 |
+
0.274,
|
| 1659 |
+
0.882,
|
| 1660 |
+
0.353
|
| 1661 |
+
],
|
| 1662 |
+
"angle": 0,
|
| 1663 |
+
"content": "Linan Yue, Qi Liu, Li Wang, Yanqing An, Yichao Du, and Zhenya Huang. 2023. Interventional rationalization. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 11404-11418, Singapore. Association for Computational Linguistics."
|
| 1664 |
+
},
|
| 1665 |
+
{
|
| 1666 |
+
"type": "ref_text",
|
| 1667 |
+
"bbox": [
|
| 1668 |
+
0.511,
|
| 1669 |
+
0.363,
|
| 1670 |
+
0.882,
|
| 1671 |
+
0.429
|
| 1672 |
+
],
|
| 1673 |
+
"angle": 0,
|
| 1674 |
+
"content": "Wenbo Zhang, Tong Wu, Yunlong Wang, Yong Cai, and Hengrui Cai. 2023. Towards trustworthy explanation: on causal rationalization. In Proceedings of the 40th International Conference on Machine Learning. JMLR.org."
|
| 1675 |
+
},
|
| 1676 |
+
{
|
| 1677 |
+
"type": "list",
|
| 1678 |
+
"bbox": [
|
| 1679 |
+
0.511,
|
| 1680 |
+
0.086,
|
| 1681 |
+
0.882,
|
| 1682 |
+
0.429
|
| 1683 |
+
],
|
| 1684 |
+
"angle": 0,
|
| 1685 |
+
"content": null
|
| 1686 |
+
},
|
| 1687 |
+
{
|
| 1688 |
+
"type": "page_number",
|
| 1689 |
+
"bbox": [
|
| 1690 |
+
0.486,
|
| 1691 |
+
0.929,
|
| 1692 |
+
0.516,
|
| 1693 |
+
0.94
|
| 1694 |
+
],
|
| 1695 |
+
"angle": 0,
|
| 1696 |
+
"content": "516"
|
| 1697 |
+
}
|
| 1698 |
+
],
|
| 1699 |
+
[
|
| 1700 |
+
{
|
| 1701 |
+
"type": "title",
|
| 1702 |
+
"bbox": [
|
| 1703 |
+
0.115,
|
| 1704 |
+
0.085,
|
| 1705 |
+
0.31,
|
| 1706 |
+
0.1
|
| 1707 |
+
],
|
| 1708 |
+
"angle": 0,
|
| 1709 |
+
"content": "A Proof of Lemma 1"
|
| 1710 |
+
},
|
| 1711 |
+
{
|
| 1712 |
+
"type": "text",
|
| 1713 |
+
"bbox": [
|
| 1714 |
+
0.114,
|
| 1715 |
+
0.106,
|
| 1716 |
+
0.489,
|
| 1717 |
+
0.155
|
| 1718 |
+
],
|
| 1719 |
+
"angle": 0,
|
| 1720 |
+
"content": "Given random variables \\(X, Z, Y\\), and \\(\\mathcal{A}\\), where \\(\\mathcal{A}\\) is drawn from the distribution of \\(X\\). According to Section 2, to obtain a good predictor, we have"
|
| 1721 |
+
},
|
| 1722 |
+
{
|
| 1723 |
+
"type": "equation",
|
| 1724 |
+
"bbox": [
|
| 1725 |
+
0.152,
|
| 1726 |
+
0.165,
|
| 1727 |
+
0.488,
|
| 1728 |
+
0.192
|
| 1729 |
+
],
|
| 1730 |
+
"angle": 0,
|
| 1731 |
+
"content": "\\[\n\\min _ {\\theta_ {g}, \\theta_ {p}} \\mathcal {H} (Y, \\hat {Y}) = \\min _ {\\theta_ {g}, \\theta_ {p}} \\mathcal {H} (Y, f _ {P} (Z)), \\tag {15}\n\\]"
|
| 1732 |
+
},
|
| 1733 |
+
{
|
| 1734 |
+
"type": "text",
|
| 1735 |
+
"bbox": [
|
| 1736 |
+
0.114,
|
| 1737 |
+
0.195,
|
| 1738 |
+
0.489,
|
| 1739 |
+
0.291
|
| 1740 |
+
],
|
| 1741 |
+
"angle": 0,
|
| 1742 |
+
"content": "where \\( Z = f_{G}(X) \\). It means that we need to minimize \\( H(Y,Z) \\) (Liu et al., 2023b), i.e., to reduce more uncertainty and indicate the label \\( Y \\). We assume that exist variable \\( \\mathcal{A} \\) could make to reduce the uncertainty of learning \\( Y \\), then our goal is to make \\( H(Y,\\mathcal{A})\\leq H(Y,Z) \\)."
|
| 1743 |
+
},
|
| 1744 |
+
{
|
| 1745 |
+
"type": "text",
|
| 1746 |
+
"bbox": [
|
| 1747 |
+
0.114,
|
| 1748 |
+
0.292,
|
| 1749 |
+
0.489,
|
| 1750 |
+
0.322
|
| 1751 |
+
],
|
| 1752 |
+
"angle": 0,
|
| 1753 |
+
"content": "According to the mutual information formula, we can obtain:"
|
| 1754 |
+
},
|
| 1755 |
+
{
|
| 1756 |
+
"type": "equation",
|
| 1757 |
+
"bbox": [
|
| 1758 |
+
0.14,
|
| 1759 |
+
0.335,
|
| 1760 |
+
0.488,
|
| 1761 |
+
0.352
|
| 1762 |
+
],
|
| 1763 |
+
"angle": 0,
|
| 1764 |
+
"content": "\\[\nH (Y) - H (Y, \\mathcal {A}) \\geq H (Y) - H (Y, Z), \\tag {16}\n\\]"
|
| 1765 |
+
},
|
| 1766 |
+
{
|
| 1767 |
+
"type": "text",
|
| 1768 |
+
"bbox": [
|
| 1769 |
+
0.115,
|
| 1770 |
+
0.359,
|
| 1771 |
+
0.14,
|
| 1772 |
+
0.37
|
| 1773 |
+
],
|
| 1774 |
+
"angle": 0,
|
| 1775 |
+
"content": "so,"
|
| 1776 |
+
},
|
| 1777 |
+
{
|
| 1778 |
+
"type": "equation",
|
| 1779 |
+
"bbox": [
|
| 1780 |
+
0.227,
|
| 1781 |
+
0.372,
|
| 1782 |
+
0.487,
|
| 1783 |
+
0.388
|
| 1784 |
+
],
|
| 1785 |
+
"angle": 0,
|
| 1786 |
+
"content": "\\[\nI (Y, \\mathcal {A}) \\geq I (Y, Z). \\tag {17}\n\\]"
|
| 1787 |
+
},
|
| 1788 |
+
{
|
| 1789 |
+
"type": "text",
|
| 1790 |
+
"bbox": [
|
| 1791 |
+
0.114,
|
| 1792 |
+
0.389,
|
| 1793 |
+
0.487,
|
| 1794 |
+
0.451
|
| 1795 |
+
],
|
| 1796 |
+
"angle": 0,
|
| 1797 |
+
"content": "Next, since we have \\( X = \\{Z, X \\backslash Z\\} \\) where \\( X \\backslash Z \\) denotes the text derived from \\( X \\) and unrelated to the rationale, so we can obtain mutual information between \\( X \\) and \\( Y \\),"
|
| 1798 |
+
},
|
| 1799 |
+
{
|
| 1800 |
+
"type": "equation",
|
| 1801 |
+
"bbox": [
|
| 1802 |
+
0.192,
|
| 1803 |
+
0.462,
|
| 1804 |
+
0.487,
|
| 1805 |
+
0.498
|
| 1806 |
+
],
|
| 1807 |
+
"angle": 0,
|
| 1808 |
+
"content": "\\[\n\\begin{array}{l} I (Y; X) = I (Y; \\{Z, X \\backslash Z \\}) \\\\ = I (Y; Z) + I (Y; X \\backslash Z | Z) \\tag {18} \\\\ \\end{array}\n\\]"
|
| 1809 |
+
},
|
| 1810 |
+
{
|
| 1811 |
+
"type": "text",
|
| 1812 |
+
"bbox": [
|
| 1813 |
+
0.115,
|
| 1814 |
+
0.501,
|
| 1815 |
+
0.489,
|
| 1816 |
+
0.533
|
| 1817 |
+
],
|
| 1818 |
+
"angle": 0,
|
| 1819 |
+
"content": "According to the non-negativity of mutual information, we have \\( I(Y;X\\backslash Z|Z)\\geq 0 \\), so"
|
| 1820 |
+
},
|
| 1821 |
+
{
|
| 1822 |
+
"type": "equation",
|
| 1823 |
+
"bbox": [
|
| 1824 |
+
0.229,
|
| 1825 |
+
0.544,
|
| 1826 |
+
0.487,
|
| 1827 |
+
0.561
|
| 1828 |
+
],
|
| 1829 |
+
"angle": 0,
|
| 1830 |
+
"content": "\\[\nI (Y, X) \\geq I (Y, Z) \\tag {19}\n\\]"
|
| 1831 |
+
},
|
| 1832 |
+
{
|
| 1833 |
+
"type": "text",
|
| 1834 |
+
"bbox": [
|
| 1835 |
+
0.115,
|
| 1836 |
+
0.566,
|
| 1837 |
+
0.488,
|
| 1838 |
+
0.613
|
| 1839 |
+
],
|
| 1840 |
+
"angle": 0,
|
| 1841 |
+
"content": "Further, we denote \\( I(Y,X) = \\varepsilon_0 \\geq \\varepsilon_1 \\geq I(Y,Z) \\geq \\varepsilon_2 \\), where \\( \\varepsilon_1 \\) and \\( \\varepsilon_2 \\) indicate the upper and lower bounds of \\( I(Y,Z) \\), respectively."
|
| 1842 |
+
},
|
| 1843 |
+
{
|
| 1844 |
+
"type": "text",
|
| 1845 |
+
"bbox": [
|
| 1846 |
+
0.114,
|
| 1847 |
+
0.615,
|
| 1848 |
+
0.489,
|
| 1849 |
+
0.676
|
| 1850 |
+
],
|
| 1851 |
+
"angle": 0,
|
| 1852 |
+
"content": "Therefore, we can obtain that when \\(\\mathcal{A} = X\\), the equation \\(I(Y,\\mathcal{A}) = \\varepsilon_0 \\geq \\varepsilon_1 \\geq I(Y,Z)\\) is satisfied. That is to say, a solution for \\(\\mathcal{A}\\) exists, and \\(X\\) is a solution of \\(\\mathcal{A}\\)."
|
| 1853 |
+
},
|
| 1854 |
+
{
|
| 1855 |
+
"type": "text",
|
| 1856 |
+
"bbox": [
|
| 1857 |
+
0.133,
|
| 1858 |
+
0.679,
|
| 1859 |
+
0.407,
|
| 1860 |
+
0.695
|
| 1861 |
+
],
|
| 1862 |
+
"angle": 0,
|
| 1863 |
+
"content": "The proof of Lemma 1 is completed."
|
| 1864 |
+
},
|
| 1865 |
+
{
|
| 1866 |
+
"type": "title",
|
| 1867 |
+
"bbox": [
|
| 1868 |
+
0.115,
|
| 1869 |
+
0.706,
|
| 1870 |
+
0.32,
|
| 1871 |
+
0.722
|
| 1872 |
+
],
|
| 1873 |
+
"angle": 0,
|
| 1874 |
+
"content": "B Experiment Details"
|
| 1875 |
+
},
|
| 1876 |
+
{
|
| 1877 |
+
"type": "title",
|
| 1878 |
+
"bbox": [
|
| 1879 |
+
0.115,
|
| 1880 |
+
0.728,
|
| 1881 |
+
0.237,
|
| 1882 |
+
0.741
|
| 1883 |
+
],
|
| 1884 |
+
"angle": 0,
|
| 1885 |
+
"content": "B.1Baselines"
|
| 1886 |
+
},
|
| 1887 |
+
{
|
| 1888 |
+
"type": "text",
|
| 1889 |
+
"bbox": [
|
| 1890 |
+
0.114,
|
| 1891 |
+
0.745,
|
| 1892 |
+
0.469,
|
| 1893 |
+
0.776
|
| 1894 |
+
],
|
| 1895 |
+
"angle": 0,
|
| 1896 |
+
"content": "We compare AGR with the following baselines: RNP (2016), a original RNP sampling method."
|
| 1897 |
+
},
|
| 1898 |
+
{
|
| 1899 |
+
"type": "text",
|
| 1900 |
+
"bbox": [
|
| 1901 |
+
0.114,
|
| 1902 |
+
0.777,
|
| 1903 |
+
0.488,
|
| 1904 |
+
0.808
|
| 1905 |
+
],
|
| 1906 |
+
"angle": 0,
|
| 1907 |
+
"content": "HardKuma (2019), a kumaraswamy-distribution-based sampling method."
|
| 1908 |
+
},
|
| 1909 |
+
{
|
| 1910 |
+
"type": "text",
|
| 1911 |
+
"bbox": [
|
| 1912 |
+
0.114,
|
| 1913 |
+
0.81,
|
| 1914 |
+
0.487,
|
| 1915 |
+
0.841
|
| 1916 |
+
],
|
| 1917 |
+
"angle": 0,
|
| 1918 |
+
"content": "CAR (2019), a game theoretic-based approach to class-dependent rationalization."
|
| 1919 |
+
},
|
| 1920 |
+
{
|
| 1921 |
+
"type": "text",
|
| 1922 |
+
"bbox": [
|
| 1923 |
+
0.114,
|
| 1924 |
+
0.842,
|
| 1925 |
+
0.488,
|
| 1926 |
+
0.889
|
| 1927 |
+
],
|
| 1928 |
+
"angle": 0,
|
| 1929 |
+
"content": "Information Bottleneck (IB) (2020), a model utilizing IB objective for balancing performance and rationale length."
|
| 1930 |
+
},
|
| 1931 |
+
{
|
| 1932 |
+
"type": "text",
|
| 1933 |
+
"bbox": [
|
| 1934 |
+
0.114,
|
| 1935 |
+
0.89,
|
| 1936 |
+
0.487,
|
| 1937 |
+
0.922
|
| 1938 |
+
],
|
| 1939 |
+
"angle": 0,
|
| 1940 |
+
"content": "INVRAT (2020), a method that introduces an environment-agnostic predictor."
|
| 1941 |
+
},
|
| 1942 |
+
{
|
| 1943 |
+
"type": "table",
|
| 1944 |
+
"bbox": [
|
| 1945 |
+
0.516,
|
| 1946 |
+
0.082,
|
| 1947 |
+
0.879,
|
| 1948 |
+
0.167
|
| 1949 |
+
],
|
| 1950 |
+
"angle": 0,
|
| 1951 |
+
"content": "<table><tr><td rowspan=\"2\" colspan=\"2\">Datasets</td><td colspan=\"2\">Train</td><td colspan=\"2\">Dev</td><td colspan=\"2\">Annotation</td></tr><tr><td>Pos</td><td>Neg</td><td>Pos</td><td>Neg</td><td>Pos</td><td>Neg</td></tr><tr><td rowspan=\"3\">BeerAdvocate</td><td>Appearance</td><td>202385</td><td>12897</td><td>28488</td><td>1318</td><td>923</td><td>13</td></tr><tr><td>Aroma</td><td>172299</td><td>30564</td><td>24494</td><td>3396</td><td>848</td><td>29</td></tr><tr><td>Palate</td><td>176038</td><td>27639</td><td>24837</td><td>3203</td><td>785</td><td>20</td></tr><tr><td rowspan=\"3\">HotelReview</td><td>Location</td><td>7236</td><td>7236</td><td>906</td><td>906</td><td>104</td><td>96</td></tr><tr><td>Service</td><td>50742</td><td>50742</td><td>6344</td><td>6344</td><td>101</td><td>99</td></tr><tr><td>Cleanliness</td><td>75049</td><td>75049</td><td>9382</td><td>9382</td><td>99</td><td>101</td></tr></table>"
|
| 1952 |
+
},
|
| 1953 |
+
{
|
| 1954 |
+
"type": "table_caption",
|
| 1955 |
+
"bbox": [
|
| 1956 |
+
0.532,
|
| 1957 |
+
0.176,
|
| 1958 |
+
0.857,
|
| 1959 |
+
0.191
|
| 1960 |
+
],
|
| 1961 |
+
"angle": 0,
|
| 1962 |
+
"content": "Table 7: Statistics of datasets used in this paper."
|
| 1963 |
+
},
|
| 1964 |
+
{
|
| 1965 |
+
"type": "text",
|
| 1966 |
+
"bbox": [
|
| 1967 |
+
0.508,
|
| 1968 |
+
0.2,
|
| 1969 |
+
0.882,
|
| 1970 |
+
0.232
|
| 1971 |
+
],
|
| 1972 |
+
"angle": 0,
|
| 1973 |
+
"content": "DMR (2021), which proposes a teacher-student distillation framework to align input distribution."
|
| 1974 |
+
},
|
| 1975 |
+
{
|
| 1976 |
+
"type": "text",
|
| 1977 |
+
"bbox": [
|
| 1978 |
+
0.508,
|
| 1979 |
+
0.233,
|
| 1980 |
+
0.884,
|
| 1981 |
+
0.264
|
| 1982 |
+
],
|
| 1983 |
+
"angle": 0,
|
| 1984 |
+
"content": "A2R (2021), a method that introducing a soft rationale to predictor."
|
| 1985 |
+
},
|
| 1986 |
+
{
|
| 1987 |
+
"type": "text",
|
| 1988 |
+
"bbox": [
|
| 1989 |
+
0.508,
|
| 1990 |
+
0.265,
|
| 1991 |
+
0.884,
|
| 1992 |
+
0.312
|
| 1993 |
+
],
|
| 1994 |
+
"angle": 0,
|
| 1995 |
+
"content": "DARE (2022), which introduces a guider into predictor to encapsulate more information from the input."
|
| 1996 |
+
},
|
| 1997 |
+
{
|
| 1998 |
+
"type": "text",
|
| 1999 |
+
"bbox": [
|
| 2000 |
+
0.509,
|
| 2001 |
+
0.313,
|
| 2002 |
+
0.882,
|
| 2003 |
+
0.345
|
| 2004 |
+
],
|
| 2005 |
+
"angle": 0,
|
| 2006 |
+
"content": "FR (2022), a method using a unified encoder for generator and predictor."
|
| 2007 |
+
},
|
| 2008 |
+
{
|
| 2009 |
+
"type": "text",
|
| 2010 |
+
"bbox": [
|
| 2011 |
+
0.508,
|
| 2012 |
+
0.346,
|
| 2013 |
+
0.884,
|
| 2014 |
+
0.392
|
| 2015 |
+
],
|
| 2016 |
+
"angle": 0,
|
| 2017 |
+
"content": "Inter-RAT (2023), which develops an interventional rationalization to discover the causal rationales."
|
| 2018 |
+
},
|
| 2019 |
+
{
|
| 2020 |
+
"type": "text",
|
| 2021 |
+
"bbox": [
|
| 2022 |
+
0.508,
|
| 2023 |
+
0.394,
|
| 2024 |
+
0.884,
|
| 2025 |
+
0.424
|
| 2026 |
+
],
|
| 2027 |
+
"angle": 0,
|
| 2028 |
+
"content": "MGR (2023b), a method leveraging multiple generators to select rationales."
|
| 2029 |
+
},
|
| 2030 |
+
{
|
| 2031 |
+
"type": "title",
|
| 2032 |
+
"bbox": [
|
| 2033 |
+
0.51,
|
| 2034 |
+
0.437,
|
| 2035 |
+
0.627,
|
| 2036 |
+
0.45
|
| 2037 |
+
],
|
| 2038 |
+
"angle": 0,
|
| 2039 |
+
"content": "B.2 Datasets"
|
| 2040 |
+
},
|
| 2041 |
+
{
|
| 2042 |
+
"type": "text",
|
| 2043 |
+
"bbox": [
|
| 2044 |
+
0.508,
|
| 2045 |
+
0.455,
|
| 2046 |
+
0.884,
|
| 2047 |
+
0.567
|
| 2048 |
+
],
|
| 2049 |
+
"angle": 0,
|
| 2050 |
+
"content": "Following previous research (Huang et al., 2021; Yue et al., 2023; Liu et al., 2023b), we obtain BeerAdvocate and HotelReview datasets. BeerAdvocate (McAuley et al., 2012) and HotelReview (Wang et al., 2010) are publicly available from existing work. As shown in Table 7, the specific splitting details of the two datasets are presented."
|
| 2051 |
+
},
|
| 2052 |
+
{
|
| 2053 |
+
"type": "title",
|
| 2054 |
+
"bbox": [
|
| 2055 |
+
0.51,
|
| 2056 |
+
0.578,
|
| 2057 |
+
0.686,
|
| 2058 |
+
0.592
|
| 2059 |
+
],
|
| 2060 |
+
"angle": 0,
|
| 2061 |
+
"content": "B.3 Implementation"
|
| 2062 |
+
},
|
| 2063 |
+
{
|
| 2064 |
+
"type": "text",
|
| 2065 |
+
"bbox": [
|
| 2066 |
+
0.508,
|
| 2067 |
+
0.597,
|
| 2068 |
+
0.885,
|
| 2069 |
+
0.821
|
| 2070 |
+
],
|
| 2071 |
+
"angle": 0,
|
| 2072 |
+
"content": "To fairly compare with previous works and validate the effectiveness of the approach proposed, we utilize the 100-dimension Glove (Pennington et al., 2014) as the word embedding and the 200-dimension GRUs (Cho et al., 2014) encoder to build the generator \\( f_{G}(\\cdot) \\) in the AGR architecture. Further generator \\( f_{G}(\\cdot) \\) follows Equation 1 for cooperative optimization with predictor \\( f_{P}(\\cdot) \\). Meanwhile, we construct the policy network \\( q_{\\phi}(\\cdot) \\) to collaborate with the generator \\( f_{G}(\\cdot) \\) and predictor \\( f_{P}(\\cdot) \\) to learn candidate actions in different training states, including the representation learning of action candidates and the sampling of actions. We use Adam (Kingma and Ba, 2015) as the optimizer."
|
| 2073 |
+
},
|
| 2074 |
+
{
|
| 2075 |
+
"type": "title",
|
| 2076 |
+
"bbox": [
|
| 2077 |
+
0.509,
|
| 2078 |
+
0.833,
|
| 2079 |
+
0.731,
|
| 2080 |
+
0.85
|
| 2081 |
+
],
|
| 2082 |
+
"angle": 0,
|
| 2083 |
+
"content": "C Additional Examples"
|
| 2084 |
+
},
|
| 2085 |
+
{
|
| 2086 |
+
"type": "text",
|
| 2087 |
+
"bbox": [
|
| 2088 |
+
0.508,
|
| 2089 |
+
0.856,
|
| 2090 |
+
0.884,
|
| 2091 |
+
0.92
|
| 2092 |
+
],
|
| 2093 |
+
"angle": 0,
|
| 2094 |
+
"content": "As shown in Table 8, we provide more examples of selected rationale from the Beer-Aroma and Hotel-Location two aspects, where their sparsity is set to be about \\(20\\%\\) and \\(10\\%\\), respectively."
|
| 2095 |
+
},
|
| 2096 |
+
{
|
| 2097 |
+
"type": "page_number",
|
| 2098 |
+
"bbox": [
|
| 2099 |
+
0.486,
|
| 2100 |
+
0.929,
|
| 2101 |
+
0.516,
|
| 2102 |
+
0.941
|
| 2103 |
+
],
|
| 2104 |
+
"angle": 0,
|
| 2105 |
+
"content": "517"
|
| 2106 |
+
}
|
| 2107 |
+
],
|
| 2108 |
+
[
|
| 2109 |
+
{
|
| 2110 |
+
"type": "table_caption",
|
| 2111 |
+
"bbox": [
|
| 2112 |
+
0.114,
|
| 2113 |
+
0.152,
|
| 2114 |
+
0.883,
|
| 2115 |
+
0.182
|
| 2116 |
+
],
|
| 2117 |
+
"angle": 0,
|
| 2118 |
+
"content": "Table 8: Examples of generated rationales. Human-annotated rationales are underlined. Rationales from three models are highlighted in blue, respectively."
|
| 2119 |
+
},
|
| 2120 |
+
{
|
| 2121 |
+
"type": "table",
|
| 2122 |
+
"bbox": [
|
| 2123 |
+
0.12,
|
| 2124 |
+
0.192,
|
| 2125 |
+
0.879,
|
| 2126 |
+
0.847
|
| 2127 |
+
],
|
| 2128 |
+
"angle": 0,
|
| 2129 |
+
"content": "<table><tr><td>FR (2022)</td><td>MGR (2023b)</td><td>AGR (Ours)</td></tr><tr><td>Aspect: Beer-Aroma\nLabel: Positive, Pred: Positive\nText: had this at bocktown with wvbeergeek and jasonm , came in a 750ml caged and corked the corked banged out of sight as soon as the cage was undone .seved into a tulip glass between the 3 of us hazy , deep copper , mahagony , hard to get a really good look at the color at bocktown . off white head hard to pour without a glass full of fluffy everlasting head . left lot of thick webbing all over the inside of the glass , sticky looking . great aroma can't seem to keep it away from the nose\n. sweet , dark , tart fruit notes , some sour cherry , earthy , spicy , with hints of currants , clove , allspice also nutty , with some belgium yeast . lots of sweet booziness from the start , vinious , dark fruityness with plum notes . the fruityness was remisent of dried fruit.lots of spicyness lots of clove.also nutty and earthy . finished clean , spicy and very sugary . syrupy , big full mouthfeel , smooth and very creamy with lots of jucyness . a beer to sip , but very enjoyable , wish i had the whole bottle to drink would be no problem . a must try beer if you like this style . seems like a beer that would age very well .</td><td>Aspect: Beer-Aroma\nLabel: Positive, Pred: Positive\nText: had this at bocktown with wvbeergeek and jasonm , came in a 750ml caged and corked the corked banged out of sight as soon as the cage was undone . .seved into a tulip glass between the 3 of us hazy , deep copper , mahagony , hard to get a really good look at the color at bocktown . off white head hard to pour without a glass full of fluffy everlasting head . left lot of thick webbing all over the inside of the glass , sticky looking . great aroma can't seem to keep it away from the nose\n. sweet , dark , tart fruit notes , some sour cherry , earthy . spicy , with hints of currants , clove , allspice also nutty , with some belgium yeast . lots of sweet booziness from the start , vinious , dark fruityness with plum notes . the fruityness was remisent of dried fruit.lots of spicyness lots of clove.also nutty and earthy . finished clean , spicy and very sugary . syrupy , big full mouthfeel , smooth and very creamy with lots ofjucyness . a beer to sip , but very enjoyable , wish i had the whole bottle to drink would be no problem . a must try beer if you like this style . seems like a beer that would age very well .</td><td>Aspect: Beer-Aroma\nLabel: Positive, Pred: Positive\nText: had this at bocktown with wvbeergeek and jasonm , came in a 750ml caged and corked the corked banged out of light as soon as the cage was undone . .seved into a tulip glass between the 3 of us hazy , deep copper , mahagony , hard to get a really good look at the color at bocktown . off white head hard to pour without a glass full of fluffy everlasting head . left lot of thick webbing all over the inside of the glass , sticky looking . great aroma can't seem to keep it away from the nose\n. sweet , dark , tart fruitNotes , some sour cherry , earthy , spicy , with hints of currants , clove , allspice also nutty , with some belgium yeast . lots of sweet booziness from the start , vinious , dark fruityness with plum notes . the fruityness was remisent of dried fruit.lots of spicyness lots of clove.also nutty and earthy . finished clean , spicy and very sugary . syrupy , big full mouthfeel, smooth and very creamy with lots ofjucyness . 
a beer to sip , but very enjoyable , wish i had the whole bottle to drink would be no problem . a must try beer if you like this style . seems like a beer that would age very well .</td></tr><tr><td>Aspect: Hotel-Location\nLabel: Negative, Pred: Negative\nText: we stayed at the dona palace for 3 nights and while the location is central , it is also more crowded and noisy . the windows of the room we stayed in did not have adequate sound proofing , noise from the canal and outside would wake us up early in the morning . the breakfast was a nice bonus though , the two waitresses serving the room were always gracious and helpful . the front desk personnel however were rude and abrupt , so that was n't pleasant to deal with . the rooms are dated and had a musty smell . the bed was uncomfortable , blankets were rough , and the shower drain did not work very well . overall , i probably wound not stay here again .</td><td>Aspect: Hotel-Location\nLabel: Negative, Pred: Negative\nText: we stayed at the dona palace for 3 nights and while the location is central , it is also more crowded and noisy . the windows of the room we stayed in did not have adequate sound proofing , noise from the canal and outside would wake us up early in the morning . the breakfast was a nice bonus though , the two waitresses serving the room were always gracious and helpful . the front desk personnel However were rude and abrupt , so that was n't pleasant to deal with . the rooms are dated and had a musty smell . the bed was uncomfortable , blankets were rough , and the shower drain did not work very well . overall , i probably wound not stay here again .</td><td>Aspect: Hotel-Location\nLabel: Negative, Pred: Negative\nText: we stayed at the dona palace for 3 nights and while the location is central , it is also more crowded and noisy . the windowsof the room we stayed in did not have adequate sound proofing , noise from the canal and outside would wake us up early in the morning . the breakfast was a nice bonus though , the two waitresses serving the room were always gracious and helpful . the front desk personnel however were rude and abrupt , so that was n't pleasant to deal with . the rooms are dated and had a musty smell . the bed was uncomfortable , blankets were rough , and the shower drain did not work very well . Overall , i probably wound not stay here again .</td></tr></table>"
|
| 2130 |
+
},
|
| 2131 |
+
{
|
| 2132 |
+
"type": "page_number",
|
| 2133 |
+
"bbox": [
|
| 2134 |
+
0.486,
|
| 2135 |
+
0.929,
|
| 2136 |
+
0.516,
|
| 2137 |
+
0.941
|
| 2138 |
+
],
|
| 2139 |
+
"angle": 0,
|
| 2140 |
+
"content": "518"
|
| 2141 |
+
}
|
| 2142 |
+
]
|
| 2143 |
+
]
|
2024/AGR_ Reinforced Causal Agent-Guided Self-explaining Rationalization/19a5d2a6-e0ae-4231-a1c8-5819b81e5755_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:ef353e82f8c0cee93864b4e9b0db6cd1b3a879f9b1288a31ca60944b5e89f87e
|
| 3 |
+
size 797592
|
2024/AGR_ Reinforced Causal Agent-Guided Self-explaining Rationalization/full.md
ADDED
|
@@ -0,0 +1,380 @@
|
| 1 |
+
# AGR: Reinforced Causal Agent-Guided Self-explaining Rationalization
|
| 2 |
+
|
| 3 |
+
Yunxiao Zhao $^{1}$ , Zhiqiang Wang $^{1,2*}$ , Xiaoli Li $^{3}$ , Jiye Liang $^{1,2}$ , Ru Li $^{1,2*}$
|
| 4 |
+
|
| 5 |
+
1. School of Computer and Information Technology, Shanxi University, Taiyuan, China
|
| 6 |
+
2. Key Laboratory of Computational Intelligence and Chinese Information Processing of Ministry of Education, Shanxi University, Taiyuan, China
|
| 7 |
+
3. Institute for Infocomm Research, A*Star, Singapore yunxiaomr@163.com, {wangzq,ljy,liru}@sxu.edu.cn, xlli@ntu.edu.sg
|
| 8 |
+
|
| 9 |
+
# Abstract
|
| 10 |
+
|
| 11 |
+
Most existing rationalization approaches are susceptible to degeneration accumulation due to a lack of effective control over the learning direction of the model during training. To address this issue, we propose a novel approach AGR (Agent-Guided Rationalization), guiding the next action of the model based on its current training state. Specifically, we introduce causal intervention calculus to quantify the causal effects inherent in rationale training, and utilize a reinforcement learning process to refine the resulting learning bias. Furthermore, we pretrain an agent within this reinforced causal environment to guide the next step of the model. We theoretically demonstrate that a good model needs the desired guidance, and empirically show the effectiveness of our approach, outperforming existing state-of-the-art methods on the BeerAdvocate and HotelReview datasets.
|
| 12 |
+
|
| 13 |
+
# 1 Introduction
|
| 14 |
+
|
| 15 |
+
To explain the predictions of neural networks, the selective rationalization task (Lei et al., 2016; Yu et al., 2019, 2021) has been studied in recent years. As shown in Figure 1, it aims to select a small and human-intelligible subset of the input (i.e., a rationale) to support and explain the prediction results it yields. As an interpretable paradigm, rationalization holds significant potential for elucidating the decision-making process of predictive models, building trust, and deriving pertinent insights (Yuan et al., 2020; Zhang et al., 2023; Deng et al., 2023).
|
| 16 |
+
|
| 17 |
+
Various approaches have been proposed for rationalization, spanning from early rationale sampling-based methods (Bao et al., 2018; Bastings et al., 2019; Paranjape et al., 2020) to the extra-component-based methods (De Cao et al., 2020; Huang et al., 2021; Yu et al., 2021; Liu et al., 2022; Yue et al., 2022; Liu et al., 2023a). These
|
| 18 |
+
|
| 19 |
+

|
| 20 |
+
Figure 1: The standard selective rationalization, where $X, Z, \hat{Y}, Y$ represent the input text, rationale, prediction and the groundtruth label, respectively. The red text indicates the small and human-intelligible subset.
|
| 21 |
+
|
| 22 |
+
methods predominantly concentrate on improving the performance of rationalization models by either refining the sampling directly or aligning additional information beyond the rationale, achieving impressive results. However, to the best of our knowledge, current methods are prone to degeneration accumulation since they usually do not discern whether the generator has produced meaningless or flawed rationales during training; instead, they directly pass them to the predictor even if the generated rationales are degraded.
|
| 23 |
+
|
| 24 |
+
For instance, the underlined rationale in Figure 1 is degraded, as the word "appearance" alone does not reliably determine the sentiment polarity of the input $X$. But the predictor overfits to this uninformative rationale and classifies the sentiment according to whether "appearance" is included in the rationale. Consequently, when the predictor receives degraded rationales, it steers the model in an undesirable direction (a.k.a. learning bias). Thus, optimizing this bias during training is crucial for ensuring the model's generalization performance.
|
| 25 |
+
|
| 26 |
+
Previously proposed methods (Chang et al., 2020; Zhang et al., 2023; Yue et al., 2023) fall short of considering rationalization optimization comprehensively, neglecting the causality that exists during rationale learning. Although they often employ causal theory to uncover relationships between rationale pieces, they struggle to directly optimize
|
| 27 |
+
|
| 28 |
+
the cooperative game dynamics between the generator and predictor during training. As shown in Figure 1, optimizing rationale from "appearance" to "appearance: light yellow to almost clear" necessitates evaluating the causal impact on target prediction, guiding the model's subsequent optimization. Thus, if we could construct a guiding signal to reward or penalize the learning behavior of the model, this would significantly reduce the model's learning bias during training, alleviating the problem of degeneration accumulation.
|
| 29 |
+
|
| 30 |
+
To address the above problems, we propose a novel rationalization method named AGR (Agent-Guided Rationalization), which leverages a reinforced causal agent to guide the cooperative game optimization during rationale training, as shown in Figure 2. In particular, 1) we quantify the causal effects in the rationale optimization process, and design a reinforcement learning (RL) process (e.g., a Markov decision process) to refine the learning bias during training. 2) We further pretrain an agent within the reinforced causal environment to guide the next actions through a system of rewards. We also theoretically illustrate that a robust model needs the desired guidance. 3) Experimental results demonstrate the effectiveness of our approach, surpassing state-of-the-art methods on the BeerAdvocate and HotelReview datasets.
|
| 31 |
+
|
| 32 |
+
# 2 Problem Formulation
|
| 33 |
+
|
| 34 |
+
Notation. Following previous research (Liu et al., 2023b,c,d), we consider the classification problem and denote the generator and predictor as $f_{G}(\cdot)$ and $f_{P}(\cdot)$, with $\theta_{g}$ and $\theta_{p}$ representing their parameters. The input text $X = [x_{1}, x_{2}, \dots, x_{l}]$ consists of tokens $x_{i}$ $(1 \leq i \leq l)$, where $l$ is the number of tokens. The label of $X$ is a one-hot vector $Y \in \{0,1\}^{c}$, where $c$ is the number of categories.
|
| 35 |
+
|
| 36 |
+
Cooperative game for rationalization. The generator $f_{G}(\cdot)$ selects the most informative pieces from $X$ via a sequence of binary masks $M = [m_1, \dots, m_l] \in \{0, 1\}^l$. It then forms the rationale $Z = M \odot X = [m_1 x_1, m_2 x_2, \dots, m_l x_l]$, where the informativeness of $Z$ is measured by the negative cross entropy $-H(Y, \hat{Y})$. Consequently, $f_{G}(\cdot)$ and $f_{P}(\cdot)$ are optimized cooperatively by
|
| 37 |
+
|
| 38 |
+
$$
|
| 39 |
+
\min _ {\theta_ {g}, \theta_ {p}} \mathcal {H} (Y, \hat {Y} \mid f _ {G} (X)), s. t. \hat {Y} = f _ {P} \left(f _ {G} (X)\right). \tag {1}
|
| 40 |
+
$$
|
| 41 |
+
|
| 42 |
+
In addition, rationales are usually constrained by compact and coherent regularization terms $\Omega(M) = \lambda_1\left|\frac{\|M\|_1}{l} - s\right| + \lambda_2\sum_t|m_t - m_{t-1}|$ (Chang et al., 2020), where $s$ is a pre-defined sparsity level.
|
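To make Equation 1 and the regularizer $\Omega(M)$ concrete, below is a minimal PyTorch-style sketch of the cooperative loss; it is an illustration rather than the authors' released code, and it assumes the generator's binary mask `mask` and the predictor's `logits` on the masked input $Z = M \odot X$ are already available.

```python
import torch
import torch.nn.functional as F

def rationale_loss(logits, labels, mask, sparsity=0.2, lam1=1.0, lam2=1.0):
    """Cross-entropy of the predictor on the rationale (Eq. 1) plus Omega(M)."""
    ce = F.cross_entropy(logits, labels)                                # H(Y, Y_hat)
    sparsity_pen = torch.abs(mask.float().mean(dim=-1) - sparsity)      # | ||M||_1 / l - s |
    coherence_pen = torch.abs(mask[:, 1:] - mask[:, :-1]).sum(dim=-1)   # sum_t |m_t - m_{t-1}|
    return ce + lam1 * sparsity_pen.mean() + lam2 * coherence_pen.mean()
```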
| 43 |
+
|
| 44 |
+

|
| 45 |
+
Figure 2: The architecture of AGR. $X$ and $\hat{Y}$ are the input and output. $S_{i}$ is the $i$ -th update state of rationale, while $\widetilde{S}_{i}$ is the state after guidance by the agent.
|
| 46 |
+
|
| 47 |
+
# 3 Reinforced Causal Agent
|
| 48 |
+
|
| 49 |
+
In this section, we present our reinforced causal agent, considering both causal effect and learning bias of degeneration during rationale training.
|
| 50 |
+
|
| 51 |
+
# 3.1 Rationale Causal Attribution
|
| 52 |
+
|
| 53 |
+
Formally, we construct a rationale $\mathcal{Z}_K^*$ by maximizing an attribution metric $A(\cdot)$ in rationalization:
|
| 54 |
+
|
| 55 |
+
$$
|
| 56 |
+
\mathcal {Z} _ {K} ^ {*} = \arg \max _ {\mathcal {Z} _ {K} \subseteq X} A \left(\mathcal {Z} _ {K} | \hat {y} _ {c}\right), \tag {2}
|
| 57 |
+
$$
|
| 58 |
+
|
| 59 |
+
where $A(\cdot)$ measures the contribution of each candidate $\mathcal{Z}_K$ to the target prediction $\hat{y}_c$ .
|
| 60 |
+
|
| 61 |
+
However, $A(\mathcal{Z}_K|\hat{y}_c)$ needs to be quantified. To this end, we introduce causal intervention calculus $do(\cdot)$ , including $do(Z = Z_K)$ and $do(Z = \emptyset)$ (Pearl, 2009; Pearl et al., 2016), and reformulate the causal contribution from $\varnothing$ to $\mathcal{Z}_K$ by mutual information,
|
| 62 |
+
|
| 63 |
+
$$
|
| 64 |
+
A \left(\mathcal {Z} _ {K} \mid \hat {y} _ {c}\right) = I \left(\hat {y} _ {c}, d o \left(\mathcal {Z} _ {K}\right)\right) - I \left(\hat {y} _ {c}, d o (\emptyset)\right). \tag {3}
|
| 65 |
+
$$
|
| 66 |
+
|
| 67 |
+
# 3.2 Markov Decision Process as RL
|
| 68 |
+
|
| 69 |
+
Equation 3 illustrates the procedure for deriving $\mathcal{Z}_K$ from an initial state of zero training. However, it may generate degraded rationales at step $i$, where $0 < i < K$. Thus we need to seek quantifiable objectives between $\mathcal{Z}_i$ and $\mathcal{Z}_{i+1}$,
|
| 70 |
+
|
| 71 |
+
$$
|
| 72 |
+
\mathcal {Z} _ {i + 1} = \arg \max _ {\mathcal {Z} _ {i + 1} \in \{X \backslash \mathcal {Z} _ {i} \}} A \left(\mathcal {Z} _ {i + 1} \mid \mathcal {Z} _ {i}, \hat {y} _ {c}\right). \tag {4}
|
| 73 |
+
$$
|
| 74 |
+
|
| 75 |
+
According to Equation 3, we have the causal contribution between $\mathcal{Z}_i$ and $\mathcal{Z}_{i + 1}$: $A(\mathcal{Z}_{i + 1}|\mathcal{Z}_i,\hat{y}_c) = I(\hat{y}_c,do(\mathcal{Z}_{i + 1})) - I(\hat{y}_c,do(\mathcal{Z}_i))$. So,
|
| 76 |
+
|
| 77 |
+
$$
|
| 78 |
+
\begin{array}{l} A \left(\mathcal {Z} _ {i + 1} \mid \mathcal {Z} _ {i}, \hat {y} _ {c}\right) = - H \left(\hat {y} _ {c} \mid \mathcal {Z} _ {i + 1}\right) + H \left(\hat {y} _ {c} \mid \mathcal {Z} _ {i}\right) \\ = - H \left(\hat {y} _ {c} \mid \left\{\mathcal {Z} _ {i} \cup \left\{z _ {i + 1} \right\} \right\}\right) + H \left(\hat {y} _ {c} \mid \mathcal {Z} _ {i}\right) \tag {5} \\ = - p _ {\theta} \left(\hat {y} _ {c} \mid \mathcal {Z}\right) \log \frac {p _ {\theta} \left(\hat {y} _ {c} \mid \mathcal {Z} _ {i}\right)}{p _ {\theta} \left(\hat {y} _ {c} \mid \{\mathcal {Z} _ {i} \cup \left\{z _ {i + 1} \right\} \}\right)}, \\ \end{array}
|
| 79 |
+
$$
|
| 80 |
+
|
| 81 |
+
where $H(\hat{y}_c|\mathcal{Z}_i)$ is the term of conditional entropy. As a result, Equation 5 explicitly quantifies $\mathcal{Z}_{i + 1}$ 's effect with previously obtained rationale $\mathcal{Z}_i$ .
|
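As a reading aid for Equations 3-5, the step-wise attribution can be sketched as a difference of predictor confidences before and after adding a candidate token. The helper `predictor(x_tokens, z_indices)` below is an assumed function returning class probabilities for the input restricted to the given rationale indices, and instantiating the conditional entropies with target-class probabilities is an interpretation of the displayed formula, not the paper's exact implementation.

```python
import math

def attribution(predictor, x_tokens, z_indices, new_index, target_class):
    """Step-wise causal attribution A(z_{i+1} | Z_i, y_c) in the spirit of Eq. 5."""
    p_old = predictor(x_tokens, z_indices)[target_class]                # p(y_c | Z_i)
    p_new = predictor(x_tokens, z_indices + [new_index])[target_class]  # p(y_c | Z_i U {z_{i+1}})
    # Last line of Eq. 5: -p * log( p(y_c | Z_i) / p(y_c | Z_i U {z_{i+1}}) )
    return -p_new * math.log(p_old / p_new)
```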
| 82 |
+
|
| 83 |
+
To further promote the cooperative game, we model the rationale training process as a Markov decision process $\mathbb{M} = \{\mathbb{S},\mathbb{A},\mathbb{P},\mathbb{R}\}$, where $\mathbb{S} = \{s_i\}$ represents the set of states abstracting the process of optimizing the rationale during training, and $\mathbb{A} = \{a_i\}$ indicates the set of actions. In particular, the transition dynamics $\mathbb{P}(s_{i + 1}|s_i,a_{i + 1})$ specify how the state $s_{i + 1}$ is updated from the prior state $s_i$ by taking action $a_{i + 1}$. Besides, $\mathbb{R}(s_i,a_{i + 1})$ quantifies the reward obtained after taking action $a_{i + 1}$ in the prior state $s_i$. Therefore, cooperative training for the rationale can be depicted as the sequence $(s_0,a_1,r_1,s_1,\dots,a_K,r_K,s_K)$, where the state $s_i$ can be formulated as $s_i = Z_i$ in the $i$-th update; $s_0 = Z_0$ can be initialized by the generator $f_{G}(\cdot)$.
|
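A schematic rollout of this decision process might look as follows: states are the current rationale index sets, actions add one candidate token, and the transition merges it into the rationale. The `policy` and `reward_fn` callables are placeholders for the components defined later in the paper; this is an illustrative sketch, not the authors' training loop.

```python
def rollout(x_tokens, policy, reward_fn, steps):
    """One trajectory (s_0, a_1, r_1, ..., a_K, r_K, s_K) over rationale states."""
    state = set()                          # s_0: initial rationale Z_0
    trajectory = []
    for _ in range(steps):
        candidates = [i for i in range(len(x_tokens)) if i not in state]  # A_i = X \ Z_i
        action = policy(x_tokens, state, candidates)    # choose one token index
        reward = reward_fn(state, action)               # R(s_i, a_{i+1})
        trajectory.append((frozenset(state), action, reward))
        state = state | {action}                        # P: merge the token into Z_i
    return trajectory, state
```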
| 84 |
+
|
| 85 |
+
Nevertheless, the above process exhibits a limitation in its inability to detect learning bias at any given state $s_i$. To address this, we reformulate the sequence process as $(\langle s_0, \widetilde{a}_0, \widetilde{r}_0, \widetilde{s}_0\rangle, a_1, r_1, \langle s_1, \widetilde{a}_1, \widetilde{r}_1, \widetilde{s}_1\rangle, \dots, a_K, r_K, \langle s_K, \widetilde{a}_K, \widetilde{r}_K, \widetilde{s}_K\rangle)$, where $\langle s_i, \widetilde{a}_i, \widetilde{r}_i, \widetilde{s}_i\rangle$ indicates the process of transitioning from state $s_i$ to $\widetilde{s}_i$ in the $i$-th update.
|
| 86 |
+
|
| 87 |
+
Given the state $s_i = Z_i$ , we derive the available action space: $\widetilde{\mathbb{A}}_i = \{X\backslash Z_i\}$ . The searched action can be represented as
|
| 88 |
+
|
| 89 |
+
$$
|
| 90 |
+
\widetilde {a} _ {i} = \widetilde {z} _ {i}, \tag {6}
|
| 91 |
+
$$
|
| 92 |
+
|
| 93 |
+
where $\widetilde{z}_i\in \{X\backslash Z_i\}$ indicates a candidate rationale in the action space. Having taken the action $\widetilde{a}_i$, the state transition merges $\widetilde{z}_i$ into $Z_{i}$, i.e., $\widetilde{Z}_i = Z_i\cup \{\widetilde{z}_i\}$.
|
| 94 |
+
|
| 95 |
+
To assess the effectiveness of the action $\widetilde{a}_i$ in mitigating the learning bias of the model, the reward $\widetilde{\mathbb{R}}_i(\widetilde{s}_i,\widetilde{a}_i)$ at state $s_i$ can be formulated as follows:
|
| 96 |
+
|
| 97 |
+
$$
|
| 98 |
+
\widetilde {\mathbb {R}} _ {i} = \left\{ \begin{array}{l l} A \left(\widetilde {z} _ {i} \mid Z _ {i}, \hat {y} _ {c} ^ {*}\right) + 1, & \text {if } f _ {P} \left(Z _ {i} \cup \{\widetilde {z} _ {i} \}\right) = \hat {y} _ {c} ^ {*} \\ A \left(\widetilde {z} _ {i} \mid Z _ {i}, \hat {y} _ {c} ^ {*}\right) - 1, & \text {otherwise.} \end{array} \right. \tag {7}
|
| 99 |
+
$$
|
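Under the reading that Equation 7 shifts the attribution by $\pm 1$ depending on whether the predictor, fed the augmented rationale, recovers the agent's target label, the reward can be sketched as below. The `attribution` helper mirrors Equation 5, and the list-of-probabilities interface of `predictor` is an assumption of the sketch.

```python
def reward(predictor, attribution, x_tokens, z_indices, new_index, y_star):
    """R_i per Eq. 7: attribution +1 if f_P(Z_i U {z_i}) matches y*_c, else attribution -1."""
    a = attribution(predictor, x_tokens, z_indices, new_index, y_star)
    probs = predictor(x_tokens, z_indices + [new_index])
    predicted = max(range(len(probs)), key=probs.__getitem__)   # argmax over classes
    return a + 1.0 if predicted == y_star else a - 1.0
```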
| 100 |
+
|
| 101 |
+
According to Equation 5, although we can quantify the probabilities at states $\widetilde{s}_i$ and $s_i$, and present the relevant reward $\widetilde{\mathbb{R}}_i$, obtaining $\hat{y}_c^{*}$ poses a challenge.
|
| 102 |
+
|
| 103 |
+
# 3.3 Pretrained Agent
|
| 104 |
+
|
| 105 |
+
To address the limitation, we propose a reinforced causal agent in the aforementioned causal and reinforcement learning framework to better align the probability distribution of the target prediction and theoretically justify the creation of an auxiliary agent targeting $\hat{y}_c$ .
|
| 106 |
+
|
| 107 |
+
Pretrained Embedding. We pretrain the auxiliary agent, denoted as $f_{A}(\cdot)$ , with
|
| 108 |
+
|
| 109 |
+
$$
|
| 110 |
+
\theta_ {A} ^ {*} = \arg \min _ {\theta_ {A}} \mathcal {H} (Y, \hat {Y} | X), s. t. \hat {Y} = f _ {A} (X), \tag {8}
|
| 111 |
+
$$
|
| 112 |
+
|
| 113 |
+
where $\theta_{A}$ represents the parameters of the agent, and $\theta_{A}^{*}$ denotes the optimal solution.
|
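Equation 8 is a standard supervised objective on the full input; a minimal training-loop sketch is shown below. The model, optimizer, and hyperparameters are illustrative assumptions rather than the paper's reported setup.

```python
import torch
import torch.nn.functional as F

def pretrain_agent(agent, loader, epochs=5, lr=1e-3):
    """Minimize H(Y, f_A(X)) over the full input X (Eq. 8)."""
    opt = torch.optim.Adam(agent.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            loss = F.cross_entropy(agent(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return agent
```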
| 114 |
+
|
| 115 |
+
Theorem Analysis. Assuming $X, Z, Y$ , and $\mathcal{A}$ as random variables in rationalization representing the input, rationale, label, and auxiliary variable, respectively, we propose:
|
| 116 |
+
|
| 117 |
+
Lemma 1. Given $X$, $Z$, $Y$, and $\hat{Y} = f_{P}(f_{G}(X))$, there exists a guiding variable $\mathcal{A}$ that can enable the predictor $f_{P}(\cdot)$ to achieve good predictions. That is, a solution for $\mathcal{A}$ exists, and $X$ is a solution of $\mathcal{A}$.
|
| 118 |
+
|
| 119 |
+
The proof is provided in Appendix A. Lemma 1 suggests that constructing an auxiliary variable $\mathcal{A}$ aligned with $X$ for rationalization contributes to the learning of a good prediction.
|
| 120 |
+
|
| 121 |
+
# 4 Agent-Guided Rationalization
|
| 122 |
+
|
| 123 |
+
As depicted in Figure 2, following the establishment of the environment for the reinforced causal agent, we delineate the construction and training of the policy network $q_{\phi}$ .
|
| 124 |
+
|
| 125 |
+
# 4.1 Policy Network Architecture
|
| 126 |
+
|
| 127 |
+
The policy network takes as input the pair of the intermediate state $\mathcal{Z}_i$ and the prediction $\hat{y}_c$ provided by $f_{A}(\cdot)$. Formally,
|
| 128 |
+
|
| 129 |
+
$$
|
| 130 |
+
\widetilde {z} _ {i} \sim q _ {\phi} \left(\mathcal {Z} _ {i}, \hat {y} _ {c}\right), \tag {9}
|
| 131 |
+
$$
|
| 132 |
+
|
| 133 |
+
where $\theta_{\phi}$ denotes the trainable parameters of the policy network, and $\widetilde{z}_i$ is generated according to the probability of the next action $\mathbb{P}_{\phi}(\widetilde{z}_i|\mathcal{Z}_i,\hat{y}_c)$.
|
| 134 |
+
|
| 135 |
+
Representation learning of action candidates. With the space of action candidates $\widetilde{\mathbb{A}}_i = X\backslash \mathcal{Z}_i$, our policy network first learns the representation of each action candidate $\widetilde{a}_{i}^{(j)}$ $(0 < j < N)$, where $N$ is the number of candidates.
|
| 136 |
+
|
| 137 |
+
Then, we employ the encoder to encode $X \backslash \mathcal{Z}_i$ for obtaining the action representation of $\widetilde{z}_i$ by
|
| 138 |
+
|
| 139 |
+
$$
|
| 140 |
+
e _ {\widetilde {z} _ {i}} = \operatorname {e n c o d e r} \left(X \backslash \mathcal {Z} _ {i}\right), \tag {10}
|
| 141 |
+
$$
|
| 142 |
+
|
| 143 |
+
utilizing bidirectional Gated Recurrent Units (GRUs) (Cho et al., 2014) as the encoder.
|
| 144 |
+
|
| 145 |
+
Sampling of action. The policy network aims to select a singular action $\widetilde{a}_i = \widetilde{z}_i$ from the search space, prioritizing its relevance to the current state $s_i = \mathcal{Z}_i$ . This selection process is modeled as:
|
| 146 |
+
|
| 147 |
+
$$
|
| 148 |
+
p _ {\widetilde {z} _ {i}} = M L P \left(\left[ \boldsymbol {e} _ {\widetilde {z} _ {i}}; \boldsymbol {e} _ {\mathcal {Z} _ {i}} \right]\right), \tag {11}
|
| 149 |
+
$$
|
| 150 |
+
|
| 151 |
+
where $e_{\mathcal{Z}_i}$ indicates the current rationale's representation. The selection probability for each action candidate within $\widetilde{\mathbb{A}}_i$ is computed using
|
| 152 |
+
|
| 153 |
+
$$
|
| 154 |
+
\mathbb {P} _ {\phi} \left(\widetilde {z} _ {i} \mid \mathcal {Z} _ {i}, \hat {y} _ {c}\right) = \operatorname {s o f t m a x} _ {\widetilde {\mathbb {A}} _ {i}} \left(p _ {\widetilde {z} _ {i}}\right), \tag {12}
|
| 155 |
+
$$
|
| 156 |
+
|
| 157 |
+
where $\phi$ collects the trainable parameters of the MLP.
|
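Putting Equations 9-12 together, the policy network encodes the remaining text with a bidirectional GRU, scores each candidate against the current rationale representation with an MLP, and samples from a softmax over the candidate set. The sketch below assumes pre-embedded inputs and a mean-pooled rationale representation; these details are illustrative assumptions, not specifications from the paper.

```python
import torch
import torch.nn as nn

class PolicyNetwork(nn.Module):
    def __init__(self, emb_dim=100, hid_dim=200):
        super().__init__()
        self.encoder = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(4 * hid_dim, hid_dim), nn.Tanh(), nn.Linear(hid_dim, 1))

    def forward(self, cand_emb, rationale_emb):
        e_cand, _ = self.encoder(cand_emb)                          # Eq. 10: encode X \ Z_i
        e_rat, _ = self.encoder(rationale_emb)
        e_rat = e_rat.mean(dim=1, keepdim=True).expand_as(e_cand)   # e_{Z_i}, broadcast to candidates
        scores = self.mlp(torch.cat([e_cand, e_rat], dim=-1)).squeeze(-1)  # Eq. 11
        probs = torch.softmax(scores, dim=-1)                       # Eq. 12: P_phi over candidates
        return torch.multinomial(probs, 1), probs                   # sampled action and probabilities
```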
| 158 |
+
|
| 159 |
+
<table><tr><td rowspan="2">Methods</td><td rowspan="2">S</td><td colspan="3">Appearance</td><td colspan="3">Aroma</td><td colspan="3">Palate</td></tr><tr><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>RNP (Lei et al., 2016)</td><td>20</td><td>39.4</td><td>44.9</td><td>42.0</td><td>37.5</td><td>51.9</td><td>43.5</td><td>21.6</td><td>38.9</td><td>27.8</td></tr><tr><td>HardKuma (Bastings et al., 2019)</td><td>20</td><td>64.9</td><td>69.2</td><td>67.0</td><td>37.0</td><td>55.8</td><td>44.5</td><td>14.6</td><td>22.3</td><td>17.7</td></tr><tr><td>IB (Paranjape et al., 2020)</td><td>20</td><td>59.3</td><td>69.0</td><td>63.8</td><td>38.6</td><td>55.5</td><td>45.6</td><td>21.6</td><td>48.5</td><td>29.9</td></tr><tr><td>INVRAT (Chang et al., 2020)</td><td>20</td><td>58.9</td><td>67.2</td><td>62.8</td><td>29.3</td><td>52.1</td><td>37.5</td><td>24.0</td><td>55.2</td><td>33.5</td></tr><tr><td>DARE (Yue et al., 2022)</td><td>20</td><td>63.7</td><td>71.8</td><td>67.5</td><td>41.0</td><td>61.5</td><td>49.3</td><td>24.4</td><td>54.9</td><td>33.8</td></tr><tr><td>FR (Liu et al., 2022)</td><td>20</td><td>74.9</td><td>84.9</td><td>79.6</td><td>58.7</td><td>73.3</td><td>65.2</td><td>36.6</td><td>59.4</td><td>45.3</td></tr><tr><td>Inter-RAT (Yue et al., 2023)</td><td>20</td><td>62.0</td><td>76.7</td><td>68.6</td><td>44.2</td><td>65.4</td><td>52.8</td><td>26.3</td><td>59.1</td><td>36.4</td></tr><tr><td>MGR (Liu et al., 2023b)</td><td>20</td><td>76.3</td><td>83.6</td><td>79.8</td><td>64.4</td><td>81.3</td><td>71.9</td><td>47.1</td><td>73.1</td><td>57.3</td></tr><tr><td>AGR(Ours)</td><td>20</td><td>83.7</td><td>87.5</td><td>85.6</td><td>67.5</td><td>81.4</td><td>73.8</td><td>47.6</td><td>77.7</td><td>59.0</td></tr></table>
|
| 160 |
+
|
| 161 |
+
Table 1: Results on BeerAdvocate, where Bold text indicates the best experimental results across different methods.
|
| 162 |
+
|
| 163 |
+
<table><tr><td rowspan="2">Methods</td><td colspan="4">Appearance</td><td colspan="4">Appearance</td><td colspan="4">Appearance</td></tr><tr><td>S</td><td>P</td><td>R</td><td>F1</td><td>S</td><td>P</td><td>R</td><td>F1</td><td>S</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>RNP</td><td>10</td><td>32.4</td><td>18.6</td><td>23.6</td><td>20</td><td>39.4</td><td>44.9</td><td>42.0</td><td>30</td><td>24.2</td><td>41.2</td><td>30.5</td></tr><tr><td>DARE</td><td>10</td><td>63.9</td><td>42.8</td><td>51.3</td><td>20</td><td>63.7</td><td>71.8</td><td>67.5</td><td>30</td><td>45.5</td><td>80.6</td><td>58.1</td></tr><tr><td>FR</td><td>10</td><td>70.4</td><td>42.0</td><td>52.6</td><td>20</td><td>74.9</td><td>84.9</td><td>79.6</td><td>30</td><td>50.6</td><td>81.4</td><td>62.3</td></tr><tr><td>Inter-RAT</td><td>10</td><td>66.0</td><td>46.5</td><td>54.6</td><td>20</td><td>62.0</td><td>76.7</td><td>68.6</td><td>30</td><td>48.1</td><td>82.7</td><td>60.8</td></tr><tr><td>MGR</td><td>10</td><td>87.5</td><td>51.7</td><td>65.0</td><td>20</td><td>76.3</td><td>83.6</td><td>79.8</td><td>30</td><td>57.2</td><td>93.9</td><td>71.1</td></tr><tr><td>AGR</td><td>10</td><td>83.5</td><td>54.9</td><td>66.2</td><td>20</td><td>83.7</td><td>87.5</td><td>85.6</td><td>30</td><td>59.7</td><td>94.3</td><td>73.1</td></tr></table>
|
| 164 |
+
|
| 165 |
+
# 4.2 Policy Gradient Training
|
| 166 |
+
|
| 167 |
+
Since discrete sampling within the policy network blocks gradients, we adopt the policy gradient-based training framework REINFORCE (Sutton et al., 1999). The objective $\Omega(\mathbb{L})$, maximized over $\phi$, is as follows:
|
| 168 |
+
|
| 169 |
+
$$
|
| 170 |
+
\max _ {\phi} \mathbb {E} _ {i} \, \mathbb {E} _ {\widetilde {z} _ {i} \in \widetilde {\mathbb {A}} _ {i}} \left[ \widetilde {\mathbb {R}} \left(\mathcal {Z} _ {i}, \widetilde {z} _ {i}\right) \log \mathbb {P} _ {\phi} \left(\widetilde {z} _ {i} \mid \mathcal {Z} _ {i}, \hat {y} _ {c}\right) \right]. \tag {13}
|
| 171 |
+
$$
|
| 172 |
+
|
| 173 |
+
The final task loss is a jointly optimized objective:
|
| 174 |
+
|
| 175 |
+
$$
|
| 176 |
+
\min _ {\theta_ {g}, \theta_ {p}} \mathcal {H} (Y, \hat {Y}) + \Omega (M) - \Omega (\mathbb {L}), s. t. \hat {Y} = f _ {P} \left(f _ {G} (X)\right) \tag {14}
|
| 177 |
+
$$
|
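Because the sampled action is discrete, the policy is updated with the REINFORCE estimator of Equation 13, and Equation 14 subtracts that reinforcement objective from the task loss. A hedged sketch of the two terms is given below; how the steps are batched and the exact sign conventions are assumptions of the sketch.

```python
import torch

def reinforce_loss(log_probs, rewards):
    """Negative of the Eq. 13 objective: minimizing it maximizes E[R * log P_phi]."""
    lp = torch.stack(list(log_probs))                    # log P_phi(z_i | Z_i, y_c) per step
    r = torch.tensor(list(rewards), dtype=lp.dtype)      # R(Z_i, z_i) per step, held constant
    return -(r * lp).mean()

def total_loss(task_ce, omega_m, log_probs, rewards):
    """Eq. 14: cross-entropy + Omega(M) - Omega(L), with Omega(L) the RL objective."""
    return task_ce + omega_m + reinforce_loss(log_probs, rewards)
```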
| 178 |
+
|
| 179 |
+
# 5 Experiments
|
| 180 |
+
|
| 181 |
+
# 5.1 Datasets, Baselines and Evaluation Metrics
|
| 182 |
+
|
| 183 |
+
Datasets. We evaluate AGR on the BeerAdvocate (McAuley et al., 2012) and HotelReview (Wang et al., 2010) datasets, two multi-aspect sentiment classification datasets widely used in rationalization. Following existing work, we obtain the data in the same way as Yue et al. (2023) for BeerAdvocate, and we preprocess the HotelReview dataset in the same way as Huang et al. (2021) and Liu et al. (2023b).
|
| 184 |
+
|
| 185 |
+
Baselines. We compare with eight models for BeerAdvocate, including three sampling-based methods: RNP (Lei et al., 2016), HardKuma (Bastings et al., 2019), Information Bottleneck (IB) (Paranjape et al., 2020); three extra-component-based methods: DARE (Yue et al., 2022), FR (Liu et al., 2022), MGR (Liu et al., 2023b); and two causal-based methods: INVRAT (Chang et al., 2020),
|
| 186 |
+
|
| 187 |
+
Table 2: The different sparsity results on BeerAdvocate.
|
| 188 |
+
|
| 189 |
+
<table><tr><td colspan="2">Methods</td><td>S</td><td>P</td><td>R</td><td>F1</td></tr><tr><td rowspan="5">Location</td><td>RNP (Lei et al., 2016)</td><td>10.9</td><td>43.3</td><td>55.5</td><td>48.6</td></tr><tr><td>CAR (Chang et al., 2019)</td><td>10.6</td><td>46.6</td><td>58.1</td><td>51.7</td></tr><tr><td>DMR (Huang et al., 2021)</td><td>10.7</td><td>47.5</td><td>60.1</td><td>53.1</td></tr><tr><td>A2R (Yu et al., 2021)</td><td>8.5</td><td>43.1</td><td>43.2</td><td>43.1</td></tr><tr><td>MGR (Liu et al., 2023b)</td><td>9.7</td><td>52.5</td><td>60.5</td><td>56.2</td></tr><tr><td colspan="2">AGR(Ours)</td><td>9.3</td><td>54.9</td><td>60.5</td><td>57.6</td></tr><tr><td colspan="2"></td><td>S</td><td>P</td><td>R</td><td>F1</td></tr><tr><td rowspan="5">Service</td><td>RNP (Lei et al., 2016)</td><td>11.0</td><td>40.0</td><td>38.2</td><td>39.1</td></tr><tr><td>CAR (Chang et al., 2019)</td><td>11.7</td><td>40.7</td><td>41.4</td><td>41.1</td></tr><tr><td>DMR (Huang et al., 2021)</td><td>11.6</td><td>43.0</td><td>43.6</td><td>43.3</td></tr><tr><td>A2R (Yu et al., 2021)</td><td>11.4</td><td>37.3</td><td>37.2</td><td>37.2</td></tr><tr><td>MGR (Liu et al., 2023b)</td><td>11.8</td><td>45.0</td><td>46.4</td><td>45.7</td></tr><tr><td colspan="2">AGR(Ours)</td><td>12.3</td><td>45.9</td><td>49.3</td><td>47.6</td></tr><tr><td colspan="2"></td><td>S</td><td>P</td><td>R</td><td>F1</td></tr><tr><td rowspan="5">Cleanliness</td><td>RNP (Lei et al., 2016)</td><td>10.6</td><td>30.5</td><td>36.0</td><td>33.0</td></tr><tr><td>CAR (Chang et al., 2019)</td><td>9.9</td><td>32.3</td><td>35.7</td><td>33.9</td></tr><tr><td>DMR (Huang et al., 2021)</td><td>10.3</td><td>31.4</td><td>36.4</td><td>33.7</td></tr><tr><td>A2R (Yu et al., 2021)</td><td>8.9</td><td>33.2</td><td>33.3</td><td>33.3</td></tr><tr><td>MGR (Liu et al., 2023b)</td><td>10.5</td><td>37.6</td><td>44.5</td><td>40.7</td></tr><tr><td colspan="2">AGR(Ours)</td><td>10.3</td><td>39.0</td><td>45.5</td><td>42.0</td></tr></table>
|
| 190 |
+
|
| 191 |
+
Table 3: The experimental results on HotelReview.
|
| 192 |
+
|
| 193 |
+
and Inter-RAT (Yue et al., 2023). For the HotelReview dataset, we compare with five models: RNP (Lei et al., 2016), CAR (Chang et al., 2019), DMR (Huang et al., 2021), A2R (Yu et al., 2021), and MGR (Liu et al., 2023b).
|
| 194 |
+
|
| 195 |
+
Evaluation Metrics. Following Huang et al. (2021), Yu et al. (2021), Yue et al. (2023), and Liu et al. (2023b), we focus on the quality of rationales and adopt Precision (P), Recall (R), and F1-score (F1) as metrics. We select the best-performing model on the validation set before evaluating on the test set. Appendix B provides further details.
|
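For reference, rationale quality in this line of work is typically scored at the token level against the human-annotated spans; a minimal sketch under that convention is shown below (it mirrors common practice in the cited works rather than a specific released script):

```python
def rationale_f1(selected, annotated):
    """Token-level precision/recall/F1 between selected and annotated rationale indices."""
    selected, annotated = set(selected), set(annotated)
    overlap = len(selected & annotated)
    precision = overlap / len(selected) if selected else 0.0
    recall = overlap / len(annotated) if annotated else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```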
| 196 |
+
|
| 197 |
+
# 5.2 Performance Comparison
|
| 198 |
+
|
| 199 |
+
Results on BeerAdvocate. As shown in Table 1, our proposed method AGR outperforms all eight baselines on all three aspects of the BeerAdvocate dataset. Furthermore, in the sparsity experiments (Table 2), AGR consistently outperforms the latest state-of-the-art results, affirming its effectiveness for selective rationalization.
|
| 200 |
+
|
| 201 |
+
Results on HotelReview. Table 3 shows that our model once again obtains the best performance
|
| 202 |
+
|
| 203 |
+
Table 4: Examples of generated rationales. Human-annotated rationales are underlined. Rationales from three models are highlighted in blue and are denoted as ${Z}_{1},{Z}_{2}$ and ${Z}_{3}$ respectively.
|
| 204 |
+
|
| 205 |
+
<table><tr><td>FR (2022)</td><td>MGR (2023b)</td><td>AGR (Ours)</td></tr><tr><td>Aspect: Beer-Appearance
|
| 206 |
+
Label: Positive, Pred: Positive
|
| 207 |
+
Text: i picked this beer up on a whim as i was in the mood for a good coffee stout and the siren-like figure somehow told me this is the beer for you . a bit freaky , but i went with it . i was impressed from the very first pour . like any stout , the color is a dark molasses black . but … the head was thick and dense with good retention . the coffee aroma was intense ! the roasted goodness almost overwhelms my sense of smell .the roasted coffee flavors are the first things that i could taste along with hints of chocolate . however , i can tell there ’s more complexity here than my palette can decipher . the coffee flavors bring bitterness but it ’s not over powering as the sweetness of the malt cuts the bitterness quite nicely the beer has carbonation but once the bubbles have escaped the beer gives a creamy , velvety feel and finish . the alcohol was very well hidden in this beer which is scary …</td><td>Aspect: Beer-Appearance
|
| 208 |
+
Label: Positive, Pred: Positive
|
| 209 |
+
Text: i picked this beer up on a whim as i was in the mood for a good coffee stout and the siren-like figure somehow told me this is the beer for you . a bit freaky , but i went with it . i was impressed from the very first pour . like any stout , the color is a dark molasses black . but … the head was thick and dense with good retention . the caffeine aroma was intense ! the roasted goodness almost overwhelms my sense of smell .the roasted coffee flavors are the first things that i could taste along with hints of chocolate . however , i can tell there ’s more complexity here than my palette can decipher . the coffee flavors bring bitterness but it ’s not over powering as the sweetness of the malt cuts the bitterness quite nicely the beer has carbonation but once the bubbles have escaped the beer gives a creamy , velvety feel and finish . the alcoholwas very well hidden in this beer which is scary …</td><td>Aspect: Beer-Appearance
|
| 210 |
+
Label: Positive, Pred: Positive
|
| 211 |
+
Text: i picked this beer up on a whim as i was in the mood for a good coffee stout and the siren-like figure somehow told me this is the beer for you . a bit freaky , but i went with it . i was impressed from the very first pour . like any stout , the color is a dark molasses black . but … theHead was thick and dense with good retention . the caffeine aroma was intense ! the roasted goodness almost overwhelms my sense of smell .the roasted coffee flavors are the first things that i could taste along with hints of chocolate . however , i can tell there ’s more complexity here than my palette can decipher . the coffee flavors bring bitterness but it ’s not over powering as the sweetness of the malt cuts the bitterness quite nicely the beer has carbonation but once the bubbles have escaped the beer gives a creamy . velvety feel and finish . the alcohol was very well hidden in this beer which is scary …</td></tr></table>
|
| 212 |
+
|
| 213 |
+
<table><tr><td rowspan="2">Methods</td><td colspan="4">Appearance</td></tr><tr><td>S</td><td>P</td><td>R</td><td>F1</td></tr><tr><td>AGR</td><td>20</td><td>83.7</td><td>87.5</td><td>85.6</td></tr><tr><td>-w/o causal.</td><td>20</td><td>81.5</td><td>87.8</td><td>84.5</td></tr><tr><td>-w/o embedd.</td><td>20</td><td>81.9</td><td>86.9</td><td>84.3</td></tr><tr><td>-w/o both</td><td>20</td><td>74.3</td><td>85.2</td><td>79.4</td></tr></table>
|
| 214 |
+
|
| 215 |
+
across all aspects of the dataset consistently.
|
| 216 |
+
|
| 217 |
+
Ablation Studies. To further verify the effectiveness of AGR, we conduct ablation experiments. As depicted in Table 5, removing either the optimized objective of causal effectiveness (referred to as causal.), the pretrained agent embedding (referred to as embedd.), or both results in a notable decline in AGR's performance, underscoring the critical roles played by the proposed key components of the AGR method.
|
| 218 |
+
|
| 219 |
+
Further Analyses. Firstly, we compare AGR with FR and MGR and provide visualized examples. We can observe from Table 4 that although all three methods are able to focus on the appearance aspect, FR and MGR still exhibit some degeneration (since the selected rationale is still some distance from the target prediction). However, AGR utilizes causal calculus to capture the causal variations between $Z_{1}$ and $Z_{2}$, as well as between $Z_{2}$ and $Z_{3}$, with respect to the target prediction,
|
| 220 |
+
|
| 221 |
+
thereby gradually mitigating this degeneration during training. Appendix C presents more visualized examples. Secondly, similar to Liu et al. (2023b), we also compare the complexity of AGR with other models. As shown in Table 6, the complexity of AGR is somewhat improved compared with the latest work; however, there is still room for further improvement, which will be a key focus of future research.
|
| 222 |
+
|
| 223 |
+
Table 5: Ablation studies on the BeerAdvocate.
|
| 224 |
+
|
| 225 |
+
<table><tr><td></td><td>RNP</td><td>FR</td><td>AGR</td><td>CAR</td></tr><tr><td>modules parameters</td><td>1gen+1pred 2×</td><td>1gen+1pred 2×</td><td>1gen+1pred+1agent 3×</td><td>1gen+2pred 3×</td></tr><tr><td></td><td>DARE</td><td>CAR</td><td>DMR</td><td>MGR</td></tr><tr><td>modules parameters</td><td>1gen+1pred+guider 3×</td><td>1gen+2pred 3×</td><td>1gen+3pred 4×</td><td>3gen+1pred 4×</td></tr></table>
|
| 226 |
+
|
| 227 |
+
Table 6: The complexity of different models. "gen": generator. "pred": predictor.
|
| 228 |
+
|
| 229 |
+
# 6 Conclusion
|
| 230 |
+
|
| 231 |
+
In this paper, we propose AGR, a reinforced causal agent-based rationalization approach to guide the cooperative game optimization during rationale training. Our theoretical insights underscore the necessity of this guidance signal for accurate predictions. Empirical evaluations on two widely-used benchmarks indicate the effectiveness of our proposed approach, surpassing existing state-of-the-art methods for selective rationalization.
|
| 232 |
+
|
| 233 |
+
# Limitations
|
| 234 |
+
|
| 235 |
+
There are still some limitations that need further improvement in the future. Firstly, optimizing the cooperative game of rationalization during training is of great significance to model performance, but how to search more efficiently for meaningful actions within a larger search space for good rationales remains a direction to explore. Secondly, this work does not involve data-level debiasing techniques; considering such techniques may be a good way to further improve the results. In addition, as the latest research (Chen et al., 2022; Liu et al., 2023a,b) has shown, it is still a challenging task to finetune pretrained language models within the cooperative game framework. Therefore, how to incorporate the cooperative framework with (large) language models is a research interest.
|
| 236 |
+
|
| 237 |
+
# Ethics Statement
|
| 238 |
+
|
| 239 |
+
This paper does not involve the presentation of a new dataset or the use of demographic or identity information.
|
| 240 |
+
|
| 241 |
+
# Acknowledgements
|
| 242 |
+
|
| 243 |
+
We would like to thank all the anonymous reviewers for their valuable feedback. This work was supported by the National Natural Science Foundation of China (Nos.62376144, 62272285, 62076155) and the Science and Technology Cooperation and Exchange Special Project of Shanxi Province (No.202204041101016).
|
| 244 |
+
|
| 245 |
+
# References
|
| 246 |
+
|
| 247 |
+
Yujia Bao, Shiyu Chang, Mo Yu, and Regina Barzilay. 2018. Deriving machine attention from human rationales. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1903-1913, Brussels, Belgium. Association for Computational Linguistics.
|
| 248 |
+
Jasmijn Bastings, Wilker Aziz, and Ivan Titov. 2019. Interpretable neural predictions with differentiable binary variables. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2963-2977, Florence, Italy. Association for Computational Linguistics.
|
| 249 |
+
Shiyu Chang, Yang Zhang, Mo Yu, and Tommi Jaakkola. 2019. A game theoretic approach to class-wise selective rationalization. Advances in neural information processing systems, 32.
|
| 250 |
+
Shiyu Chang, Yang Zhang, Mo Yu, and Tommi Jaakkola. 2020. Invariant rationalization. In International
|
| 251 |
+
|
| 252 |
+
Conference on Machine Learning, pages 1448-1458. PMLR.
|
| 253 |
+
Howard Chen, Jacqueline He, Karthik Narasimhan, and Danqi Chen. 2022. Can rationalization improve robustness? In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3792-3805, Seattle, United States. Association for Computational Linguistics.
|
| 254 |
+
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar. Association for Computational Linguistics.
|
| 255 |
+
Nicola De Cao, Michael Sejr Schlichtkrull, Wilker Aziz, and Ivan Titov. 2020. How do decisions emerge across layers in neural models? interpretation with differentiable masking. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3243-3255, Online. Association for Computational Linguistics.
|
| 256 |
+
Zhiying Deng, Jianjun Li, Zhiqiang Guo, and Guohui Li. 2023. Multi-aspect interest neighbor-augmented network for next-basket recommendation. ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5.
|
| 257 |
+
Yongfeng Huang, Yujun Chen, Yulun Du, and Zhilin Yang. 2021. Distribution matching for rationalization. In AAAI Conference on Artificial Intelligence.
|
| 258 |
+
Diederik Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations (ICLR), San Diego, CA, USA.
|
| 259 |
+
Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 107-117, Austin, Texas. Association for Computational Linguistics.
|
| 260 |
+
Wei Liu, Haozhao Wang, Jun Wang, Zhiying Deng, YuanKai Zhang, Cheng Wang, and Ruixuan Li. 2023a. Enhancing the rationale-input alignment for self-explaining rationalization. arXiv preprint arXiv:2312.04103.
|
| 261 |
+
Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Xinyang Li, YuanKai Zhang, and Yang Qiu. 2023b. MGR: Multi-generator based rationalization. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 12771-12787, Toronto, Canada. Association for Computational Linguistics.
|
| 262 |
+
|
| 263 |
+
Wei Liu, Haozhao Wang, Jun Wang, Ruixuan Li, Chao Yue, and YuanKai Zhang. 2022. Fr: Folded rationalization with a unified encoder. Advances in Neural Information Processing Systems, 35:6954-6966.
|
| 264 |
+
Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Zhiying Deng, YuanKai Zhang, and Yang Qiu. 2023c. D-separation for causal self-explanation. In Thirty-seventh Conference on Neural Information Processing Systems.
|
| 265 |
+
Wei Liu, Jun Wang, Haozhao Wang, Ruixuan Li, Yang Qiu, Yuankai Zhang, Jie Han, and Yixiong Zou. 2023d. Decoupled rationalization with asymmetric learning rates: A flexible lipschitz restraint. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 1535-1547.
|
| 266 |
+
Julian McAuley, Jure Leskovec, and Dan Jurafsky. 2012. Learning attitudes and attributes from multi-aspect reviews. 2012 IEEE 12th International Conference on Data Mining, pages 1020-1025.
|
| 267 |
+
Bhargavi Paranjape, Mandar Joshi, John Thickstun, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020. An information bottleneck approach for controlling conciseness in rationale extraction. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 1938-1952, Online. Association for Computational Linguistics.
|
| 268 |
+
Judea Pearl. 2009. Causality. Cambridge university press.
|
| 269 |
+
Judea Pearl, Madelyn Glymour, and Nicholas P Jewell. 2016. Causal inference in statistics: A primer. John Wiley & Sons.
|
| 270 |
+
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.
|
| 271 |
+
Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour. 1999. Policy gradient methods for reinforcement learning with function approximation. Advances in neural information processing systems, 12.
|
| 272 |
+
Hongning Wang, Yue Lu, and Chengxiang Zhai. 2010. Latent aspect rating analysis on review text data: A rating regression approach. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '10, page 783-792, New York, NY, USA. Association for Computing Machinery.
|
| 273 |
+
Mo Yu, Shiyu Chang, Yang Zhang, and Tommi S Jaakkola. 2019. Rethinking cooperative rationalization: Introspective extraction and complement control. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing.
|
| 274 |
+
|
| 275 |
+
Mo Yu, Yang Zhang, Shiyu Chang, and Tommi Jaakkola. 2021. Understanding interlocking dynamics of cooperative rationalization. Advances in Neural Information Processing Systems, 34:12822-12835.
|
| 276 |
+
Hao Yuan, Lei Cai, Xia Hu, Jie Wang, and Shuiwang Ji. 2020. Interpreting image classifiers by generating discrete masks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(4).
|
| 277 |
+
Linan Yue, Qi Liu, Yichao Du, Yanqing An, Li Wang, and Enhong Chen. 2022. Dare: Disentanglement-augmented rationale extraction. Advances in Neural Information Processing Systems, 35:26603-26617.
|
| 278 |
+
Linan Yue, Qi Liu, Li Wang, Yanqing An, Yichao Du, and Zhenya Huang. 2023. Interventional rationalization. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 11404-11418, Singapore. Association for Computational Linguistics.
|
| 279 |
+
Wenbo Zhang, Tong Wu, Yunlong Wang, Yong Cai, and Hengrui Cai. 2023. Towards trustworthy explanation: on causal rationalization. In Proceedings of the 40th International Conference on Machine Learning. JMLR.org.
|
| 280 |
+
|
| 281 |
+
# A Proof of Lemma 1
|
| 282 |
+
|
| 283 |
+
Let $X, Z, Y$, and $\mathcal{A}$ be random variables, where $\mathcal{A}$ is drawn from the distribution of $X$. According to Section 2, to obtain a good predictor, we have
|
| 284 |
+
|
| 285 |
+
$$
|
| 286 |
+
\min _ {\theta_ {g}, \theta_ {p}} \mathcal {H} (Y, \hat {Y}) = \min _ {\theta_ {g}, \theta_ {p}} \mathcal {H} (Y, f _ {P} (Z)), \tag {15}
|
| 287 |
+
$$
|
| 288 |
+
|
| 289 |
+
where $Z = f_{G}(X)$. This means that we need to minimize $H(Y,Z)$ (Liu et al., 2023b), i.e., to reduce more uncertainty and better indicate the label $Y$. We assume that there exists a variable $\mathcal{A}$ that can reduce the uncertainty of learning $Y$; our goal is then to make $H(Y,\mathcal{A})\leq H(Y,Z)$.
|
| 290 |
+
|
| 291 |
+
According to the mutual information formula, we can obtain:
|
| 292 |
+
|
| 293 |
+
$$
|
| 294 |
+
H (Y) - H (Y, \mathcal {A}) \geq H (Y) - H (Y, Z), \tag {16}
|
| 295 |
+
$$
|
| 296 |
+
|
| 297 |
+
so,
|
| 298 |
+
|
| 299 |
+
$$
|
| 300 |
+
I (Y, \mathcal {A}) \geq I (Y, Z). \tag {17}
|
| 301 |
+
$$
|
| 302 |
+
|
| 303 |
+
Next, since we have $X = \{Z, X \backslash Z\}$ where $X \backslash Z$ denotes the text derived from $X$ and unrelated to the rationale, so we can obtain mutual information between $X$ and $Y$ ,
|
| 304 |
+
|
| 305 |
+
$$
|
| 306 |
+
\begin{array}{l} I (Y; X) = I (Y; \{Z, X \backslash Z \}) \\ = I (Y; Z) + I (Y; X \backslash Z | Z) \tag {18} \\ \end{array}
|
| 307 |
+
$$
|
| 308 |
+
|
| 309 |
+
According to the non-negativity of mutual information, we have $I(Y;X\backslash Z|Z)\geq 0$ , so
|
| 310 |
+
|
| 311 |
+
$$
|
| 312 |
+
I (Y, X) \geq I (Y, Z) \tag {19}
|
| 313 |
+
$$
|
| 314 |
+
|
| 315 |
+
Further, we denote $I(Y,X) = \varepsilon_0 \geq \varepsilon_1 \geq I(Y,Z) \geq \varepsilon_2$ , where $\varepsilon_1$ and $\varepsilon_2$ indicate the upper and lower bounds of $I(Y,Z)$ , respectively.
|
| 316 |
+
|
| 317 |
+
Therefore, we can obtain that when $\mathcal{A} = X$ , the equation $I(Y,\mathcal{A}) = \varepsilon_0 \geq \varepsilon_1 \geq I(Y,Z)$ is satisfied. That is to say, a solution for $\mathcal{A}$ exists, and $X$ is a solution of $\mathcal{A}$ .
|
| 318 |
+
|
| 319 |
+
The proof of Lemma 1 is completed.
|
| 320 |
+
|
| 321 |
+
# B Experiment Details
|
| 322 |
+
|
| 323 |
+
# B.1 Baselines
|
| 324 |
+
|
| 325 |
+
We compare AGR with the following baselines: RNP (2016), the original rationale sampling method.
|
| 326 |
+
|
| 327 |
+
HardKuma (2019), a Kumaraswamy-distribution-based sampling method.
|
| 328 |
+
|
| 329 |
+
CAR (2019), a game theoretic-based approach to class-dependent rationalization.
|
| 330 |
+
|
| 331 |
+
Information Bottleneck (IB) (2020), a model utilizing IB objective for balancing performance and rationale length.
|
| 332 |
+
|
| 333 |
+
INVRAT (2020), a method that introduces an environment-agnostic predictor.
|
| 334 |
+
|
| 335 |
+
<table><tr><td rowspan="2" colspan="2">Datasets</td><td colspan="2">Train</td><td colspan="2">Dev</td><td colspan="2">Annotation</td></tr><tr><td>Pos</td><td>Neg</td><td>Pos</td><td>Neg</td><td>Pos</td><td>Neg</td></tr><tr><td rowspan="3">BeerAdvocate</td><td>Appearance</td><td>202385</td><td>12897</td><td>28488</td><td>1318</td><td>923</td><td>13</td></tr><tr><td>Aroma</td><td>172299</td><td>30564</td><td>24494</td><td>3396</td><td>848</td><td>29</td></tr><tr><td>Palate</td><td>176038</td><td>27639</td><td>24837</td><td>3203</td><td>785</td><td>20</td></tr><tr><td rowspan="3">HotelReview</td><td>Location</td><td>7236</td><td>7236</td><td>906</td><td>906</td><td>104</td><td>96</td></tr><tr><td>Service</td><td>50742</td><td>50742</td><td>6344</td><td>6344</td><td>101</td><td>99</td></tr><tr><td>Cleanliness</td><td>75049</td><td>75049</td><td>9382</td><td>9382</td><td>99</td><td>101</td></tr></table>
|
| 336 |
+
|
| 337 |
+
Table 7: Statistics of datasets used in this paper.
|
| 338 |
+
|
| 339 |
+
DMR (2021), which proposes a teacher-student distillation framework to align input distribution.
|
| 340 |
+
|
| 341 |
+
A2R (2021), a method that introduces a soft rationale to the predictor.
|
| 342 |
+
|
| 343 |
+
DARE (2022), which introduces a guider into the predictor to encapsulate more information from the input.
|
| 344 |
+
|
| 345 |
+
FR (2022), a method using a unified encoder for the generator and predictor.
|
| 346 |
+
|
| 347 |
+
Inter-RAT (2023), which develops an interventional rationalization to discover the causal rationales.
|
| 348 |
+
|
| 349 |
+
MGR (2023b), a method leveraging multiple generators to select rationales.
|
| 350 |
+
|
| 351 |
+
# B.2 Datasets
|
| 352 |
+
|
| 353 |
+
Following previous research (Huang et al., 2021; Yue et al., 2023; Liu et al., 2023b), we obtain the BeerAdvocate and HotelReview datasets. BeerAdvocate (McAuley et al., 2012) and HotelReview (Wang et al., 2010) are publicly available from existing work. The specific splits of the two datasets are presented in Table 7.
|
| 354 |
+
|
| 355 |
+
# B.3 Implementation
|
| 356 |
+
|
| 357 |
+
To fairly compare with previous works and validate the effectiveness of the proposed approach, we utilize 100-dimensional GloVe (Pennington et al., 2014) word embeddings and a 200-dimensional GRU (Cho et al., 2014) encoder to build the generator $f_{G}(\cdot)$ in the AGR architecture. The generator $f_{G}(\cdot)$ then follows Equation 1 for cooperative optimization with the predictor $f_{P}(\cdot)$. Meanwhile, we construct the policy network $q_{\phi}(\cdot)$ to collaborate with the generator $f_{G}(\cdot)$ and predictor $f_{P}(\cdot)$ to learn candidate actions in different training states, including the representation learning of action candidates and the sampling of actions. We use Adam (Kingma and Ba, 2015) as the optimizer.
|
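A sketch of what such a generator could look like with 100-dimensional embeddings and a 200-dimensional bidirectional GRU is given below; the per-token mask head and the Gumbel-softmax used for differentiable binary selection are illustrative assumptions, since the excerpt does not specify the sampling mechanism.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hid_dim=200):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)   # would be initialized from GloVe
        self.gru = nn.GRU(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        self.mask_head = nn.Linear(2 * hid_dim, 2)           # per-token keep/drop logits

    def forward(self, token_ids, tau=1.0):
        emb = self.embedding(token_ids)
        hidden, _ = self.gru(emb)
        mask = F.gumbel_softmax(self.mask_head(hidden), tau=tau, hard=True)[..., 1]  # binary M
        return mask, mask.unsqueeze(-1) * emb                # rationale Z = M ⊙ X in embedding space
```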
| 358 |
+
|
| 359 |
+
# C Additional Examples
|
| 360 |
+
|
| 361 |
+
As shown in Table 8, we provide more examples of selected rationales for the Beer-Aroma and Hotel-Location aspects, where the sparsity is set to about $20\%$ and $10\%$, respectively.
|
| 362 |
+
|
| 363 |
+
Table 8: Examples of generated rationales. Human-annotated rationales are underlined. Rationales from three models are highlighted in blue, respectively.
|
| 364 |
+
|
| 365 |
+
<table><tr><td>FR (2022)</td><td>MGR (2023b)</td><td>AGR (Ours)</td></tr><tr><td>Aspect: Beer-Aroma
|
| 366 |
+
Label: Positive, Pred: Positive
|
| 367 |
+
Text: had this at bocktown with wvbeergeek and jasonm , came in a 750ml caged and corked the corked banged out of sight as soon as the cage was undone .seved into a tulip glass between the 3 of us hazy , deep copper , mahagony , hard to get a really good look at the color at bocktown . off white head hard to pour without a glass full of fluffy everlasting head . left lot of thick webbing all over the inside of the glass , sticky looking . great aroma can't seem to keep it away from the nose
|
| 368 |
+
. sweet , dark , tart fruit notes , some sour cherry , earthy , spicy , with hints of currants , clove , allspice also nutty , with some belgium yeast . lots of sweet booziness from the start , vinious , dark fruityness with plum notes . the fruityness was remisent of dried fruit.lots of spicyness lots of clove.also nutty and earthy . finished clean , spicy and very sugary . syrupy , big full mouthfeel , smooth and very creamy with lots of jucyness . a beer to sip , but very enjoyable , wish i had the whole bottle to drink would be no problem . a must try beer if you like this style . seems like a beer that would age very well .</td><td>Aspect: Beer-Aroma
|
| 369 |
+
Label: Positive, Pred: Positive
|
| 370 |
+
Text: had this at bocktown with wvbeergeek and jasonm , came in a 750ml caged and corked the corked banged out of sight as soon as the cage was undone . .seved into a tulip glass between the 3 of us hazy , deep copper , mahagony , hard to get a really good look at the color at bocktown . off white head hard to pour without a glass full of fluffy everlasting head . left lot of thick webbing all over the inside of the glass , sticky looking . great aroma can't seem to keep it away from the nose
|
| 371 |
+
. sweet , dark , tart fruit notes , some sour cherry , earthy . spicy , with hints of currants , clove , allspice also nutty , with some belgium yeast . lots of sweet booziness from the start , vinious , dark fruityness with plum notes . the fruityness was remisent of dried fruit.lots of spicyness lots of clove.also nutty and earthy . finished clean , spicy and very sugary . syrupy , big full mouthfeel , smooth and very creamy with lots ofjucyness . a beer to sip , but very enjoyable , wish i had the whole bottle to drink would be no problem . a must try beer if you like this style . seems like a beer that would age very well .</td><td>Aspect: Beer-Aroma
|
| 372 |
+
Label: Positive, Pred: Positive
|
| 373 |
+
Text: had this at bocktown with wvbeergeek and jasonm , came in a 750ml caged and corked the corked banged out of light as soon as the cage was undone . .seved into a tulip glass between the 3 of us hazy , deep copper , mahagony , hard to get a really good look at the color at bocktown . off white head hard to pour without a glass full of fluffy everlasting head . left lot of thick webbing all over the inside of the glass , sticky looking . great aroma can't seem to keep it away from the nose
|
| 374 |
+
. sweet , dark , tart fruitNotes , some sour cherry , earthy , spicy , with hints of currants , clove , allspice also nutty , with some belgium yeast . lots of sweet booziness from the start , vinious , dark fruityness with plum notes . the fruityness was remisent of dried fruit.lots of spicyness lots of clove.also nutty and earthy . finished clean , spicy and very sugary . syrupy , big full mouthfeel, smooth and very creamy with lots ofjucyness . a beer to sip , but very enjoyable , wish i had the whole bottle to drink would be no problem . a must try beer if you like this style . seems like a beer that would age very well .</td></tr><tr><td>Aspect: Hotel-Location
|
| 375 |
+
Label: Negative, Pred: Negative
|
| 376 |
+
Text: we stayed at the dona palace for 3 nights and while the location is central , it is also more crowded and noisy . the windows of the room we stayed in did not have adequate sound proofing , noise from the canal and outside would wake us up early in the morning . the breakfast was a nice bonus though , the two waitresses serving the room were always gracious and helpful . the front desk personnel however were rude and abrupt , so that was n't pleasant to deal with . the rooms are dated and had a musty smell . the bed was uncomfortable , blankets were rough , and the shower drain did not work very well . overall , i probably wound not stay here again .</td><td>Aspect: Hotel-Location
|
| 377 |
+
Label: Negative, Pred: Negative
|
| 378 |
+
Text: we stayed at the dona palace for 3 nights and while the location is central , it is also more crowded and noisy . the windows of the room we stayed in did not have adequate sound proofing , noise from the canal and outside would wake us up early in the morning . the breakfast was a nice bonus though , the two waitresses serving the room were always gracious and helpful . the front desk personnel However were rude and abrupt , so that was n't pleasant to deal with . the rooms are dated and had a musty smell . the bed was uncomfortable , blankets were rough , and the shower drain did not work very well . overall , i probably wound not stay here again .</td><td>Aspect: Hotel-Location
|
| 379 |
+
Label: Negative, Pred: Negative
|
| 380 |
+
Text: we stayed at the dona palace for 3 nights and while the location is central , it is also more crowded and noisy . the windowsof the room we stayed in did not have adequate sound proofing , noise from the canal and outside would wake us up early in the morning . the breakfast was a nice bonus though , the two waitresses serving the room were always gracious and helpful . the front desk personnel however were rude and abrupt , so that was n't pleasant to deal with . the rooms are dated and had a musty smell . the bed was uncomfortable , blankets were rough , and the shower drain did not work very well . Overall , i probably wound not stay here again .</td></tr></table>
|
2024/AGR_ Reinforced Causal Agent-Guided Self-explaining Rationalization/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f25c430077304a39d217579bedc6b10d3b93c5d497cf1c577e9de16b99ae360e
|
| 3 |
+
size 1196587
|
2024/AGR_ Reinforced Causal Agent-Guided Self-explaining Rationalization/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
2024/ATLAS_ Improving Lay Summarisation with Attribute-based Control/dabae923-8653-40d5-a808-ef509db0f2cc_content_list.json
ADDED
|
@@ -0,0 +1,1236 @@
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "ATLAS: Improving Lay Summarisation with Attribute-based Control",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
136,
|
| 8 |
+
87,
|
| 9 |
+
860,
|
| 10 |
+
109
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Zhihao Zhang $^{1}$ , Tomas Goldsack $^{2}$ , Carolina Scarton $^{2}$ , Chenghua Lin $^{3*}$ $^{1}$ College of Economics and Management, Beijing University of Technology, China, $^{2}$ Department of Computer Science, University of Sheffield, UK \n $^{3}$ Department of Computer Science, The University of Manchester, UK",
|
| 17 |
+
"bbox": [
|
| 18 |
+
163,
|
| 19 |
+
123,
|
| 20 |
+
838,
|
| 21 |
+
192
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "zhhzhang@bjut.edu.cn {tgoldsack1, c.scarton}@sheffield.ac.uk chenghua.lin@manchester.ac.uk",
|
| 28 |
+
"bbox": [
|
| 29 |
+
223,
|
| 30 |
+
195,
|
| 31 |
+
776,
|
| 32 |
+
225
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "Abstract",
|
| 39 |
+
"text_level": 1,
|
| 40 |
+
"bbox": [
|
| 41 |
+
260,
|
| 42 |
+
252,
|
| 43 |
+
339,
|
| 44 |
+
268
|
| 45 |
+
],
|
| 46 |
+
"page_idx": 0
|
| 47 |
+
},
|
| 48 |
+
{
|
| 49 |
+
"type": "text",
|
| 50 |
+
"text": "Automatic scientific lay summarisation aims to produce summaries of scientific articles that are comprehensible to non-expert audiences. However, previous work assumes a one-size-fits-all approach, where the content and style of the produced summary are entirely dependent on the data used to train the model. In practice, audiences with different goals and levels of expertise will have specific needs, impacting what content should appear in a lay summary and how it should be presented. Aiming to address this disparity, we propose ATLAS, a novel abstractive summarisation approach that can control various properties that contribute to the overall \"layness\" of the generated summary using targeted control attributes. We evaluate ATLAS on a combination of biomedical lay summarisation datasets, where it outperforms state-of-the-art baselines using both automatic and human evaluations. Additional analyses provided on the discriminatory power and emergent influence of our selected controllable attributes further attest to the effectiveness of our approach.",
|
| 51 |
+
"bbox": [
|
| 52 |
+
141,
|
| 53 |
+
279,
|
| 54 |
+
460,
|
| 55 |
+
620
|
| 56 |
+
],
|
| 57 |
+
"page_idx": 0
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"type": "text",
|
| 61 |
+
"text": "1 Introduction",
|
| 62 |
+
"text_level": 1,
|
| 63 |
+
"bbox": [
|
| 64 |
+
114,
|
| 65 |
+
631,
|
| 66 |
+
260,
|
| 67 |
+
646
|
| 68 |
+
],
|
| 69 |
+
"page_idx": 0
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"type": "text",
|
| 73 |
+
"text": "Lay summarisation is defined as producing a summary of a scientific article that is comprehensible to non-experts (King et al., 2017). Recent work has shown that, when compared to technical abstracts, lay summaries typically are more readable (lexically and syntactically), more abstractive, and contain more background information, enabling a non-technical reader to better understand their contents (Luo et al., 2022; Cohen et al., 2021; Goldsack et al., 2023b). However, the extent to which these attributes are required within a lay summary depends largely on the specific needs of the reader. For example, a scientist from a related field will require less background information to understand an article's contents than an entirely non-technical",
|
| 74 |
+
"bbox": [
|
| 75 |
+
112,
|
| 76 |
+
657,
|
| 77 |
+
489,
|
| 78 |
+
897
|
| 79 |
+
],
|
| 80 |
+
"page_idx": 0
|
| 81 |
+
},
|
| 82 |
+
{
|
| 83 |
+
"type": "text",
|
| 84 |
+
"text": "reader, but they might still require domain-specific jargon to be simplified or explained. Despite its obvious benefits, to our knowledge, no work has yet explored how we can enable such fine-grained control over comprehensibility-related aspects for lay summary generation.",
|
| 85 |
+
"bbox": [
|
| 86 |
+
507,
|
| 87 |
+
253,
|
| 88 |
+
884,
|
| 89 |
+
350
|
| 90 |
+
],
|
| 91 |
+
"page_idx": 0
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"type": "text",
|
| 95 |
+
"text": "In this paper, we propose ATLAS (ATtribute-controlled LAy Summarization), a novel scientific summarisation approach that aims to control four attributes targeting distinct properties contributing to the overall \"layness\" of the generated summary, thus allowing it to cater to the specific needs of different audiences. Although recent attempts at text simplification and story generation have had success influencing the style (Martin et al., 2020; Kong et al., 2021; Sheang and Saggion, 2021) and content (Kong et al., 2021; Tang et al., 2024) of generated text using fine-grained controllable attributes, no work to our knowledge has explored this for scientific summarisation. Luo et al. (2022) recently addressed the task of readability-controlled scientific summarisation, however, this is only done at a binary level, training a model to produce either a technical or non-technical summary based on a single control token.",
|
| 96 |
+
"bbox": [
|
| 97 |
+
507,
|
| 98 |
+
376,
|
| 99 |
+
884,
|
| 100 |
+
682
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "text",
|
| 106 |
+
"text": "Our approach innovates by enabling a greater degree of controllability through the flexible handling of multiple attributes, allowing it to produce more diverse summaries and better address the specific needs of different audiences. Our results show that ATLAS outperforms state-of-the-art baselines in both automatic and human evaluations across three summary types with varying levels of technicality. Additional analyses confirm that attribute control positively influences performance, and suggest the selected control attributes are able to effectively capture the difference between technical and nontechnical summaries.",
|
| 107 |
+
"bbox": [
|
| 108 |
+
507,
|
| 109 |
+
709,
|
| 110 |
+
885,
|
| 111 |
+
917
|
| 112 |
+
],
|
| 113 |
+
"page_idx": 0
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"type": "page_footnote",
|
| 117 |
+
"text": "* Corresponding author",
|
| 118 |
+
"bbox": [
|
| 119 |
+
141,
|
| 120 |
+
904,
|
| 121 |
+
295,
|
| 122 |
+
917
|
| 123 |
+
],
|
| 124 |
+
"page_idx": 0
|
| 125 |
+
},
|
| 126 |
+
{
|
| 127 |
+
"type": "page_number",
|
| 128 |
+
"text": "337",
|
| 129 |
+
"bbox": [
|
| 130 |
+
485,
|
| 131 |
+
927,
|
| 132 |
+
515,
|
| 133 |
+
940
|
| 134 |
+
],
|
| 135 |
+
"page_idx": 0
|
| 136 |
+
},
|
| 137 |
+
{
|
| 138 |
+
"type": "footer",
|
| 139 |
+
"text": "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 337-345 August 11-16, 2024 ©2024 Association for Computational Linguistics",
|
| 140 |
+
"bbox": [
|
| 141 |
+
95,
|
| 142 |
+
945,
|
| 143 |
+
900,
|
| 144 |
+
973
|
| 145 |
+
],
|
| 146 |
+
"page_idx": 0
|
| 147 |
+
},
|
| 148 |
+
{
|
| 149 |
+
"type": "text",
|
| 150 |
+
"text": "2 Methodology",
|
| 151 |
+
"text_level": 1,
|
| 152 |
+
"bbox": [
|
| 153 |
+
112,
|
| 154 |
+
84,
|
| 155 |
+
263,
|
| 156 |
+
99
|
| 157 |
+
],
|
| 158 |
+
"page_idx": 1
|
| 159 |
+
},
|
| 160 |
+
{
|
| 161 |
+
"type": "text",
|
| 162 |
+
"text": "As discussed in §1, ATLAS aims to control four targeted attributes. We use BART-base as the base model for ATLAS as it represents the state-of-the-art benchmark in previous lay summarisation works (Guo et al., 2021; Goldsack et al., 2022).",
|
| 163 |
+
"bbox": [
|
| 164 |
+
112,
|
| 165 |
+
110,
|
| 166 |
+
487,
|
| 167 |
+
189
|
| 168 |
+
],
|
| 169 |
+
"page_idx": 1
|
| 170 |
+
},
|
| 171 |
+
{
|
| 172 |
+
"type": "text",
|
| 173 |
+
"text": "Formally, each document $x = (x_{1}, x_{2}, \\ldots, x_{n})$ of length $n$ , where $x_{i}$ is the $i$ -th token, is prepended with a control token sequence $l$ such that $x = (l, x_{1}, x_{2}s, \\ldots, x_{n})$ . $l$ consists of our four selected control tokens, each of which targets distinct characteristics of the output summary that contributes to its overall comprehensibility. We describe each aspect below:",
|
| 174 |
+
"bbox": [
|
| 175 |
+
112,
|
| 176 |
+
191,
|
| 177 |
+
487,
|
| 178 |
+
319
|
| 179 |
+
],
|
| 180 |
+
"page_idx": 1
|
| 181 |
+
},
|
| 182 |
+
{
|
| 183 |
+
"type": "text",
|
| 184 |
+
"text": "Length (L) The length of the output summary in characters. A more lay audience may require a longer summary to aid comprehension.",
|
| 185 |
+
"bbox": [
|
| 186 |
+
112,
|
| 187 |
+
330,
|
| 188 |
+
487,
|
| 189 |
+
378
|
| 190 |
+
],
|
| 191 |
+
"page_idx": 1
|
| 192 |
+
},
|
| 193 |
+
{
|
| 194 |
+
"type": "text",
|
| 195 |
+
"text": "Readability (R) How easy it is to read the text. This is measured using the Flesh-Kincaid Grade Level (FKGL) metric, which estimates the reading grade level (US) required to understand the generated text based on the total number of sentences, words, and syllables present within it.",
|
| 196 |
+
"bbox": [
|
| 197 |
+
112,
|
| 198 |
+
388,
|
| 199 |
+
487,
|
| 200 |
+
483
|
| 201 |
+
],
|
| 202 |
+
"page_idx": 1
|
| 203 |
+
},
|
| 204 |
+
{
|
| 205 |
+
"type": "text",
|
| 206 |
+
"text": "Background information (BG) The percentage of sentences classified as containing primarily background information. Intuitively, a more lay audience will require greater levels of background information to contextualise an article.",
|
| 207 |
+
"bbox": [
|
| 208 |
+
112,
|
| 209 |
+
495,
|
| 210 |
+
489,
|
| 211 |
+
574
|
| 212 |
+
],
|
| 213 |
+
"page_idx": 1
|
| 214 |
+
},
|
| 215 |
+
{
|
| 216 |
+
"type": "text",
|
| 217 |
+
"text": "Content word entropy (CWE) The average entropy of content words. We hypothesise that jargon terms are likely to possess higher entropy values, thus lower average CWE is likely to be a property of more lay text. Since jargon terms are predominately nouns, we extract noun phrases as content words using CoreNLP library (Manning et al., 2014). We then follow Xiao et al. (2020) to calculate $I(x_{i})$ entropy of a given token $x_{i}$ as the negative logarithm of its generation probability $P(x_{i})$ , which is directly extracted from a pre-trained language model.",
|
| 218 |
+
"bbox": [
|
| 219 |
+
112,
|
| 220 |
+
586,
|
| 221 |
+
489,
|
| 222 |
+
778
|
| 223 |
+
],
|
| 224 |
+
"page_idx": 1
|
| 225 |
+
},
|
| 226 |
+
{
|
| 227 |
+
"type": "equation",
|
| 228 |
+
"text": "\n$$\nI \\left(x _ {i}\\right) = - \\log P \\left(x _ {i}\\right) \\tag {1}\n$$\n",
|
| 229 |
+
"text_format": "latex",
|
| 230 |
+
"bbox": [
|
| 231 |
+
223,
|
| 232 |
+
796,
|
| 233 |
+
487,
|
| 234 |
+
813
|
| 235 |
+
],
|
| 236 |
+
"page_idx": 1
|
| 237 |
+
},
|
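The extracted equation above defines the information content of a token as $I(x_i) = -\log P(x_i)$, with the probability taken from a pre-trained language model (the implementation details later in this file mention SciBERT). The following is a minimal, illustrative sketch of how such per-token values could be computed with a masked language model; the masking procedure, the checkpoint name, and the `token_information` helper are assumptions made for illustration, not the authors' code.

```python
import math

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Assumed checkpoint: the extracted text mentions SciBERT for entropy calculation.
MODEL_NAME = "allenai/scibert_scivocab_uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()


def token_information(sentence: str):
    """Return (token, I(x_i)) pairs with I(x_i) = -log P(x_i).

    P(x_i) is approximated by masking position i and reading the masked-LM
    probability assigned to the original token at that position.
    """
    input_ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    scores = []
    for i in range(1, len(input_ids) - 1):  # skip [CLS] and [SEP]
        masked = input_ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        prob = torch.softmax(logits, dim=-1)[input_ids[i]].item()
        scores.append((tokenizer.decode([input_ids[i]]), -math.log(max(prob, 1e-12))))
    return scores


# Averaging these values over noun-phrase (content-word) tokens would give the
# content word entropy (CWE) attribute for one summary.
```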
| 238 |
+
{
|
| 239 |
+
"type": "text",
|
| 240 |
+
"text": "During model training, true attribute values (as calculated on reference summaries) are used, allowing the model to learn to associate attribute values with summary properties. For all attributes, values are discretized into 10 fixed-width bins depending on their respective range in the train split (from",
|
| 241 |
+
"bbox": [
|
| 242 |
+
112,
|
| 243 |
+
822,
|
| 244 |
+
487,
|
| 245 |
+
917
|
| 246 |
+
],
|
| 247 |
+
"page_idx": 1
|
| 248 |
+
},
|
| 249 |
+
{
|
| 250 |
+
"type": "text",
|
| 251 |
+
"text": "minimum to maximum observed value), resulting in 10 unique control tokens for each attribute which are added to the vocabulary. For each attribute at test time, we use the most common bin value observed for reference summaries of the training set as attribute values.",
|
| 252 |
+
"bbox": [
|
| 253 |
+
507,
|
| 254 |
+
84,
|
| 255 |
+
882,
|
| 256 |
+
179
|
| 257 |
+
],
|
| 258 |
+
"page_idx": 1
|
| 259 |
+
},
|
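The two extracted paragraphs above describe discretizing each attribute into 10 fixed-width bins over the range observed in the training split, yielding 10 control tokens per attribute that are prepended to the input. Below is a small sketch of that binning-and-prefixing step; the bin-edge construction, the token naming scheme (`<L_4>` and similar), and the example values are assumptions for illustration rather than the paper's exact implementation.

```python
import numpy as np


def fixed_width_edges(train_values, n_bins: int = 10) -> np.ndarray:
    """Bin edges spanning the minimum..maximum value seen in the train split."""
    return np.linspace(min(train_values), max(train_values), n_bins + 1)


def control_token(value: float, edges: np.ndarray, name: str) -> str:
    """Map a raw attribute value to its fixed-width bin and render a control token."""
    idx = int(np.clip(np.digitize(value, edges[1:-1]), 0, len(edges) - 2))
    return f"<{name}_{idx}>"


# Invented training-set length values (in characters), for illustration only.
length_edges = fixed_width_edges([650.0, 900.0, 1450.0, 2100.0, 3300.0])

# Build the control prefix l and prepend it to the article text x.
prefix = " ".join([
    control_token(1800.0, length_edges, "L"),  # length attribute
    # tokens for R (readability), BG (background) and CWE follow the same pattern
])
model_input = prefix + " " + "Body of the scientific article ..."
print(model_input)
```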
| 260 |
+
{
|
| 261 |
+
"type": "text",
|
| 262 |
+
"text": "3 Experimental Setup",
|
| 263 |
+
"text_level": 1,
|
| 264 |
+
"bbox": [
|
| 265 |
+
507,
|
| 266 |
+
193,
|
| 267 |
+
717,
|
| 268 |
+
210
|
| 269 |
+
],
|
| 270 |
+
"page_idx": 1
|
| 271 |
+
},
|
| 272 |
+
{
|
| 273 |
+
"type": "text",
|
| 274 |
+
"text": "Data. We experiment on the biomedical lay summarisation datasets introduced in Goldsack et al. (2022), eLife (4.8k articles) and PLOS (27.5k articles), for which target lay summaries have been shown to contain different levels of \"layness\". Specifically, eLife's lay summaries have been characterized as longer, more readable, and more abstractive than those of PLOS, as well as being empirically observed to be suitable for a more lay audience. We, therefore, combine both of these datasets, allowing us to expose ATLAS to a greater variety of attribute values during training. For each article in the combined dataset, we train our ATLAS to produce both the technical abstract and lay summary, using our control attributes to differentiate between them.",
|
| 275 |
+
"bbox": [
|
| 276 |
+
507,
|
| 277 |
+
219,
|
| 278 |
+
884,
|
| 279 |
+
475
|
| 280 |
+
],
|
| 281 |
+
"page_idx": 1
|
| 282 |
+
},
|
| 283 |
+
{
|
| 284 |
+
"type": "text",
|
| 285 |
+
"text": "Evaluation. We employ several automatic metrics to evaluate the performance of ATLAS. In line with common summarisation practice, we calculate ROUGE-1,2, and L variants (Lin, 2004) and BERTScore (Zhang et al., 2019). We also measure Dale-Chall Readability Score, a metric that estimates US grade level based on the frequency of common words.",
|
| 286 |
+
"bbox": [
|
| 287 |
+
507,
|
| 288 |
+
487,
|
| 289 |
+
882,
|
| 290 |
+
615
|
| 291 |
+
],
|
| 292 |
+
"page_idx": 1
|
| 293 |
+
},
|
| 294 |
+
{
|
| 295 |
+
"type": "text",
|
| 296 |
+
"text": "Baselines. To enable fair comparison, we rerun many of the baseline approaches used by Goldsack et al. (2022) (which have the abstract included in the input) on the combined datasets. Specifically, we rerun the Lead-3, Lead-K, and oracle heuristic baselines; TextRank (Mihalcea and Tairau, 2004), LexRank (Erkan and Radev, 2004), and HipoRank (Dong et al., 2021) unsupervised models; and BART and $\\mathrm{BART}_{\\mathrm{Scaffold}}$ supervised models. Here, we use the transformer-based BART base model (Lewis et al., 2020), which we fine-tune on our own datasets. $\\mathrm{BART}_{\\mathrm{Scaffold}}$ is the recreation of a model from Goldsack et al. (2022) which is trained using a binary control token (<abs> or <lay>) to produce either an abstract or lay summary for an article. This model is equivalent to that pro",
|
| 297 |
+
"bbox": [
|
| 298 |
+
507,
|
| 299 |
+
626,
|
| 300 |
+
882,
|
| 301 |
+
883
|
| 302 |
+
],
|
| 303 |
+
"page_idx": 1
|
| 304 |
+
},
|
| 305 |
+
{
|
| 306 |
+
"type": "page_footnote",
|
| 307 |
+
"text": "To combine the datasets, we merge the training and validation sets. We evaluate on the test sets separately.",
|
| 308 |
+
"bbox": [
|
| 309 |
+
507,
|
| 310 |
+
892,
|
| 311 |
+
882,
|
| 312 |
+
917
|
| 313 |
+
],
|
| 314 |
+
"page_idx": 1
|
| 315 |
+
},
|
| 316 |
+
{
|
| 317 |
+
"type": "page_number",
|
| 318 |
+
"text": "338",
|
| 319 |
+
"bbox": [
|
| 320 |
+
485,
|
| 321 |
+
927,
|
| 322 |
+
515,
|
| 323 |
+
939
|
| 324 |
+
],
|
| 325 |
+
"page_idx": 1
|
| 326 |
+
},
|
| 327 |
+
{
|
| 328 |
+
"type": "image",
|
| 329 |
+
"img_path": "images/f78e1cb9372117e01faaf5f53a7b12fa3c6e981c94d03808d2a6c50994a76466.jpg",
|
| 330 |
+
"image_caption": [
|
| 331 |
+
"Figure 1: Visualisation of the density distributions of controllable attribute values for each summary type in the combined train split."
|
| 332 |
+
],
|
| 333 |
+
"image_footnote": [],
|
| 334 |
+
"bbox": [
|
| 335 |
+
117,
|
| 336 |
+
84,
|
| 337 |
+
319,
|
| 338 |
+
193
|
| 339 |
+
],
|
| 340 |
+
"page_idx": 2
|
| 341 |
+
},
|
| 342 |
+
{
|
| 343 |
+
"type": "image",
|
| 344 |
+
"img_path": "images/9fb02424d093fffbd85cad9f743c0c16083254c3e8d591684762e3f4dcd35399.jpg",
|
| 345 |
+
"image_caption": [],
|
| 346 |
+
"image_footnote": [],
|
| 347 |
+
"bbox": [
|
| 348 |
+
327,
|
| 349 |
+
85,
|
| 350 |
+
504,
|
| 351 |
+
193
|
| 352 |
+
],
|
| 353 |
+
"page_idx": 2
|
| 354 |
+
},
|
| 355 |
+
{
|
| 356 |
+
"type": "image",
|
| 357 |
+
"img_path": "images/453396035c9edb5787984555e1b2490efd944e7875374f365bf5ca891e4cd32d.jpg",
|
| 358 |
+
"image_caption": [],
|
| 359 |
+
"image_footnote": [],
|
| 360 |
+
"bbox": [
|
| 361 |
+
527,
|
| 362 |
+
85,
|
| 363 |
+
690,
|
| 364 |
+
193
|
| 365 |
+
],
|
| 366 |
+
"page_idx": 2
|
| 367 |
+
},
|
| 368 |
+
{
|
| 369 |
+
"type": "image",
|
| 370 |
+
"img_path": "images/d977ab6cc2b7a2159e3c95fe34204ddd2c873c9fc446bb05299a52be19131cb6.jpg",
|
| 371 |
+
"image_caption": [],
|
| 372 |
+
"image_footnote": [],
|
| 373 |
+
"bbox": [
|
| 374 |
+
705,
|
| 375 |
+
98,
|
| 376 |
+
880,
|
| 377 |
+
193
|
| 378 |
+
],
|
| 379 |
+
"page_idx": 2
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"type": "text",
|
| 383 |
+
"text": "posed by Luo et al. (2022), the only previous work on controllable lay summarisation. $^{2}$",
|
| 384 |
+
"bbox": [
|
| 385 |
+
110,
|
| 386 |
+
259,
|
| 387 |
+
485,
|
| 388 |
+
290
|
| 389 |
+
],
|
| 390 |
+
"page_idx": 2
|
| 391 |
+
},
|
| 392 |
+
{
|
| 393 |
+
"type": "text",
|
| 394 |
+
"text": "Finally, we include two baselines based on ChatGPT (3.5-turbo), so as to compare against an accessible and widely used method of controlling text generation (i.e., prompt engineering). Our first GPT baseline (GPT3.5-zs) uses the following zero-shot prompts: (i) \"Summarize the following article for an expert audience that is familiar with the technical aspects of the content\" to generate technical abstracts; (ii) \"Summarize the following article for a non-expert audience that has some familiarity with the technical aspects of the content\" to generate PLOS lay summaries, and (iii) \"Summarize the following article for a non-expert audience that has no familiarity with the technical aspects of the content\" to generate eLife lay summaries. Our second GPT baseline (GPT3.5-mdc) replicates the method of Turbitt et al. (2023), the best-performing team of the recent BioLaySumm shared task (Goldsack et al., 2023a). Based on in-context learning, this method dynamically selects the maximum number of input-output examples that fit in the context window (separated by the simple prompt \"Explanation:\") to generate lay summaries based on only the article abstract.",
|
| 395 |
+
"bbox": [
|
| 396 |
+
115,
|
| 397 |
+
292,
|
| 398 |
+
489,
|
| 399 |
+
677
|
| 400 |
+
],
|
| 401 |
+
"page_idx": 2
|
| 402 |
+
},
|
| 403 |
+
{
|
| 404 |
+
"type": "text",
|
| 405 |
+
"text": "Implementation Details. As mentioned in §2, we employ BART-base as our base model. We train our ATLAS for a maximum of 5 epochs on a GeForce GTX-1080Ti GPU, retaining the checkpoint with the best average ROUGE-1/2/L score on the validation set. We set the batch size to 1 and keep the $\\alpha$ scale factor (§2) at the default value of 0.2 from Kong et al. (2021).",
|
| 406 |
+
"bbox": [
|
| 407 |
+
112,
|
| 408 |
+
690,
|
| 409 |
+
487,
|
| 410 |
+
818
|
| 411 |
+
],
|
| 412 |
+
"page_idx": 2
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"type": "text",
|
| 416 |
+
"text": "For calculating control attributes, we use SciBERT (Beltagy et al., 2019) for entropy calculation, and we employ a BERT-based sequential classi",
|
| 417 |
+
"bbox": [
|
| 418 |
+
112,
|
| 419 |
+
820,
|
| 420 |
+
489,
|
| 421 |
+
869
|
| 422 |
+
],
|
| 423 |
+
"page_idx": 2
|
| 424 |
+
},
|
| 425 |
+
{
|
| 426 |
+
"type": "table",
|
| 427 |
+
"img_path": "images/8fdf577baeba099277f98d30c563056cd38a357fbeb484c2e8979a65fcde34b3.jpg",
|
| 428 |
+
"table_caption": [],
|
| 429 |
+
"table_footnote": [],
|
| 430 |
+
"table_body": "<table><tr><td>Summary type</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>Abstract</td><td>0.69</td><td>0.75</td><td>0.72</td></tr><tr><td>eLife-Lay</td><td>0.71</td><td>0.71</td><td>0.71</td></tr><tr><td>PLOS-Lay</td><td>0.73</td><td>0.66</td><td>0.71</td></tr></table>",
|
| 431 |
+
"bbox": [
|
| 432 |
+
532,
|
| 433 |
+
255,
|
| 434 |
+
858,
|
| 435 |
+
319
|
| 436 |
+
],
|
| 437 |
+
"page_idx": 2
|
| 438 |
+
},
|
| 439 |
+
{
|
| 440 |
+
"type": "text",
|
| 441 |
+
"text": "Table 1: Classifier performance for 3-way classification between summary types on the combined test set.",
|
| 442 |
+
"bbox": [
|
| 443 |
+
507,
|
| 444 |
+
329,
|
| 445 |
+
880,
|
| 446 |
+
357
|
| 447 |
+
],
|
| 448 |
+
"page_idx": 2
|
| 449 |
+
},
|
| 450 |
+
{
|
| 451 |
+
"type": "text",
|
| 452 |
+
"text": "fier (Cohan et al., 2019) trained on the PubMedRTC dataset (Dernoncourt and Lee, 2017) for background sentence classification (as described in Goldsack et al. (2022)). We compute the FKGL readability score using the textstat package.",
|
| 453 |
+
"bbox": [
|
| 454 |
+
507,
|
| 455 |
+
382,
|
| 456 |
+
882,
|
| 457 |
+
464
|
| 458 |
+
],
|
| 459 |
+
"page_idx": 2
|
| 460 |
+
},
|
| 461 |
+
{
|
| 462 |
+
"type": "text",
|
| 463 |
+
"text": "4 Experimental Results",
|
| 464 |
+
"text_level": 1,
|
| 465 |
+
"bbox": [
|
| 466 |
+
507,
|
| 467 |
+
476,
|
| 468 |
+
729,
|
| 469 |
+
492
|
| 470 |
+
],
|
| 471 |
+
"page_idx": 2
|
| 472 |
+
},
|
| 473 |
+
{
|
| 474 |
+
"type": "text",
|
| 475 |
+
"text": "Discriminatory ability of control attributes. To validate the ability of our controllable attributes to distinguish between different summary types, we plot the distribution of attribute values for each type in Figure 1. The figure suggests that, in combination, the attributes are able to capture characteristic differences between summary types, as instances in which two summary types share a similar distribution for one attribute can typically be separated by other attributes.<sup>3</sup>",
|
| 476 |
+
"bbox": [
|
| 477 |
+
505,
|
| 478 |
+
501,
|
| 479 |
+
882,
|
| 480 |
+
661
|
| 481 |
+
],
|
| 482 |
+
"page_idx": 2
|
| 483 |
+
},
|
| 484 |
+
{
|
| 485 |
+
"type": "text",
|
| 486 |
+
"text": "To further evidence this, we use the training set to train a simple logistic regression classifier, using only the attribute values of the reference summaries as features, to discriminate between reference summary types. The test set results in Table 1 show that all summary types are classified with an F1-score above 0.7, attesting to the discriminatory power of our control attributes.",
|
| 487 |
+
"bbox": [
|
| 488 |
+
507,
|
| 489 |
+
663,
|
| 490 |
+
882,
|
| 491 |
+
790
|
| 492 |
+
],
|
| 493 |
+
"page_idx": 2
|
| 494 |
+
},
|
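The extracted paragraph above reports training a simple logistic regression classifier that uses only the four attribute values of reference summaries as features to separate the three summary types. A hedged scikit-learn sketch of that check follows; the feature rows and labels are invented placeholders, not the paper's data.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# One row per reference summary: [length, FKGL, % background sentences, CWE].
# All numbers are placeholders used purely to make the snippet runnable.
X_train = [
    [1800, 15.0, 0.20, 8.1],  # Abstract
    [1750, 14.5, 0.40, 7.4],  # PLOS lay summary
    [2600, 10.5, 0.55, 6.8],  # eLife lay summary
    [1900, 15.2, 0.22, 8.0],
    [1700, 14.2, 0.38, 7.5],
    [2500, 10.9, 0.52, 6.9],
]
y_train = ["Abstract", "PLOS-Lay", "eLife-Lay"] * 2

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

X_test = [[1780, 14.8, 0.41, 7.3]]
print(clf.predict(X_test))
print(f1_score(["PLOS-Lay"], clf.predict(X_test), average="macro"))
```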
| 495 |
+
{
|
| 496 |
+
"type": "text",
|
| 497 |
+
"text": "Summarisation performance. Table 2 presents the performance of ATLAS and baseline models using automatic metrics on the test sets of PLOS",
|
| 498 |
+
"bbox": [
|
| 499 |
+
507,
|
| 500 |
+
800,
|
| 501 |
+
880,
|
| 502 |
+
848
|
| 503 |
+
],
|
| 504 |
+
"page_idx": 2
|
| 505 |
+
},
|
| 506 |
+
{
|
| 507 |
+
"type": "page_footnote",
|
| 508 |
+
"text": "$^{3}$ E.g., PLOS lay summaries and abstracts have similar readability distributions but differ in their comprehensibility, length, and entropy distributions. Similarly, PLOS and eLife lay summaries have similar comprehensibility distributions but differ in their readability and length.",
|
| 509 |
+
"bbox": [
|
| 510 |
+
507,
|
| 511 |
+
857,
|
| 512 |
+
880,
|
| 513 |
+
917
|
| 514 |
+
],
|
| 515 |
+
"page_idx": 2
|
| 516 |
+
},
|
| 517 |
+
{
|
| 518 |
+
"type": "page_footnote",
|
| 519 |
+
"text": "2The original code for Luo et al. (2022) is not yet available at the time of writing and their results are reported on a different dataset and thus are not comparable.",
|
| 520 |
+
"bbox": [
|
| 521 |
+
112,
|
| 522 |
+
879,
|
| 523 |
+
487,
|
| 524 |
+
917
|
| 525 |
+
],
|
| 526 |
+
"page_idx": 2
|
| 527 |
+
},
|
| 528 |
+
{
|
| 529 |
+
"type": "page_number",
|
| 530 |
+
"text": "339",
|
| 531 |
+
"bbox": [
|
| 532 |
+
485,
|
| 533 |
+
927,
|
| 534 |
+
515,
|
| 535 |
+
940
|
| 536 |
+
],
|
| 537 |
+
"page_idx": 2
|
| 538 |
+
},
|
| 539 |
+
{
|
| 540 |
+
"type": "table",
|
| 541 |
+
"img_path": "images/2c6493158523b955cc2e27df06dfac34d094e7903c433f4ee5655c1bcda213ad.jpg",
|
| 542 |
+
"table_caption": [],
|
| 543 |
+
"table_footnote": [],
|
| 544 |
+
"table_body": "<table><tr><td rowspan=\"2\" colspan=\"2\">Model</td><td colspan=\"6\">Abstract</td><td colspan=\"6\">Lay summary - PLOS</td><td colspan=\"6\">Lay summary - eLife</td></tr><tr><td>R-1</td><td>R-2</td><td>R-L</td><td>BS</td><td>DCRS</td><td>FKGL</td><td>R-1</td><td>R-2</td><td>R-L</td><td>BS</td><td>DCRS</td><td>FKGL</td><td>R-1</td><td>R-2</td><td>R-L</td><td>BS</td><td>DCRS</td><td>FKGL</td></tr><tr><td rowspan=\"3\">Hemicratic</td><td>Lead-3</td><td>23.86</td><td>5.66</td><td>21.48</td><td>81.17</td><td>12.66</td><td>14.82</td><td>27.41</td><td>6.87</td><td>24.61</td><td>83.36</td><td>12.66</td><td>15.08</td><td>19.41</td><td>4.06</td><td>18.02</td><td>81.65</td><td>12.65</td><td>13.30</td></tr><tr><td>Lead-K</td><td>35.69</td><td>9.07</td><td>32.70</td><td>82.86</td><td>11.69</td><td>14.49</td><td>38.28</td><td>9.45</td><td>34.8</td><td>83.72</td><td>11.88</td><td>14.95</td><td>37.27</td><td>7.53</td><td>35.18</td><td>82.05</td><td>10.58</td><td>11.89</td></tr><tr><td>Oracle</td><td>60.08</td><td>27.48</td><td>55.95</td><td>87.35</td><td>11.12</td><td>15.15</td><td>57.82</td><td>23.92</td><td>53.37</td><td>87.13</td><td>11.20</td><td>15.28</td><td>48.92</td><td>13.42</td><td>46.30</td><td>82.94</td><td>10.51</td><td>13.18</td></tr><tr><td rowspan=\"3\">Unsupp.</td><td>TextRank</td><td>40.26</td><td>11.53</td><td>36.02</td><td>83.83</td><td>11.78</td><td>20.08</td><td>37.55</td><td>8.50</td><td>33.28</td><td>83.43</td><td>11.87</td><td>20.27</td><td>33.88</td><td>5.79</td><td>31.55</td><td>81.16</td><td>11.30</td><td>18.98</td></tr><tr><td>LexRank</td><td>38.22</td><td>13.06</td><td>35.42</td><td>83.85</td><td>9.70</td><td>14.23</td><td>31.20</td><td>9.09</td><td>28.72</td><td>82.97</td><td>9.70</td><td>14.59</td><td>32.25</td><td>5.73</td><td>30.45</td><td>80.67</td><td>9.68</td><td>13.32</td></tr><tr><td>HipoRank</td><td>36.95</td><td>10.19</td><td>33.89</td><td>83.22</td><td>12.15</td><td>14.46</td><td>37.67</td><td>9.22</td><td>34.28</td><td>83.68</td><td>12.15</td><td>14.69</td><td>31.50</td><td>5.17</td><td>29.68</td><td>80.88</td><td>12.13</td><td>12.13</td></tr><tr><td 
rowspan=\"6\">Supervised</td><td>BART</td><td>43.34</td><td>13.14</td><td>39.80</td><td>85.48</td><td>11.33</td><td>14.40</td><td>43.52</td><td>12.09</td><td>39.67</td><td>85.70</td><td>11.29</td><td>14.54</td><td>31.17</td><td>6.74</td><td>29.20</td><td>83.55</td><td>11.15</td><td>13.87</td></tr><tr><td>BART_Scaffold</td><td>43.13</td><td>12.87</td><td>39.66</td><td>85.33</td><td>11.10</td><td>14.14</td><td>43.73</td><td>12.22</td><td>39.92</td><td>85.67</td><td>11.30</td><td>14.58</td><td>43.01</td><td>10.82</td><td>40.54</td><td>84.88</td><td>9.68</td><td>11.85</td></tr><tr><td>GPT3.5-zs</td><td>28.69</td><td>6.52</td><td>15.04</td><td>82.76</td><td>11.70</td><td>14.32</td><td>42.74</td><td>12.70</td><td>22.28</td><td>86.32</td><td>10.40</td><td>13.19</td><td>33.72</td><td>8.45</td><td>16.95</td><td>84.36</td><td>10.36</td><td>13.03</td></tr><tr><td>GPT3.5-mdc</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>44.41</td><td>14.16</td><td>41.12</td><td>86.55</td><td>10.36</td><td>13.32</td><td>37.97</td><td>9.39</td><td>35.57</td><td>84.22</td><td>10.78</td><td>13.70</td></tr><tr><td>ATLAS</td><td>45.87</td><td>14.08</td><td>42.32</td><td>85.54</td><td>10.96</td><td>14.21</td><td>44.44</td><td>12.33</td><td>40.60</td><td>85.70</td><td>11.22</td><td>14.58</td><td>46.80</td><td>12.57</td><td>44.14</td><td>85.20</td><td>8.95</td><td>10.87</td></tr><tr><td>ATLAS_Oracle</td><td>46.11</td><td>14.07</td><td>42.51</td><td>85.69</td><td>10.99</td><td>14.13</td><td>44.97</td><td>12.49</td><td>41.02</td><td>85.82</td><td>11.21</td><td>14.48</td><td>46.61</td><td>12.29</td><td>43.95</td><td>85.11</td><td>9.18</td><td>11.39</td></tr></table>",
|
| 545 |
+
"bbox": [
|
| 546 |
+
117,
|
| 547 |
+
80,
|
| 548 |
+
882,
|
| 549 |
+
212
|
| 550 |
+
],
|
| 551 |
+
"page_idx": 3
|
| 552 |
+
},
|
| 553 |
+
{
|
| 554 |
+
"type": "text",
|
| 555 |
+
"text": "and eLife. We include the results for ATLAS under two conditions: 1) one utilizing the average value for each attribute observed in the training data for each summary type (ATLAS); and 2) one using true attribute values obtained from gold standard summaries $(\\mathrm{ATLAS}_{\\mathrm{Oracle}})$ , where $\\mathrm{ATLAS}_{\\mathrm{Oracle}}$ is intended to provide an upper bound of the obtainable performance using our control attributes.",
|
| 556 |
+
"bbox": [
|
| 557 |
+
110,
|
| 558 |
+
278,
|
| 559 |
+
487,
|
| 560 |
+
407
|
| 561 |
+
],
|
| 562 |
+
"page_idx": 3
|
| 563 |
+
},
|
| 564 |
+
{
|
| 565 |
+
"type": "text",
|
| 566 |
+
"text": "For all metrics, it is evident from Table 2 that ATLAS exceeds the performance of all baseline approaches for both eLife lay summaries and abstracts, demonstrating a strong ability to control the technicality of generated text whilst producing high-quality summaries. Interestingly, although the GPT3.5-mdc baseline achieves a slightly stronger all-round performance for PLOS lay summaries, it fails to maintain this for the more \"lay\" summaries of eLife where ATLAS achieves significantly better performance, indicating that our control attributes can effectively capture these differences.",
|
| 567 |
+
"bbox": [
|
| 568 |
+
110,
|
| 569 |
+
407,
|
| 570 |
+
487,
|
| 571 |
+
600
|
| 572 |
+
],
|
| 573 |
+
"page_idx": 3
|
| 574 |
+
},
|
| 575 |
+
{
|
| 576 |
+
"type": "text",
|
| 577 |
+
"text": "In all cases, ATLAS also achieves scores that are comparable to (and sometimes exceeding) that of $\\mathrm{ATLAS}_{\\text{Oracle}}$ , suggesting that the use of the most frequently observed bin value for control attributes is effective for producing the appropriate characteristics for each summary type.",
|
| 578 |
+
"bbox": [
|
| 579 |
+
112,
|
| 580 |
+
601,
|
| 581 |
+
489,
|
| 582 |
+
697
|
| 583 |
+
],
|
| 584 |
+
"page_idx": 3
|
| 585 |
+
},
|
| 586 |
+
{
|
| 587 |
+
"type": "text",
|
| 588 |
+
"text": "Ablation study. To assess the contribution of each attribute to model performance, we conduct an ablation study, evaluating $\\mathrm{ATLAS}_{\\mathrm{Oracle}}$ under different configurations. Table 3 reports the results of this study for abstracts and lay summaries on the combined test sets of PLOS and eLife.",
|
| 589 |
+
"bbox": [
|
| 590 |
+
112,
|
| 591 |
+
709,
|
| 592 |
+
487,
|
| 593 |
+
804
|
| 594 |
+
],
|
| 595 |
+
"page_idx": 3
|
| 596 |
+
},
|
| 597 |
+
{
|
| 598 |
+
"type": "text",
|
| 599 |
+
"text": "The table shows that the removal of control attributes has a significant detrimental effect on performance. Additionally, when only a single attribute is included, the length-based control has",
|
| 600 |
+
"bbox": [
|
| 601 |
+
112,
|
| 602 |
+
806,
|
| 603 |
+
489,
|
| 604 |
+
870
|
| 605 |
+
],
|
| 606 |
+
"page_idx": 3
|
| 607 |
+
},
|
| 608 |
+
{
|
| 609 |
+
"type": "table",
|
| 610 |
+
"img_path": "images/04177d490802229a5a69c9f207055de32a802504266fa826f1079c2949da7998.jpg",
|
| 611 |
+
"table_caption": [
|
| 612 |
+
"Table 2: Summarization performance on the PLOS and eLife test sets (abstracts combined). $\\mathrm{R} =$ ROUGE F1 (↑), BS = BERTScore (↑), DCRS = Dale-Chall Readability Score (↓), FKGL = Flesh-Kincaid Grade Level (↓). For supervised models, we highlight the best score obtained for each metric in bold and underline second best."
|
| 613 |
+
],
|
| 614 |
+
"table_footnote": [],
|
| 615 |
+
"table_body": "<table><tr><td rowspan=\"2\">Model</td><td colspan=\"4\">Lay summary</td><td colspan=\"4\">Abstract</td></tr><tr><td>R-1</td><td>R-2</td><td>R-L</td><td>DCRS</td><td>R-1</td><td>R-2</td><td>R-L</td><td>DCRS</td></tr><tr><td>BART</td><td>41.68</td><td>11.29</td><td>38.12</td><td>11.27</td><td>43.34</td><td>13.14</td><td>39.80</td><td>11.33</td></tr><tr><td>+R</td><td>43.34</td><td>12.03</td><td>39.75</td><td>10.91</td><td>43.49</td><td>13.23</td><td>39.95</td><td>11.12</td></tr><tr><td>+BG</td><td>42.52</td><td>11.71</td><td>39.01</td><td>11.01</td><td>43.74</td><td>13.65</td><td>40.35</td><td>10.98</td></tr><tr><td>+CWE</td><td>41.58</td><td>11.21</td><td>38.04</td><td>11.28</td><td>44.23</td><td>13.48</td><td>40.56</td><td>11.35</td></tr><tr><td>+L</td><td>44.22</td><td>12.21</td><td>40.55</td><td>10.81</td><td>44.83</td><td>13.75</td><td>41.31</td><td>11.03</td></tr><tr><td>+L+BG</td><td>44.66</td><td>12.36</td><td>40.96</td><td>10.99</td><td>45.67</td><td>13.78</td><td>42.02</td><td>11.17</td></tr><tr><td>+L+R</td><td>44.52</td><td>12.10</td><td>40.73</td><td>10.92</td><td>45.54</td><td>13.64</td><td>41.78</td><td>11.21</td></tr><tr><td>+L+CWE</td><td>44.72</td><td>12.41</td><td>41.04</td><td>10.88</td><td>45.87</td><td>13.99</td><td>42.32</td><td>10.10</td></tr><tr><td>+L+R+BG</td><td>44.82</td><td>12.41</td><td>41.10</td><td>10.97</td><td>45.94</td><td>14.07</td><td>42.32</td><td>11.10</td></tr><tr><td>+L+R+CWE</td><td>44.83</td><td>12.39</td><td>41.05</td><td>10.90</td><td>45.60</td><td>13.63</td><td>41.84</td><td>11.21</td></tr><tr><td>+L+BG+CWE</td><td>45.01</td><td>12.56</td><td>41.38</td><td>10.88</td><td>46.04</td><td>14.16</td><td>42.44</td><td>11.06</td></tr><tr><td>ATLASOracle</td><td>45.22</td><td>12.47</td><td>41.45</td><td>10.91</td><td>46.11</td><td>14.07</td><td>42.51</td><td>10.99</td></tr></table>",
|
| 616 |
+
"bbox": [
|
| 617 |
+
512,
|
| 618 |
+
275,
|
| 619 |
+
880,
|
| 620 |
+
404
|
| 621 |
+
],
|
| 622 |
+
"page_idx": 3
|
| 623 |
+
},
|
| 624 |
+
{
|
| 625 |
+
"type": "text",
|
| 626 |
+
"text": "Table 3: Ablation study on the ROUGE-based performance of ATLAS under different configurations using true attribute values. \"+\" denotes aspect addition. L = Length, R = Readability, CWE = Content Word Entropy, BG = Background information.",
|
| 627 |
+
"bbox": [
|
| 628 |
+
507,
|
| 629 |
+
414,
|
| 630 |
+
882,
|
| 631 |
+
486
|
| 632 |
+
],
|
| 633 |
+
"page_idx": 3
|
| 634 |
+
},
|
| 635 |
+
{
|
| 636 |
+
"type": "text",
|
| 637 |
+
"text": "the highest ROUGE scores, particularly for lay summaries. This is to be expected, as lay summaries are known to differ significantly in length between PLOS (avg. 175.6 words) and eLife (avg. 347.6 words). When employing attributes in combination, we can see that the addition of content word entropy control and the subsequent addition of background information control have the greatest benefit to performance for ATLAS with 2 and 3 attributes, respectively. Interestingly, no attribute emerges clearly as the least effective as, although readability score control is the only one not included in the 3 attribute model, its inclusion in the single attribute model has clear benefits for lay summary performance. This provides further evidence that, in combination, our control attributes are able to capture the differences between summary types and effectuate them during generation.",
|
| 638 |
+
"bbox": [
|
| 639 |
+
507,
|
| 640 |
+
527,
|
| 641 |
+
882,
|
| 642 |
+
818
|
| 643 |
+
],
|
| 644 |
+
"page_idx": 3
|
| 645 |
+
},
|
| 646 |
+
{
|
| 647 |
+
"type": "text",
|
| 648 |
+
"text": "Human evaluation. To provide a comprehensive assessment of the summaries generated, we conducted a human evaluation involving our proposed model ATLAS and the strongest baseline model",
|
| 649 |
+
"bbox": [
|
| 650 |
+
507,
|
| 651 |
+
854,
|
| 652 |
+
882,
|
| 653 |
+
917
|
| 654 |
+
],
|
| 655 |
+
"page_idx": 3
|
| 656 |
+
},
|
| 657 |
+
{
|
| 658 |
+
"type": "page_footnote",
|
| 659 |
+
"text": "<sup>4</sup>We use ATLAS<sub>Oracle</sub> as the subject of this experiment rather than ATLAS to get a true reflection of each attribute's influence, rather than an approximation.",
|
| 660 |
+
"bbox": [
|
| 661 |
+
112,
|
| 662 |
+
879,
|
| 663 |
+
487,
|
| 664 |
+
917
|
| 665 |
+
],
|
| 666 |
+
"page_idx": 3
|
| 667 |
+
},
|
| 668 |
+
{
|
| 669 |
+
"type": "page_number",
|
| 670 |
+
"text": "340",
|
| 671 |
+
"bbox": [
|
| 672 |
+
485,
|
| 673 |
+
928,
|
| 674 |
+
515,
|
| 675 |
+
940
|
| 676 |
+
],
|
| 677 |
+
"page_idx": 3
|
| 678 |
+
},
|
| 679 |
+
{
|
| 680 |
+
"type": "table",
|
| 681 |
+
"img_path": "images/22f9f288bc202c4fafa5b5240d5eaacb9895143900f0e5454276783b81810bde.jpg",
|
| 682 |
+
"table_caption": [],
|
| 683 |
+
"table_footnote": [],
|
| 684 |
+
"table_body": "<table><tr><td rowspan=\"2\">Criteria</td><td colspan=\"2\">eLife</td><td colspan=\"2\">PLOS</td></tr><tr><td>BART</td><td>ATLAS</td><td>BART</td><td>ATLAS</td></tr><tr><td>Comprehensiveness</td><td>2.30</td><td>2.65</td><td>2.00</td><td>2.55</td></tr><tr><td>Layness</td><td>2.60</td><td>3.05</td><td>2.10</td><td>2.45</td></tr><tr><td>Factuality</td><td>2.20</td><td>2.85</td><td>2.05</td><td>2.40</td></tr></table>",
|
| 685 |
+
"bbox": [
|
| 686 |
+
136,
|
| 687 |
+
80,
|
| 688 |
+
463,
|
| 689 |
+
139
|
| 690 |
+
],
|
| 691 |
+
"page_idx": 4
|
| 692 |
+
},
|
| 693 |
+
{
|
| 694 |
+
"type": "text",
|
| 695 |
+
"text": "(BART) using two experts. $^{5}$ Specifically, adopting a similar setting to the original that of Goldsack et al. (2022), we take a random sample of 10 articles from the test split of each dataset. Alongside each model-generated lay summary, judges are presented with both the abstract and reference lay summary of the given article. We choose not to provide judges with the full article text in an effort to minimise the complexity of the evaluation and the cognitive burden placed upon them. Using 1-5 Likert scale, the judges are asked to rate the model output based on three criteria: (1) Comprehensiveness: to what extent does the model output contain the information that might be necessary for a non-expert to understand the high-level topic of the article and the significance of the research; (2) Layness: to what extent is the content of the model output comprehensible (or readable) to a non-expert, in terms of both structure and language; (3) Factuality: to what extent is the model generated lay summary factually consistent with the two other provided summaries (i.e. abstract and reference lay summary). $^{6}$",
|
| 696 |
+
"bbox": [
|
| 697 |
+
115,
|
| 698 |
+
221,
|
| 699 |
+
487,
|
| 700 |
+
590
|
| 701 |
+
],
|
| 702 |
+
"page_idx": 4
|
| 703 |
+
},
|
| 704 |
+
{
|
| 705 |
+
"type": "text",
|
| 706 |
+
"text": "Table 4 presents the average ratings from our manual evaluation. We calculate the Cohan Kappa scores to measure inter-rater reliability, where we obtain values of 0.50 and 0.57 for eLife and PLOS, attesting to the reliability of our evaluation. The overall results suggest that our proposed method performs better than the BART baseline in terms of all three criteria on both datasets, attesting to their quality. In terms of layness, the higher layness scores observed in the eLife dataset compared to the PLOS dataset align with the previous analysis for the two datasets from (Goldsack et al., 2022). Moreover, compared to baseline, it is worth noting that our model outputs are judged to produce much more factually correct outputs on both datasets, suggesting our method generates fewer hallucinations.",
|
| 707 |
+
"bbox": [
|
| 708 |
+
115,
|
| 709 |
+
594,
|
| 710 |
+
487,
|
| 711 |
+
850
|
| 712 |
+
],
|
| 713 |
+
"page_idx": 4
|
| 714 |
+
},
|
| 715 |
+
{
|
| 716 |
+
"type": "table",
|
| 717 |
+
"img_path": "images/6aa636ea1f8dda3ea53bd29137e555b15026e3898349e7acbb220e2468b3ef8e.jpg",
|
| 718 |
+
"table_caption": [
|
| 719 |
+
"Table 4: Human evaluation on eLife and PLOS. Mean evaluator ratings (1-5) obtained by BART and ATLAS outputs for each metric."
|
| 720 |
+
],
|
| 721 |
+
"table_footnote": [],
|
| 722 |
+
"table_body": "<table><tr><td colspan=\"2\">Model</td><td>FKGL</td><td>CLI</td><td>DCRS</td></tr><tr><td rowspan=\"2\">PLOS</td><td>ATLAS专业技术</td><td>15.11</td><td>14.21</td><td>11.64</td></tr><tr><td>ATLASlay</td><td>13.22</td><td>13.97</td><td>11.22</td></tr><tr><td rowspan=\"2\">eLife</td><td>ATLAS专业技术</td><td>14.77</td><td>14.02</td><td>11.32</td></tr><tr><td>ATLASlay</td><td>10.89</td><td>11.45</td><td>9.17</td></tr></table>",
|
| 723 |
+
"bbox": [
|
| 724 |
+
532,
|
| 725 |
+
80,
|
| 726 |
+
857,
|
| 727 |
+
156
|
| 728 |
+
],
|
| 729 |
+
"page_idx": 4
|
| 730 |
+
},
|
| 731 |
+
{
|
| 732 |
+
"type": "text",
|
| 733 |
+
"text": "Table 5: Readability metrics for two versions of ATLAS with highly lay and technical attribute values.",
|
| 734 |
+
"bbox": [
|
| 735 |
+
507,
|
| 736 |
+
167,
|
| 737 |
+
880,
|
| 738 |
+
195
|
| 739 |
+
],
|
| 740 |
+
"page_idx": 4
|
| 741 |
+
},
|
| 742 |
+
{
|
| 743 |
+
"type": "text",
|
| 744 |
+
"text": "Controllability analysis. To assess the extent to which our control attributes enable controllability over the overall layness of the text, we conduct a further analysis using two additional versions of ATLAS with highly lay or technical values. Specifically, we create $\\mathrm{ATLAS}_{lay}$ and $\\mathrm{ATLAS}_{technical}$ by selecting the lowest and highest attribute bins, respectively, for which there are at least 100 observations in the training data (for all attributes other than length which is kept constant).",
|
| 745 |
+
"bbox": [
|
| 746 |
+
507,
|
| 747 |
+
228,
|
| 748 |
+
882,
|
| 749 |
+
388
|
| 750 |
+
],
|
| 751 |
+
"page_idx": 4
|
| 752 |
+
},
|
| 753 |
+
{
|
| 754 |
+
"type": "text",
|
| 755 |
+
"text": "We examine how these extreme attributes manifest themselves in generated summaries by calculating the average readability values obtained by the generated summaries for both datasets. We present the results of the analysis in Table 5, which show a significant divergence in the readability values obtained by each model on both datasets. Interestingly, this divergence is substantially wider for summaries generated on eLife, the dataset which is identified by Goldsack et al. (2022) as containing lay summaries that are more \"lay\" than those of PLOS, suggesting that exposure to more extreme values whilst training on this dataset may enable even greater controllability at inference time.7",
|
| 756 |
+
"bbox": [
|
| 757 |
+
507,
|
| 758 |
+
393,
|
| 759 |
+
882,
|
| 760 |
+
618
|
| 761 |
+
],
|
| 762 |
+
"page_idx": 4
|
| 763 |
+
},
|
| 764 |
+
{
|
| 765 |
+
"type": "text",
|
| 766 |
+
"text": "5 Conclusion",
|
| 767 |
+
"text_level": 1,
|
| 768 |
+
"bbox": [
|
| 769 |
+
507,
|
| 770 |
+
642,
|
| 771 |
+
640,
|
| 772 |
+
657
|
| 773 |
+
],
|
| 774 |
+
"page_idx": 4
|
| 775 |
+
},
|
| 776 |
+
{
|
| 777 |
+
"type": "text",
|
| 778 |
+
"text": "In this paper, we introduce ATLAS, a model for controllable lay summarisation that employs controllable attribute tokens to influence various properties of the generated summary, enabling it to cater to users of different levels of expertise. Using combined datasets for biomedical lay summarisation we perform multiple experiments whereby we confirm the ability of our selected control attributes to discriminate between summary types, demonstrate their effectiveness for controllable lay summarisation, and further investigate their ability to effectuate desired differences during generation.",
|
| 779 |
+
"bbox": [
|
| 780 |
+
507,
|
| 781 |
+
676,
|
| 782 |
+
884,
|
| 783 |
+
869
|
| 784 |
+
],
|
| 785 |
+
"page_idx": 4
|
| 786 |
+
},
|
| 787 |
+
{
|
| 788 |
+
"type": "page_footnote",
|
| 789 |
+
"text": "Both judges have experience in scientific research and hold at least a bachelor's degree.",
|
| 790 |
+
"bbox": [
|
| 791 |
+
112,
|
| 792 |
+
866,
|
| 793 |
+
485,
|
| 794 |
+
891
|
| 795 |
+
],
|
| 796 |
+
"page_idx": 4
|
| 797 |
+
},
|
| 798 |
+
{
|
| 799 |
+
"type": "page_footnote",
|
| 800 |
+
"text": "For example, for the \"Layness\" criteria, a score of 5 is equal to \"highly lay\" and a score of 1, \"highly technical\".",
|
| 801 |
+
"bbox": [
|
| 802 |
+
112,
|
| 803 |
+
891,
|
| 804 |
+
484,
|
| 805 |
+
917
|
| 806 |
+
],
|
| 807 |
+
"page_idx": 4
|
| 808 |
+
},
|
| 809 |
+
{
|
| 810 |
+
"type": "page_footnote",
|
| 811 |
+
"text": "Examples of summaries generated by these models are included in the Appendices.",
|
| 812 |
+
"bbox": [
|
| 813 |
+
507,
|
| 814 |
+
891,
|
| 815 |
+
880,
|
| 816 |
+
917
|
| 817 |
+
],
|
| 818 |
+
"page_idx": 4
|
| 819 |
+
},
|
| 820 |
+
{
|
| 821 |
+
"type": "page_number",
|
| 822 |
+
"text": "341",
|
| 823 |
+
"bbox": [
|
| 824 |
+
485,
|
| 825 |
+
928,
|
| 826 |
+
512,
|
| 827 |
+
939
|
| 828 |
+
],
|
| 829 |
+
"page_idx": 4
|
| 830 |
+
},
|
| 831 |
+
{
|
| 832 |
+
"type": "text",
|
| 833 |
+
"text": "Limitations",
|
| 834 |
+
"text_level": 1,
|
| 835 |
+
"bbox": [
|
| 836 |
+
114,
|
| 837 |
+
84,
|
| 838 |
+
220,
|
| 839 |
+
99
|
| 840 |
+
],
|
| 841 |
+
"page_idx": 5
|
| 842 |
+
},
|
| 843 |
+
{
|
| 844 |
+
"type": "text",
|
| 845 |
+
"text": "Although our results demonstrate that our selected control attributes are able to effectively capture the characteristics between summary types, it is highly likely that there are additional attributes that we have not explored that could benefit performance for controllable lay summarisation. We plan to explore this in future work, in addition to experimenting with more complex methods for enabling controllability.",
|
| 846 |
+
"bbox": [
|
| 847 |
+
112,
|
| 848 |
+
109,
|
| 849 |
+
489,
|
| 850 |
+
254
|
| 851 |
+
],
|
| 852 |
+
"page_idx": 5
|
| 853 |
+
},
|
| 854 |
+
{
|
| 855 |
+
"type": "text",
|
| 856 |
+
"text": "References",
|
| 857 |
+
"text_level": 1,
|
| 858 |
+
"bbox": [
|
| 859 |
+
114,
|
| 860 |
+
280,
|
| 861 |
+
213,
|
| 862 |
+
296
|
| 863 |
+
],
|
| 864 |
+
"page_idx": 5
|
| 865 |
+
},
|
| 866 |
+
{
|
| 867 |
+
"type": "list",
|
| 868 |
+
"sub_type": "ref_text",
|
| 869 |
+
"list_items": [
|
| 870 |
+
"Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615-3620, Hong Kong, China. Association for Computational Linguistics.",
|
| 871 |
+
"Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, and Dan Weld. 2019. Pretrained language models for sequential sentence classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3693-3699, Hong Kong, China. Association for Computational Linguistics.",
|
| 872 |
+
"Nachshon Cohen, Oren Kalinsky, Yftah Ziser, and Alessandro Moschitti. 2021. Wikisum: Coherent summarization dataset for efficient human-evaluation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 2: Short Papers), Virtual Event, August 1-6, 2021, pages 212-219. Association for Computational Linguistics.",
|
| 873 |
+
"Franck Dernoncourt and Ji Young Lee. 2017. PubMed 200k RCT: a dataset for sequential sentence classification in medical abstracts. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 308-313, Taipei, Taiwan. Asian Federation of Natural Language Processing.",
|
| 874 |
+
"Yue Dong, Andrei Mircea, and Jackie Chi Kit Cheung. 2021. Discourse-Aware unsupervised summarization for long scientific documents. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1089-1102, Online. Association for Computational Linguistics.",
|
| 875 |
+
"G. Erkan and D. R. Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457-479."
|
| 876 |
+
],
|
| 877 |
+
"bbox": [
|
| 878 |
+
115,
|
| 879 |
+
303,
|
| 880 |
+
490,
|
| 881 |
+
917
|
| 882 |
+
],
|
| 883 |
+
"page_idx": 5
|
| 884 |
+
},
|
| 885 |
+
{
|
| 886 |
+
"type": "list",
|
| 887 |
+
"sub_type": "ref_text",
|
| 888 |
+
"list_items": [
|
| 889 |
+
"Tomas Goldsack, Zheheng Luo, Qianqian Xie, Carolina Scarton, Matthew Shardlow, Sophia Ananiadou, and Chenghua Lin. 2023a. Overview of the biolaysumm 2023 shared task on lay summarization of biomedical research articles. In The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, pages 468-477, Toronto, Canada. Association for Computational Linguistics.",
|
| 890 |
+
"Tomas Goldsack, Zhihao Zhang, Chenghua Lin, and Carolina Scarton. 2022. Making science simple: Corpora for the lay summarisation of scientific literature. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10589-10604, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.",
|
| 891 |
+
"Tomas Goldsack, Zhihao Zhang, Chen Tang, Carolina Scarton, and Chenghua Lin. 2023b. Enhancing biomedical lay summarisation with external knowledge graphs. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 8016-8032, Singapore. Association for Computational Linguistics.",
|
| 892 |
+
"Yue Guo, Wei Qiu, Yizhong Wang, and Trevor Cohen. 2021. Automated Lay Language Summarization of Biomedical Scientific Reviews. Proceedings of the AAAI Conference on Artificial Intelligence, 35(1):160-168.",
|
| 893 |
+
"Stuart R F King, Emma Pewsey, and Sarah Shailes. 2017. Plain-language Summaries of Research: An inside guide to eLife digests. eLife, 6:e25410.",
|
| 894 |
+
"Xiangzhe Kong, Jialiang Huang, Ziquan Tung, Jian Guan, and Minlie Huang. 2021. Stylized story generation with style-guided planning. In *Findings of the Association for Computational Linguistics: ACLIJCNLP* 2021, pages 2430-2436, Online. Association for Computational Linguistics.",
|
| 895 |
+
"Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871-7880. Association for Computational Linguistics.",
|
| 896 |
+
"Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
|
| 897 |
+
"Zeheng Luo, Qianqian Xie, and Sophia Ananiadou. 2022. Readability controllable biomedical document summarization. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4667-4680, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics."
|
| 898 |
+
],
|
| 899 |
+
"bbox": [
|
| 900 |
+
510,
|
| 901 |
+
85,
|
| 902 |
+
884,
|
| 903 |
+
917
|
| 904 |
+
],
|
| 905 |
+
"page_idx": 5
|
| 906 |
+
},
|
| 907 |
+
{
|
| 908 |
+
"type": "page_number",
|
| 909 |
+
"text": "342",
|
| 910 |
+
"bbox": [
|
| 911 |
+
485,
|
| 912 |
+
928,
|
| 913 |
+
515,
|
| 914 |
+
940
|
| 915 |
+
],
|
| 916 |
+
"page_idx": 5
|
| 917 |
+
},
|
| 918 |
+
{
|
| 919 |
+
"type": "ref_text",
|
| 920 |
+
"text": "Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60, Baltimore, Maryland. Association for Computational Linguistics.",
|
| 921 |
+
"bbox": [
|
| 922 |
+
115,
|
| 923 |
+
85,
|
| 924 |
+
489,
|
| 925 |
+
191
|
| 926 |
+
],
|
| 927 |
+
"page_idx": 6
|
| 928 |
+
},
|
| 929 |
+
{
|
| 930 |
+
"type": "ref_text",
|
| 931 |
+
"text": "Louis Martin, Éric de la Clergerie, Benoit Sagot, and Antoine Bordes. 2020. Controllable sentence simplification. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4689-4698, Marseille, France. European Language Resources Association.",
|
| 932 |
+
"bbox": [
|
| 933 |
+
115,
|
| 934 |
+
200,
|
| 935 |
+
489,
|
| 936 |
+
280
|
| 937 |
+
],
|
| 938 |
+
"page_idx": 6
|
| 939 |
+
},
|
| 940 |
+
{
|
| 941 |
+
"type": "ref_text",
|
| 942 |
+
"text": "Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404-411, Barcelona, Spain. Association for Computational Linguistics.",
|
| 943 |
+
"bbox": [
|
| 944 |
+
115,
|
| 945 |
+
291,
|
| 946 |
+
489,
|
| 947 |
+
357
|
| 948 |
+
],
|
| 949 |
+
"page_idx": 6
|
| 950 |
+
},
|
| 951 |
+
{
|
| 952 |
+
"type": "ref_text",
|
| 953 |
+
"text": "Kim Cheng Sheang and Horacio Saggion. 2021. Controllable sentence simplification with a unified text-to-text transfer transformer. In Proceedings of the 14th International Conference on Natural Language Generation, pages 341-352, Aberdeen, Scotland, UK. Association for Computational Linguistics.",
|
| 954 |
+
"bbox": [
|
| 955 |
+
115,
|
| 956 |
+
367,
|
| 957 |
+
489,
|
| 958 |
+
445
|
| 959 |
+
],
|
| 960 |
+
"page_idx": 6
|
| 961 |
+
},
|
| 962 |
+
{
|
| 963 |
+
"type": "ref_text",
|
| 964 |
+
"text": "Chen Tang, Tyler Loakman, and Chenghua Lin. 2024. A cross-attention augmented model for event-triggered context-aware story generation. Computer Speech & Language, page 101662.",
|
| 965 |
+
"bbox": [
|
| 966 |
+
115,
|
| 967 |
+
456,
|
| 968 |
+
487,
|
| 969 |
+
510
|
| 970 |
+
],
|
| 971 |
+
"page_idx": 6
|
| 972 |
+
},
|
| 973 |
+
{
|
| 974 |
+
"type": "ref_text",
|
| 975 |
+
"text": "Oisin Turbitt, Robert Bevan, and Mouhamad Aboshokor. 2023. MDC at BioLaySumm task 1: Evaluating GPT models for biomedical lay summarization. In The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, pages 611-619, Toronto, Canada. Association for Computational Linguistics.",
|
| 976 |
+
"bbox": [
|
| 977 |
+
115,
|
| 978 |
+
519,
|
| 979 |
+
489,
|
| 980 |
+
612
|
| 981 |
+
],
|
| 982 |
+
"page_idx": 6
|
| 983 |
+
},
|
| 984 |
+
{
|
| 985 |
+
"type": "ref_text",
|
| 986 |
+
"text": "Liqiang Xiao, Lu Wang, Hao He, and Yaohui Jin. 2020. Modeling content importance for summarization with pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3606-3611, Online. Association for Computational Linguistics.",
|
| 987 |
+
"bbox": [
|
| 988 |
+
115,
|
| 989 |
+
621,
|
| 990 |
+
489,
|
| 991 |
+
701
|
| 992 |
+
],
|
| 993 |
+
"page_idx": 6
|
| 994 |
+
},
|
| 995 |
+
{
|
| 996 |
+
"type": "ref_text",
|
| 997 |
+
"text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT.",
|
| 998 |
+
"bbox": [
|
| 999 |
+
115,
|
| 1000 |
+
711,
|
| 1001 |
+
489,
|
| 1002 |
+
751
|
| 1003 |
+
],
|
| 1004 |
+
"page_idx": 6
|
| 1005 |
+
},
|
| 1006 |
+
{
|
| 1007 |
+
"type": "text",
|
| 1008 |
+
"text": "A Appendix",
|
| 1009 |
+
"text_level": 1,
|
| 1010 |
+
"bbox": [
|
| 1011 |
+
115,
|
| 1012 |
+
764,
|
| 1013 |
+
236,
|
| 1014 |
+
781
|
| 1015 |
+
],
|
| 1016 |
+
"page_idx": 6
|
| 1017 |
+
},
|
| 1018 |
+
{
|
| 1019 |
+
"type": "text",
|
| 1020 |
+
"text": "ChatGPT Baseline Prompts The prompts provided to ChatGPT for each summary type are given in Table 6. To ensure a fair comparison, we control the length of the GPT baselines using the generation arguments, (e.g., max_new_tokens). Note that we differentiate between the lay summary types (namely, PLOS and eLife) based on distinctions made by Goldsack et al. (2022), who recognise",
|
| 1021 |
+
"bbox": [
|
| 1022 |
+
115,
|
| 1023 |
+
790,
|
| 1024 |
+
489,
|
| 1025 |
+
917
|
| 1026 |
+
],
|
| 1027 |
+
"page_idx": 6
|
| 1028 |
+
},
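The prompt-based GPT baseline described in the appendix entry above (per-type prompts from Table 6, output length capped through a generation argument) can be sketched as follows. This is not the authors' code: the `call_llm` backend, the prompt framing, and the 512-token cap are illustrative assumptions; only the three per-type instructions come from Table 6.

```python
# Minimal sketch of a zero-shot GPT-style baseline driven by the Table 6 prompts.
# `call_llm` stands in for whatever chat-completion client is actually used.
from typing import Callable

PROMPTS = {
    "abstract": ("Summarize the following article for an expert audience "
                 "that is familiar with the technical aspects of the content"),
    "plos_lay": ("Summarize the following article for a non-expert audience "
                 "that has some familiarity with the technical aspects of the content"),
    "elife_lay": ("Summarize the following article for a non-expert audience "
                  "that has no familiarity with the technical aspects of the content"),
}

def zero_shot_summary(article: str, summary_type: str,
                      call_llm: Callable[[str, int], str],
                      max_new_tokens: int = 512) -> str:
    """Build the Table 6 prompt and delegate generation to `call_llm`,
    which accepts (prompt, max_new_tokens) and returns the model output."""
    prompt = f"{PROMPTS[summary_type]}:\n\n{article}"
    return call_llm(prompt, max_new_tokens)

if __name__ == "__main__":
    echo = lambda prompt, n: prompt[:200]  # dummy backend, just to show the call shape
    print(zero_shot_summary("Some article text ...", "elife_lay", echo))
```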
|
| 1029 |
+
{
|
| 1030 |
+
"type": "text",
|
| 1031 |
+
"text": "PLOS' summaries as the less \"lay\" of the two, making them better suited to an audience with some technical knowledge.",
|
| 1032 |
+
"bbox": [
|
| 1033 |
+
510,
|
| 1034 |
+
84,
|
| 1035 |
+
884,
|
| 1036 |
+
131
|
| 1037 |
+
],
|
| 1038 |
+
"page_idx": 6
|
| 1039 |
+
},
|
| 1040 |
+
{
|
| 1041 |
+
"type": "page_number",
|
| 1042 |
+
"text": "343",
|
| 1043 |
+
"bbox": [
|
| 1044 |
+
485,
|
| 1045 |
+
928,
|
| 1046 |
+
515,
|
| 1047 |
+
940
|
| 1048 |
+
],
|
| 1049 |
+
"page_idx": 6
|
| 1050 |
+
},
|
| 1051 |
+
{
|
| 1052 |
+
"type": "table",
|
| 1053 |
+
"img_path": "images/f90f868c0c9650b4f243bacd799fb43df61f3b7e18c1842db34721c8ff9506a0.jpg",
|
| 1054 |
+
"table_caption": [],
|
| 1055 |
+
"table_footnote": [],
|
| 1056 |
+
"table_body": "<table><tr><td>Summary Type</td><td>Prompt</td></tr><tr><td>Abstract</td><td>Summarize the following article for an expert audience that is familiar with the technical aspects of the content</td></tr><tr><td>PLOS lay summary</td><td>Summarize the following article for a non-expert audience that has some familiarity with the technical aspects of the content</td></tr><tr><td>eLife lay summary</td><td>Summarize the following article for a non-expert audience that has no familiarity with the technical aspects of the content</td></tr></table>",
|
| 1057 |
+
"bbox": [
|
| 1058 |
+
132,
|
| 1059 |
+
134,
|
| 1060 |
+
863,
|
| 1061 |
+
252
|
| 1062 |
+
],
|
| 1063 |
+
"page_idx": 7
|
| 1064 |
+
},
|
| 1065 |
+
{
|
| 1066 |
+
"type": "text",
|
| 1067 |
+
"text": "Table 6: Prompts used for the GPT3.5-zs baseline for each summary type.",
|
| 1068 |
+
"bbox": [
|
| 1069 |
+
245,
|
| 1070 |
+
261,
|
| 1071 |
+
746,
|
| 1072 |
+
275
|
| 1073 |
+
],
|
| 1074 |
+
"page_idx": 7
|
| 1075 |
+
},
|
| 1076 |
+
{
|
| 1077 |
+
"type": "text",
|
| 1078 |
+
"text": "eLife",
|
| 1079 |
+
"text_level": 1,
|
| 1080 |
+
"bbox": [
|
| 1081 |
+
480,
|
| 1082 |
+
399,
|
| 1083 |
+
527,
|
| 1084 |
+
413
|
| 1085 |
+
],
|
| 1086 |
+
"page_idx": 7
|
| 1087 |
+
},
|
| 1088 |
+
{
|
| 1089 |
+
"type": "text",
|
| 1090 |
+
"text": "ATLAS<sup>technical</sup>",
|
| 1091 |
+
"text_level": 1,
|
| 1092 |
+
"bbox": [
|
| 1093 |
+
121,
|
| 1094 |
+
420,
|
| 1095 |
+
233,
|
| 1096 |
+
434
|
| 1097 |
+
],
|
| 1098 |
+
"page_idx": 7
|
| 1099 |
+
},
|
| 1100 |
+
{
|
| 1101 |
+
"type": "text",
|
| 1102 |
+
"text": "The effects of muscle fatigue on motor learning under fatigue are poorly understood. Here, we investigated the effect of fatigue on learning under a sequential pinch force task. Irrespective of whether the observed fatigue effects are domain-specific or present in another task that is cognitive demanding but requires minimal force control, we found that participants had impaired skill learning in both the fatigued and unfatigued effector. We replicated the findings of experiment 1 and found that disruption of rTMS to the motor cortex (Cantarero et al., 2013a) alleviated the adverse effects of fatigue. Cortical excitability was similar to that observed in the untrained effector, but not in the fatigued. Altogether, our findings suggest that motor fatigue has a domain-dependent lasting effect on skill learning. Future studies should focus on understanding the role of motor cortex excitability in the acquisition of motor skills under fatigue, as well as the potential role for maladaptive memory formation under fatigued conditions. Cortical and motor cortices should be included in training and rehabilitation regimens geared to improve motor skill acquisition.",
|
| 1103 |
+
"bbox": [
|
| 1104 |
+
119,
|
| 1105 |
+
434,
|
| 1106 |
+
890,
|
| 1107 |
+
590
|
| 1108 |
+
],
|
| 1109 |
+
"page_idx": 7
|
| 1110 |
+
},
|
| 1111 |
+
{
|
| 1112 |
+
"type": "text",
|
| 1113 |
+
"text": "ATLASlay",
|
| 1114 |
+
"text_level": 1,
|
| 1115 |
+
"bbox": [
|
| 1116 |
+
121,
|
| 1117 |
+
607,
|
| 1118 |
+
196,
|
| 1119 |
+
621
|
| 1120 |
+
],
|
| 1121 |
+
"page_idx": 7
|
| 1122 |
+
},
|
| 1123 |
+
{
|
| 1124 |
+
"type": "text",
|
| 1125 |
+
"text": "Muscle fatigue is a neuromuscular phenomenon that can impair performance over time. People who experience fatigue tend to be less able to learn a new motor skill than people who experience no fatigue. However, it is not clear how fatigue affects the ability of people to learn new motor skills. One way to study the effects of fatigue is to study how people learn a motor skill under fatigue conditions. One of the main challenges in studying motor learning under fatigue is the so-termed \"performance-learning\" distinction In this study, participants were asked to practice a motor task over two days and then had to catch up to the skill performance level of the non-fatigued group. Unexpectedly, participants who were only fatigued at the end of the training were less likely to learn the motor skill. This suggests that fatigue has a domain-specific lasting effect on the learning of a skill. ernas et al. now show that people who are unable to recover the motor task under fatigue are more likely to be unable to learn their motor skill when they are not fatigued. The experiments show that when people are trained to perform the task, their ability to recover from fatigue is severely impaired. This effect is due to a change in the strength of the motor cortex, a region of the brain that is involved in learning and memory.",
|
| 1126 |
+
"bbox": [
|
| 1127 |
+
119,
|
| 1128 |
+
623,
|
| 1129 |
+
890,
|
| 1130 |
+
815
|
| 1131 |
+
],
|
| 1132 |
+
"page_idx": 7
|
| 1133 |
+
},
|
| 1134 |
+
{
|
| 1135 |
+
"type": "text",
|
| 1136 |
+
"text": "Figure 2: An case study from the eLife test set comparing summaries generated under highly lay and technical attribute values (with the length attribute being kept constant).",
|
| 1137 |
+
"bbox": [
|
| 1138 |
+
112,
|
| 1139 |
+
832,
|
| 1140 |
+
884,
|
| 1141 |
+
862
|
| 1142 |
+
],
|
| 1143 |
+
"page_idx": 7
|
| 1144 |
+
},
|
| 1145 |
+
{
|
| 1146 |
+
"type": "page_number",
|
| 1147 |
+
"text": "344",
|
| 1148 |
+
"bbox": [
|
| 1149 |
+
485,
|
| 1150 |
+
928,
|
| 1151 |
+
515,
|
| 1152 |
+
940
|
| 1153 |
+
],
|
| 1154 |
+
"page_idx": 7
|
| 1155 |
+
},
|
| 1156 |
+
{
|
| 1157 |
+
"type": "text",
|
| 1158 |
+
"text": "PLOS",
|
| 1159 |
+
"text_level": 1,
|
| 1160 |
+
"bbox": [
|
| 1161 |
+
477,
|
| 1162 |
+
286,
|
| 1163 |
+
529,
|
| 1164 |
+
300
|
| 1165 |
+
],
|
| 1166 |
+
"page_idx": 8
|
| 1167 |
+
},
|
| 1168 |
+
{
|
| 1169 |
+
"type": "text",
|
| 1170 |
+
"text": "ATLAS<sup>technical</sup>",
|
| 1171 |
+
"text_level": 1,
|
| 1172 |
+
"bbox": [
|
| 1173 |
+
121,
|
| 1174 |
+
307,
|
| 1175 |
+
233,
|
| 1176 |
+
322
|
| 1177 |
+
],
|
| 1178 |
+
"page_idx": 8
|
| 1179 |
+
},
|
| 1180 |
+
{
|
| 1181 |
+
"type": "text",
|
| 1182 |
+
"text": "In this paper, we explore the conditions under which associations between antigenic, metabolic and virulence properties of strains within pneumococcal populations and predict how these may shift under vaccination. In this work, we use a conceptual framework to investigate the dynamics of associations between serotype, serotype and serotype-specific immunity in pneumococcus populations. We find that antigenic type (AT) is the principal determinant of non-capsular virulence factors (VF), whereas MT is the major determinant. AT and MT are highly non-random; MT and AT are co-evolved and co-expressed. ET and CT are also found to be highly correlated, suggesting that they have synergistically adapted to a particular metabolic niche. IT and LD are found to have similar patterns of linkage disequilibrium (LD) than randomly selected genes not associated with metabolic/transport processes; AT is associated with a higher frequency of LD LD than MT LD; CT LD=0.013). CT is the first mathematical model to explain the non-overlapping association between serotypic and serotypes. TCT BC LD is a useful tool for predicting the potential impact of vaccination on the prevalence of serotypes associated with non-vaccine serotypes and for predicting how they may change under vaccination and vaccine serotype replacement.",
|
| 1183 |
+
"bbox": [
|
| 1184 |
+
119,
|
| 1185 |
+
323,
|
| 1186 |
+
890,
|
| 1187 |
+
493
|
| 1188 |
+
],
|
| 1189 |
+
"page_idx": 8
|
| 1190 |
+
},
|
| 1191 |
+
{
|
| 1192 |
+
"type": "text",
|
| 1193 |
+
"text": "ATLASlay",
|
| 1194 |
+
"text_level": 1,
|
| 1195 |
+
"bbox": [
|
| 1196 |
+
121,
|
| 1197 |
+
495,
|
| 1198 |
+
196,
|
| 1199 |
+
508
|
| 1200 |
+
],
|
| 1201 |
+
"page_idx": 8
|
| 1202 |
+
},
|
| 1203 |
+
{
|
| 1204 |
+
"type": "text",
|
| 1205 |
+
"text": "Pneumococcal populations are highly diverse in non-antigenic genes and are commonly classified into sequence types (ST) by Multi Locus Sequence Typing (MLST) of seven metabolic housekeeping genes. STs have been documented to occur regularly throughout the past 7 decades, yet many studies (eg) show an intriguing pattern of largely non-overlapping associations between serotype and ST. It has been noted that many STs that were previously associated with vaccine serotypes now occur in association with non-vaccine serotypes. It has been proposed that a combination of immune-mediated interference between identical antigenic types and direct competition between identical metabolic types can generate non-overlapping association between antigenic and STs in populations of the bacterial pathogen Neisseria meningitidis. In this paper, we explore whether pneumococcal population structure, can be explained within a similar conceptual framework. in which pathogen strains are profiled by antigenic type, AT, metabolic type (MT) and additional non-capsular virulence factors (VF).",
|
| 1206 |
+
"bbox": [
|
| 1207 |
+
119,
|
| 1208 |
+
511,
|
| 1209 |
+
890,
|
| 1210 |
+
671
|
| 1211 |
+
],
|
| 1212 |
+
"page_idx": 8
|
| 1213 |
+
},
|
| 1214 |
+
{
|
| 1215 |
+
"type": "text",
|
| 1216 |
+
"text": "Figure 3: An case study from the eLife test set comparing summaries generated under highly lay and technical attribute values (with the length attribute being kept constant).",
|
| 1217 |
+
"bbox": [
|
| 1218 |
+
112,
|
| 1219 |
+
688,
|
| 1220 |
+
884,
|
| 1221 |
+
718
|
| 1222 |
+
],
|
| 1223 |
+
"page_idx": 8
|
| 1224 |
+
},
|
| 1225 |
+
{
|
| 1226 |
+
"type": "page_number",
|
| 1227 |
+
"text": "345",
|
| 1228 |
+
"bbox": [
|
| 1229 |
+
485,
|
| 1230 |
+
928,
|
| 1231 |
+
515,
|
| 1232 |
+
940
|
| 1233 |
+
],
|
| 1234 |
+
"page_idx": 8
|
| 1235 |
+
}
|
| 1236 |
+
]
|
2024/ATLAS_ Improving Lay Summarisation with Attribute-based Control/dabae923-8653-40d5-a808-ef509db0f2cc_model.json
ADDED
|
@@ -0,0 +1,1395 @@
|
| 1 |
+
[
|
| 2 |
+
[
|
| 3 |
+
{
|
| 4 |
+
"type": "title",
|
| 5 |
+
"bbox": [
|
| 6 |
+
0.137,
|
| 7 |
+
0.089,
|
| 8 |
+
0.861,
|
| 9 |
+
0.11
|
| 10 |
+
],
|
| 11 |
+
"angle": 0,
|
| 12 |
+
"content": "ATLAS: Improving Lay Summarisation with Attribute-based Control"
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"bbox": [
|
| 17 |
+
0.164,
|
| 18 |
+
0.124,
|
| 19 |
+
0.84,
|
| 20 |
+
0.193
|
| 21 |
+
],
|
| 22 |
+
"angle": 0,
|
| 23 |
+
"content": "Zhihao Zhang\\(^{1}\\), Tomas Goldsack\\(^{2}\\), Carolina Scarton\\(^{2}\\), Chenghua Lin\\(^{3*}\\) \n\\(^{1}\\)College of Economics and Management, Beijing University of Technology, China, \\(^{2}\\)Department of Computer Science, University of Sheffield, UK \n\\(^{3}\\)Department of Computer Science, The University of Manchester, UK"
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"bbox": [
|
| 28 |
+
0.224,
|
| 29 |
+
0.196,
|
| 30 |
+
0.778,
|
| 31 |
+
0.227
|
| 32 |
+
],
|
| 33 |
+
"angle": 0,
|
| 34 |
+
"content": "zhhzhang@bjut.edu.cn {tgoldsack1, c.scarton}@sheffield.ac.uk chenghua.lin@manchester.ac.uk"
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "title",
|
| 38 |
+
"bbox": [
|
| 39 |
+
0.261,
|
| 40 |
+
0.253,
|
| 41 |
+
0.341,
|
| 42 |
+
0.269
|
| 43 |
+
],
|
| 44 |
+
"angle": 0,
|
| 45 |
+
"content": "Abstract"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"bbox": [
|
| 50 |
+
0.142,
|
| 51 |
+
0.28,
|
| 52 |
+
0.461,
|
| 53 |
+
0.621
|
| 54 |
+
],
|
| 55 |
+
"angle": 0,
|
| 56 |
+
"content": "Automatic scientific lay summarisation aims to produce summaries of scientific articles that are comprehensible to non-expert audiences. However, previous work assumes a one-size-fits-all approach, where the content and style of the produced summary are entirely dependent on the data used to train the model. In practice, audiences with different goals and levels of expertise will have specific needs, impacting what content should appear in a lay summary and how it should be presented. Aiming to address this disparity, we propose ATLAS, a novel abstractive summarisation approach that can control various properties that contribute to the overall \"layness\" of the generated summary using targeted control attributes. We evaluate ATLAS on a combination of biomedical lay summarisation datasets, where it outperforms state-of-the-art baselines using both automatic and human evaluations. Additional analyses provided on the discriminatory power and emergent influence of our selected controllable attributes further attest to the effectiveness of our approach."
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "title",
|
| 60 |
+
"bbox": [
|
| 61 |
+
0.115,
|
| 62 |
+
0.632,
|
| 63 |
+
0.262,
|
| 64 |
+
0.647
|
| 65 |
+
],
|
| 66 |
+
"angle": 0,
|
| 67 |
+
"content": "1 Introduction"
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"bbox": [
|
| 72 |
+
0.113,
|
| 73 |
+
0.658,
|
| 74 |
+
0.49,
|
| 75 |
+
0.898
|
| 76 |
+
],
|
| 77 |
+
"angle": 0,
|
| 78 |
+
"content": "Lay summarisation is defined as producing a summary of a scientific article that is comprehensible to non-experts (King et al., 2017). Recent work has shown that, when compared to technical abstracts, lay summaries typically are more readable (lexically and syntactically), more abstractive, and contain more background information, enabling a non-technical reader to better understand their contents (Luo et al., 2022; Cohen et al., 2021; Goldsack et al., 2023b). However, the extent to which these attributes are required within a lay summary depends largely on the specific needs of the reader. For example, a scientist from a related field will require less background information to understand an article's contents than an entirely non-technical"
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"bbox": [
|
| 83 |
+
0.508,
|
| 84 |
+
0.254,
|
| 85 |
+
0.885,
|
| 86 |
+
0.351
|
| 87 |
+
],
|
| 88 |
+
"angle": 0,
|
| 89 |
+
"content": "reader, but they might still require domain-specific jargon to be simplified or explained. Despite its obvious benefits, to our knowledge, no work has yet explored how we can enable such fine-grained control over comprehensibility-related aspects for lay summary generation."
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "text",
|
| 93 |
+
"bbox": [
|
| 94 |
+
0.508,
|
| 95 |
+
0.378,
|
| 96 |
+
0.885,
|
| 97 |
+
0.683
|
| 98 |
+
],
|
| 99 |
+
"angle": 0,
|
| 100 |
+
"content": "In this paper, we propose ATLAS (ATtribute-controlled LAy Summarization), a novel scientific summarisation approach that aims to control four attributes targeting distinct properties contributing to the overall \"layness\" of the generated summary, thus allowing it to cater to the specific needs of different audiences. Although recent attempts at text simplification and story generation have had success influencing the style (Martin et al., 2020; Kong et al., 2021; Sheang and Saggion, 2021) and content (Kong et al., 2021; Tang et al., 2024) of generated text using fine-grained controllable attributes, no work to our knowledge has explored this for scientific summarisation. Luo et al. (2022) recently addressed the task of readability-controlled scientific summarisation, however, this is only done at a binary level, training a model to produce either a technical or non-technical summary based on a single control token."
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"type": "text",
|
| 104 |
+
"bbox": [
|
| 105 |
+
0.508,
|
| 106 |
+
0.711,
|
| 107 |
+
0.887,
|
| 108 |
+
0.919
|
| 109 |
+
],
|
| 110 |
+
"angle": 0,
|
| 111 |
+
"content": "Our approach innovates by enabling a greater degree of controllability through the flexible handling of multiple attributes, allowing it to produce more diverse summaries and better address the specific needs of different audiences. Our results show that ATLAS outperforms state-of-the-art baselines in both automatic and human evaluations across three summary types with varying levels of technicality. Additional analyses confirm that attribute control positively influences performance, and suggest the selected control attributes are able to effectively capture the difference between technical and nontechnical summaries."
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"type": "page_footnote",
|
| 115 |
+
"bbox": [
|
| 116 |
+
0.142,
|
| 117 |
+
0.905,
|
| 118 |
+
0.296,
|
| 119 |
+
0.919
|
| 120 |
+
],
|
| 121 |
+
"angle": 0,
|
| 122 |
+
"content": "* Corresponding author"
|
| 123 |
+
},
|
| 124 |
+
{
|
| 125 |
+
"type": "page_number",
|
| 126 |
+
"bbox": [
|
| 127 |
+
0.486,
|
| 128 |
+
0.928,
|
| 129 |
+
0.516,
|
| 130 |
+
0.941
|
| 131 |
+
],
|
| 132 |
+
"angle": 0,
|
| 133 |
+
"content": "337"
|
| 134 |
+
},
|
| 135 |
+
{
|
| 136 |
+
"type": "footer",
|
| 137 |
+
"bbox": [
|
| 138 |
+
0.097,
|
| 139 |
+
0.946,
|
| 140 |
+
0.902,
|
| 141 |
+
0.974
|
| 142 |
+
],
|
| 143 |
+
"angle": 0,
|
| 144 |
+
"content": "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 337-345 August 11-16, 2024 ©2024 Association for Computational Linguistics"
|
| 145 |
+
}
|
| 146 |
+
],
|
| 147 |
+
[
|
| 148 |
+
{
|
| 149 |
+
"type": "title",
|
| 150 |
+
"bbox": [
|
| 151 |
+
0.114,
|
| 152 |
+
0.085,
|
| 153 |
+
0.264,
|
| 154 |
+
0.101
|
| 155 |
+
],
|
| 156 |
+
"angle": 0,
|
| 157 |
+
"content": "2 Methodology"
|
| 158 |
+
},
|
| 159 |
+
{
|
| 160 |
+
"type": "text",
|
| 161 |
+
"bbox": [
|
| 162 |
+
0.113,
|
| 163 |
+
0.111,
|
| 164 |
+
0.488,
|
| 165 |
+
0.19
|
| 166 |
+
],
|
| 167 |
+
"angle": 0,
|
| 168 |
+
"content": "As discussed in §1, ATLAS aims to control four targeted attributes. We use BART-base as the base model for ATLAS as it represents the state-of-the-art benchmark in previous lay summarisation works (Guo et al., 2021; Goldsack et al., 2022)."
|
| 169 |
+
},
|
| 170 |
+
{
|
| 171 |
+
"type": "text",
|
| 172 |
+
"bbox": [
|
| 173 |
+
0.113,
|
| 174 |
+
0.192,
|
| 175 |
+
0.489,
|
| 176 |
+
0.32
|
| 177 |
+
],
|
| 178 |
+
"angle": 0,
|
| 179 |
+
"content": "Formally, each document \\( x = (x_{1}, x_{2}, \\ldots, x_{n}) \\) of length \\( n \\), where \\( x_{i} \\) is the \\( i \\)-th token, is prepended with a control token sequence \\( l \\) such that \\( x = (l, x_{1}, x_{2}s, \\ldots, x_{n}) \\). \\( l \\) consists of our four selected control tokens, each of which targets distinct characteristics of the output summary that contributes to its overall comprehensibility. We describe each aspect below:"
|
| 180 |
+
},
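As a rough illustration of the control-token scheme just described, the sketch below prepends an attribute token sequence to each document before BART fine-tuning. It is not the released ATLAS code; the token naming (<L_3>, <R_7>, ...) and bin indices are assumptions made for this example.

```python
# Sketch: add one special token per (attribute, bin) pair and prepend the control
# sequence l to the tokenised input document.
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")

# 4 attributes x 10 bins, as described in the paper; naming scheme is assumed.
control_tokens = [f"<{attr}_{b}>" for attr in ("L", "R", "BG", "CWE") for b in range(10)]
tokenizer.add_special_tokens({"additional_special_tokens": control_tokens})
# The model's embedding matrix would also need resizing:
# model.resize_token_embeddings(len(tokenizer))

def prepend_controls(document: str, bins: dict) -> dict:
    """bins maps attribute name -> bin index, e.g. {"L": 3, "R": 7, "BG": 2, "CWE": 5}."""
    l = " ".join(f"<{attr}_{bins[attr]}>" for attr in ("L", "R", "BG", "CWE"))
    return tokenizer(l + " " + document, truncation=True, max_length=1024)

example = prepend_controls("Lay summarisation is defined as ...",
                           {"L": 3, "R": 7, "BG": 2, "CWE": 5})
print(tokenizer.convert_ids_to_tokens(example["input_ids"])[:8])
```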
|
| 181 |
+
{
|
| 182 |
+
"type": "text",
|
| 183 |
+
"bbox": [
|
| 184 |
+
0.113,
|
| 185 |
+
0.331,
|
| 186 |
+
0.488,
|
| 187 |
+
0.379
|
| 188 |
+
],
|
| 189 |
+
"angle": 0,
|
| 190 |
+
"content": "Length (L) The length of the output summary in characters. A more lay audience may require a longer summary to aid comprehension."
|
| 191 |
+
},
|
| 192 |
+
{
|
| 193 |
+
"type": "text",
|
| 194 |
+
"bbox": [
|
| 195 |
+
0.113,
|
| 196 |
+
0.389,
|
| 197 |
+
0.489,
|
| 198 |
+
0.485
|
| 199 |
+
],
|
| 200 |
+
"angle": 0,
|
| 201 |
+
"content": "Readability (R) How easy it is to read the text. This is measured using the Flesh-Kincaid Grade Level (FKGL) metric, which estimates the reading grade level (US) required to understand the generated text based on the total number of sentences, words, and syllables present within it."
|
| 202 |
+
},
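For reference, the FKGL value used by the readability attribute above can be computed with the textstat package, which the implementation details later in this file also mention; the example summary text here is made up.

```python
# Illustrative only: Flesch-Kincaid Grade Level of a candidate summary.
import textstat

summary = ("Muscle fatigue can make it harder to learn new movement skills. "
           "We tested how fatigue changes learning of a simple finger task.")
print(textstat.flesch_kincaid_grade(summary))  # estimated US reading grade level
```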
|
| 203 |
+
{
|
| 204 |
+
"type": "text",
|
| 205 |
+
"bbox": [
|
| 206 |
+
0.113,
|
| 207 |
+
0.496,
|
| 208 |
+
0.49,
|
| 209 |
+
0.575
|
| 210 |
+
],
|
| 211 |
+
"angle": 0,
|
| 212 |
+
"content": "Background information (BG) The percentage of sentences classified as containing primarily background information. Intuitively, a more lay audience will require greater levels of background information to contextualise an article."
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"type": "text",
|
| 216 |
+
"bbox": [
|
| 217 |
+
0.113,
|
| 218 |
+
0.587,
|
| 219 |
+
0.49,
|
| 220 |
+
0.78
|
| 221 |
+
],
|
| 222 |
+
"angle": 0,
|
| 223 |
+
"content": "Content word entropy (CWE) The average entropy of content words. We hypothesise that jargon terms are likely to possess higher entropy values, thus lower average CWE is likely to be a property of more lay text. Since jargon terms are predominately nouns, we extract noun phrases as content words using CoreNLP library (Manning et al., 2014). We then follow Xiao et al. (2020) to calculate \\( I(x_{i}) \\) entropy of a given token \\( x_{i} \\) as the negative logarithm of its generation probability \\( P(x_{i}) \\), which is directly extracted from a pre-trained language model."
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"type": "equation",
|
| 227 |
+
"bbox": [
|
| 228 |
+
0.224,
|
| 229 |
+
0.797,
|
| 230 |
+
0.488,
|
| 231 |
+
0.814
|
| 232 |
+
],
|
| 233 |
+
"angle": 0,
|
| 234 |
+
"content": "\\[\nI \\left(x _ {i}\\right) = - \\log P \\left(x _ {i}\\right) \\tag {1}\n\\]"
|
| 235 |
+
},
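A rough sketch of the CWE attribute defined by Equation (1): each candidate content word is masked in turn and scored with SciBERT's masked-LM head, giving I(x_i) = -log P(x_i). This is an approximation of the described procedure, not the authors' implementation; in particular, the naive token filter below stands in for CoreNLP noun-phrase extraction.

```python
# Content word entropy via masked-LM pseudo-probabilities from SciBERT.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
mlm = AutoModelForMaskedLM.from_pretrained("allenai/scibert_scivocab_uncased").eval()

def content_word_entropy(text: str) -> float:
    enc = tok(text, return_tensors="pt", truncation=True)
    ids = enc["input_ids"][0]
    surprisals = []
    for i in range(1, len(ids) - 1):                  # skip [CLS] / [SEP]
        token = tok.convert_ids_to_tokens(int(ids[i]))
        if not token.isalpha() or len(token) < 4:     # naive "content word" filter (assumption)
            continue
        masked = ids.clone()
        masked[i] = tok.mask_token_id
        with torch.no_grad():
            logits = mlm(masked.unsqueeze(0)).logits[0, i]
        log_p = torch.log_softmax(logits, dim=-1)[ids[i]]
        surprisals.append(-log_p.item())              # I(x_i) = -log P(x_i)
    return sum(surprisals) / max(len(surprisals), 1)  # average over content words

print(content_word_entropy("Pneumococcal serotype replacement after vaccination."))
```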
|
| 236 |
+
{
|
| 237 |
+
"type": "text",
|
| 238 |
+
"bbox": [
|
| 239 |
+
0.113,
|
| 240 |
+
0.823,
|
| 241 |
+
0.489,
|
| 242 |
+
0.919
|
| 243 |
+
],
|
| 244 |
+
"angle": 0,
|
| 245 |
+
"content": "During model training, true attribute values (as calculated on reference summaries) are used, allowing the model to learn to associate attribute values with summary properties. For all attributes, values are discretized into 10 fixed-width bins depending on their respective range in the train split (from"
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"type": "text",
|
| 249 |
+
"bbox": [
|
| 250 |
+
0.508,
|
| 251 |
+
0.085,
|
| 252 |
+
0.883,
|
| 253 |
+
0.18
|
| 254 |
+
],
|
| 255 |
+
"angle": 0,
|
| 256 |
+
"content": "minimum to maximum observed value), resulting in 10 unique control tokens for each attribute which are added to the vocabulary. For each attribute at test time, we use the most common bin value observed for reference summaries of the training set as attribute values."
|
| 257 |
+
},
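The binning scheme described in the two passages above (10 fixed-width bins between the observed minimum and maximum on the training split, with the most common training bin reused at test time) can be sketched with numpy. The toy values and the bin-token naming are assumptions for illustration.

```python
# Sketch: discretize an attribute into 10 fixed-width bins and pick the test-time bin.
import numpy as np

def fit_bins(train_values, n_bins: int = 10):
    lo, hi = float(np.min(train_values)), float(np.max(train_values))
    return np.linspace(lo, hi, n_bins + 1)           # n_bins + 1 edges

def to_bin(value: float, edges) -> int:
    # np.digitize against the inner edges gives 0..n_bins-1; clip keeps extremes in range.
    return int(np.clip(np.digitize(value, edges[1:-1]), 0, len(edges) - 2))

lengths = np.array([812, 1540, 2210, 990, 1873, 2600])   # toy summary lengths (chars)
edges = fit_bins(lengths)
train_bins = [to_bin(v, edges) for v in lengths]
test_time_bin = max(set(train_bins), key=train_bins.count)  # most common training bin
print(train_bins, f"<L_{test_time_bin}>")
```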
|
| 258 |
+
{
|
| 259 |
+
"type": "title",
|
| 260 |
+
"bbox": [
|
| 261 |
+
0.509,
|
| 262 |
+
0.194,
|
| 263 |
+
0.719,
|
| 264 |
+
0.211
|
| 265 |
+
],
|
| 266 |
+
"angle": 0,
|
| 267 |
+
"content": "3 Experimental Setup"
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"type": "text",
|
| 271 |
+
"bbox": [
|
| 272 |
+
0.508,
|
| 273 |
+
0.22,
|
| 274 |
+
0.885,
|
| 275 |
+
0.476
|
| 276 |
+
],
|
| 277 |
+
"angle": 0,
|
| 278 |
+
"content": "Data. We experiment on the biomedical lay summarisation datasets introduced in Goldsack et al. (2022), eLife (4.8k articles) and PLOS (27.5k articles), for which target lay summaries have been shown to contain different levels of \"layness\". Specifically, eLife's lay summaries have been characterized as longer, more readable, and more abstractive than those of PLOS, as well as being empirically observed to be suitable for a more lay audience. We, therefore, combine both of these datasets, allowing us to expose ATLAS to a greater variety of attribute values during training. For each article in the combined dataset, we train our ATLAS to produce both the technical abstract and lay summary, using our control attributes to differentiate between them."
|
| 279 |
+
},
|
| 280 |
+
{
|
| 281 |
+
"type": "text",
|
| 282 |
+
"bbox": [
|
| 283 |
+
0.508,
|
| 284 |
+
0.488,
|
| 285 |
+
0.884,
|
| 286 |
+
0.616
|
| 287 |
+
],
|
| 288 |
+
"angle": 0,
|
| 289 |
+
"content": "Evaluation. We employ several automatic metrics to evaluate the performance of ATLAS. In line with common summarisation practice, we calculate ROUGE-1,2, and L variants (Lin, 2004) and BERTScore (Zhang et al., 2019). We also measure Dale-Chall Readability Score, a metric that estimates US grade level based on the frequency of common words."
|
| 290 |
+
},
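The evaluation setup described above can be reproduced in outline with off-the-shelf packages; this is an illustrative snippet rather than the authors' exact scripts, and the reference/prediction strings are made up. BERTScore would follow the same pattern with the bert-score package and is omitted to keep the sketch short.

```python
# ROUGE-1/2/L via rouge-score; Dale-Chall and FKGL via textstat.
from rouge_score import rouge_scorer
import textstat

reference = "Muscle fatigue has a lasting, domain-specific effect on motor skill learning."
prediction = "Fatigue durably impairs learning of new motor skills in the trained hand."

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
rouge = {k: v.fmeasure for k, v in scorer.score(reference, prediction).items()}

print(rouge)
print("DCRS:", textstat.dale_chall_readability_score(prediction))
print("FKGL:", textstat.flesch_kincaid_grade(prediction))
```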
|
| 291 |
+
{
|
| 292 |
+
"type": "text",
|
| 293 |
+
"bbox": [
|
| 294 |
+
0.508,
|
| 295 |
+
0.627,
|
| 296 |
+
0.884,
|
| 297 |
+
0.884
|
| 298 |
+
],
|
| 299 |
+
"angle": 0,
|
| 300 |
+
"content": "Baselines. To enable fair comparison, we rerun many of the baseline approaches used by Goldsack et al. (2022) (which have the abstract included in the input) on the combined datasets. Specifically, we rerun the Lead-3, Lead-K, and oracle heuristic baselines; TextRank (Mihalcea and Tairau, 2004), LexRank (Erkan and Radev, 2004), and HipoRank (Dong et al., 2021) unsupervised models; and BART and \\(\\mathrm{BART}_{\\mathrm{Scaffold}}\\) supervised models. Here, we use the transformer-based BART base model (Lewis et al., 2020), which we fine-tune on our own datasets. \\(\\mathrm{BART}_{\\mathrm{Scaffold}}\\) is the recreation of a model from Goldsack et al. (2022) which is trained using a binary control token (<abs> or <lay>) to produce either an abstract or lay summary for an article. This model is equivalent to that pro"
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"type": "page_footnote",
|
| 304 |
+
"bbox": [
|
| 305 |
+
0.509,
|
| 306 |
+
0.893,
|
| 307 |
+
0.883,
|
| 308 |
+
0.919
|
| 309 |
+
],
|
| 310 |
+
"angle": 0,
|
| 311 |
+
"content": "To combine the datasets, we merge the training and validation sets. We evaluate on the test sets separately."
|
| 312 |
+
},
|
| 313 |
+
{
|
| 314 |
+
"type": "page_number",
|
| 315 |
+
"bbox": [
|
| 316 |
+
0.486,
|
| 317 |
+
0.928,
|
| 318 |
+
0.516,
|
| 319 |
+
0.94
|
| 320 |
+
],
|
| 321 |
+
"angle": 0,
|
| 322 |
+
"content": "338"
|
| 323 |
+
}
|
| 324 |
+
],
|
| 325 |
+
[
|
| 326 |
+
{
|
| 327 |
+
"type": "image",
|
| 328 |
+
"bbox": [
|
| 329 |
+
0.118,
|
| 330 |
+
0.085,
|
| 331 |
+
0.321,
|
| 332 |
+
0.195
|
| 333 |
+
],
|
| 334 |
+
"angle": 0,
|
| 335 |
+
"content": null
|
| 336 |
+
},
|
| 337 |
+
{
|
| 338 |
+
"type": "image",
|
| 339 |
+
"bbox": [
|
| 340 |
+
0.329,
|
| 341 |
+
0.086,
|
| 342 |
+
0.505,
|
| 343 |
+
0.195
|
| 344 |
+
],
|
| 345 |
+
"angle": 0,
|
| 346 |
+
"content": null
|
| 347 |
+
},
|
| 348 |
+
{
|
| 349 |
+
"type": "image",
|
| 350 |
+
"bbox": [
|
| 351 |
+
0.529,
|
| 352 |
+
0.086,
|
| 353 |
+
0.691,
|
| 354 |
+
0.195
|
| 355 |
+
],
|
| 356 |
+
"angle": 0,
|
| 357 |
+
"content": null
|
| 358 |
+
},
|
| 359 |
+
{
|
| 360 |
+
"type": "image",
|
| 361 |
+
"bbox": [
|
| 362 |
+
0.706,
|
| 363 |
+
0.099,
|
| 364 |
+
0.881,
|
| 365 |
+
0.195
|
| 366 |
+
],
|
| 367 |
+
"angle": 0,
|
| 368 |
+
"content": null
|
| 369 |
+
},
|
| 370 |
+
{
|
| 371 |
+
"type": "image_caption",
|
| 372 |
+
"bbox": [
|
| 373 |
+
0.113,
|
| 374 |
+
0.205,
|
| 375 |
+
0.882,
|
| 376 |
+
0.234
|
| 377 |
+
],
|
| 378 |
+
"angle": 0,
|
| 379 |
+
"content": "Figure 1: Visualisation of the density distributions of controllable attribute values for each summary type in the combined train split."
|
| 380 |
+
},
|
| 381 |
+
{
|
| 382 |
+
"type": "text",
|
| 383 |
+
"bbox": [
|
| 384 |
+
0.112,
|
| 385 |
+
0.26,
|
| 386 |
+
0.486,
|
| 387 |
+
0.291
|
| 388 |
+
],
|
| 389 |
+
"angle": 0,
|
| 390 |
+
"content": "posed by Luo et al. (2022), the only previous work on controllable lay summarisation.\\(^{2}\\)"
|
| 391 |
+
},
|
| 392 |
+
{
|
| 393 |
+
"type": "text",
|
| 394 |
+
"bbox": [
|
| 395 |
+
0.116,
|
| 396 |
+
0.293,
|
| 397 |
+
0.49,
|
| 398 |
+
0.678
|
| 399 |
+
],
|
| 400 |
+
"angle": 0,
|
| 401 |
+
"content": "Finally, we include two baselines based on ChatGPT (3.5-turbo), so as to compare against an accessible and widely used method of controlling text generation (i.e., prompt engineering). Our first GPT baseline (GPT3.5-zs) uses the following zero-shot prompts: (i) \"Summarize the following article for an expert audience that is familiar with the technical aspects of the content\" to generate technical abstracts; (ii) \"Summarize the following article for a non-expert audience that has some familiarity with the technical aspects of the content\" to generate PLOS lay summaries, and (iii) \"Summarize the following article for a non-expert audience that has no familiarity with the technical aspects of the content\" to generate eLife lay summaries. Our second GPT baseline (GPT3.5-mdc) replicates the method of Turbitt et al. (2023), the best-performing team of the recent BioLaySumm shared task (Goldsack et al., 2023a). Based on in-context learning, this method dynamically selects the maximum number of input-output examples that fit in the context window (separated by the simple prompt \"Explanation:\") to generate lay summaries based on only the article abstract."
|
| 402 |
+
},
|
| 403 |
+
{
|
| 404 |
+
"type": "text",
|
| 405 |
+
"bbox": [
|
| 406 |
+
0.113,
|
| 407 |
+
0.691,
|
| 408 |
+
0.489,
|
| 409 |
+
0.819
|
| 410 |
+
],
|
| 411 |
+
"angle": 0,
|
| 412 |
+
"content": "Implementation Details. As mentioned in §2, we employ BART-base as our base model. We train our ATLAS for a maximum of 5 epochs on a GeForce GTX-1080Ti GPU, retaining the checkpoint with the best average ROUGE-1/2/L score on the validation set. We set the batch size to 1 and keep the \\(\\alpha\\) scale factor (§2) at the default value of 0.2 from Kong et al. (2021)."
|
| 413 |
+
},
|
| 414 |
+
{
|
| 415 |
+
"type": "text",
|
| 416 |
+
"bbox": [
|
| 417 |
+
0.113,
|
| 418 |
+
0.821,
|
| 419 |
+
0.49,
|
| 420 |
+
0.87
|
| 421 |
+
],
|
| 422 |
+
"angle": 0,
|
| 423 |
+
"content": "For calculating control attributes, we use SciBERT (Beltagy et al., 2019) for entropy calculation, and we employ a BERT-based sequential classi"
|
| 424 |
+
},
|
| 425 |
+
{
|
| 426 |
+
"type": "table",
|
| 427 |
+
"bbox": [
|
| 428 |
+
0.534,
|
| 429 |
+
0.256,
|
| 430 |
+
0.86,
|
| 431 |
+
0.32
|
| 432 |
+
],
|
| 433 |
+
"angle": 0,
|
| 434 |
+
"content": "<table><tr><td>Summary type</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>Abstract</td><td>0.69</td><td>0.75</td><td>0.72</td></tr><tr><td>eLife-Lay</td><td>0.71</td><td>0.71</td><td>0.71</td></tr><tr><td>PLOS-Lay</td><td>0.73</td><td>0.66</td><td>0.71</td></tr></table>"
|
| 435 |
+
},
|
| 436 |
+
{
|
| 437 |
+
"type": "table_caption",
|
| 438 |
+
"bbox": [
|
| 439 |
+
0.508,
|
| 440 |
+
0.33,
|
| 441 |
+
0.882,
|
| 442 |
+
0.358
|
| 443 |
+
],
|
| 444 |
+
"angle": 0,
|
| 445 |
+
"content": "Table 1: Classifier performance for 3-way classification between summary types on the combined test set."
|
| 446 |
+
},
|
| 447 |
+
{
|
| 448 |
+
"type": "text",
|
| 449 |
+
"bbox": [
|
| 450 |
+
0.508,
|
| 451 |
+
0.384,
|
| 452 |
+
0.884,
|
| 453 |
+
0.465
|
| 454 |
+
],
|
| 455 |
+
"angle": 0,
|
| 456 |
+
"content": "fier (Cohan et al., 2019) trained on the PubMedRTC dataset (Dernoncourt and Lee, 2017) for background sentence classification (as described in Goldsack et al. (2022)). We compute the FKGL readability score using the textstat package."
|
| 457 |
+
},
|
| 458 |
+
{
|
| 459 |
+
"type": "title",
|
| 460 |
+
"bbox": [
|
| 461 |
+
0.509,
|
| 462 |
+
0.477,
|
| 463 |
+
0.73,
|
| 464 |
+
0.493
|
| 465 |
+
],
|
| 466 |
+
"angle": 0,
|
| 467 |
+
"content": "4 Experimental Results"
|
| 468 |
+
},
|
| 469 |
+
{
|
| 470 |
+
"type": "text",
|
| 471 |
+
"bbox": [
|
| 472 |
+
0.507,
|
| 473 |
+
0.502,
|
| 474 |
+
0.884,
|
| 475 |
+
0.662
|
| 476 |
+
],
|
| 477 |
+
"angle": 0,
|
| 478 |
+
"content": "Discriminatory ability of control attributes. To validate the ability of our controllable attributes to distinguish between different summary types, we plot the distribution of attribute values for each type in Figure 1. The figure suggests that, in combination, the attributes are able to capture characteristic differences between summary types, as instances in which two summary types share a similar distribution for one attribute can typically be separated by other attributes.<sup>3</sup>"
|
| 479 |
+
},
|
| 480 |
+
{
|
| 481 |
+
"type": "text",
|
| 482 |
+
"bbox": [
|
| 483 |
+
0.508,
|
| 484 |
+
0.664,
|
| 485 |
+
0.884,
|
| 486 |
+
0.791
|
| 487 |
+
],
|
| 488 |
+
"angle": 0,
|
| 489 |
+
"content": "To further evidence this, we use the training set to train a simple logistic regression classifier, using only the attribute values of the reference summaries as features, to discriminate between reference summary types. The test set results in Table 1 show that all summary types are classified with an F1-score above 0.7, attesting to the discriminatory power of our control attributes."
|
| 490 |
+
},
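The discriminative check described above can be sketched with scikit-learn: a logistic regression classifier that sees only the four attribute values of each reference summary and predicts its summary type. The feature rows below are invented stand-ins; the real features come from the combined PLOS/eLife training split.

```python
# Sketch of the logistic-regression summary-type classifier over attribute features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# columns: [length (chars), FKGL, background sentence %, content-word entropy]
X_train = np.array([[1200, 14.5, 0.20, 9.1],   # abstract-like
                    [2300, 13.8, 0.45, 8.4],   # PLOS-lay-like
                    [3400, 10.9, 0.60, 7.6],   # eLife-lay-like
                    [1100, 15.1, 0.18, 9.3],
                    [2500, 13.5, 0.50, 8.2],
                    [3600, 11.2, 0.65, 7.5]])
y_train = ["abstract", "plos_lay", "elife_lay"] * 2

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_train, clf.predict(X_train)))
```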
|
| 491 |
+
{
|
| 492 |
+
"type": "text",
|
| 493 |
+
"bbox": [
|
| 494 |
+
0.508,
|
| 495 |
+
0.801,
|
| 496 |
+
0.882,
|
| 497 |
+
0.849
|
| 498 |
+
],
|
| 499 |
+
"angle": 0,
|
| 500 |
+
"content": "Summarisation performance. Table 2 presents the performance of ATLAS and baseline models using automatic metrics on the test sets of PLOS"
|
| 501 |
+
},
|
| 502 |
+
{
|
| 503 |
+
"type": "page_footnote",
|
| 504 |
+
"bbox": [
|
| 505 |
+
0.508,
|
| 506 |
+
0.858,
|
| 507 |
+
0.882,
|
| 508 |
+
0.919
|
| 509 |
+
],
|
| 510 |
+
"angle": 0,
|
| 511 |
+
"content": "\\(^{3}\\)E.g., PLOS lay summaries and abstracts have similar readability distributions but differ in their comprehensibility, length, and entropy distributions. Similarly, PLOS and eLife lay summaries have similar comprehensibility distributions but differ in their readability and length."
|
| 512 |
+
},
|
| 513 |
+
{
|
| 514 |
+
"type": "page_footnote",
|
| 515 |
+
"bbox": [
|
| 516 |
+
0.113,
|
| 517 |
+
0.881,
|
| 518 |
+
0.489,
|
| 519 |
+
0.919
|
| 520 |
+
],
|
| 521 |
+
"angle": 0,
|
| 522 |
+
"content": "2The original code for Luo et al. (2022) is not yet available at the time of writing and their results are reported on a different dataset and thus are not comparable."
|
| 523 |
+
},
|
| 524 |
+
{
|
| 525 |
+
"type": "page_number",
|
| 526 |
+
"bbox": [
|
| 527 |
+
0.486,
|
| 528 |
+
0.928,
|
| 529 |
+
0.516,
|
| 530 |
+
0.941
|
| 531 |
+
],
|
| 532 |
+
"angle": 0,
|
| 533 |
+
"content": "339"
|
| 534 |
+
}
|
| 535 |
+
],
|
| 536 |
+
[
|
| 537 |
+
{
|
| 538 |
+
"type": "table",
|
| 539 |
+
"bbox": [
|
| 540 |
+
0.118,
|
| 541 |
+
0.082,
|
| 542 |
+
0.884,
|
| 543 |
+
0.213
|
| 544 |
+
],
|
| 545 |
+
"angle": 0,
|
| 546 |
+
"content": "<table><tr><td rowspan=\"2\" colspan=\"2\">Model</td><td colspan=\"6\">Abstract</td><td colspan=\"6\">Lay summary - PLOS</td><td colspan=\"6\">Lay summary - eLife</td></tr><tr><td>R-1</td><td>R-2</td><td>R-L</td><td>BS</td><td>DCRS</td><td>FKGL</td><td>R-1</td><td>R-2</td><td>R-L</td><td>BS</td><td>DCRS</td><td>FKGL</td><td>R-1</td><td>R-2</td><td>R-L</td><td>BS</td><td>DCRS</td><td>FKGL</td></tr><tr><td rowspan=\"3\">Hemicratic</td><td>Lead-3</td><td>23.86</td><td>5.66</td><td>21.48</td><td>81.17</td><td>12.66</td><td>14.82</td><td>27.41</td><td>6.87</td><td>24.61</td><td>83.36</td><td>12.66</td><td>15.08</td><td>19.41</td><td>4.06</td><td>18.02</td><td>81.65</td><td>12.65</td><td>13.30</td></tr><tr><td>Lead-K</td><td>35.69</td><td>9.07</td><td>32.70</td><td>82.86</td><td>11.69</td><td>14.49</td><td>38.28</td><td>9.45</td><td>34.8</td><td>83.72</td><td>11.88</td><td>14.95</td><td>37.27</td><td>7.53</td><td>35.18</td><td>82.05</td><td>10.58</td><td>11.89</td></tr><tr><td>Oracle</td><td>60.08</td><td>27.48</td><td>55.95</td><td>87.35</td><td>11.12</td><td>15.15</td><td>57.82</td><td>23.92</td><td>53.37</td><td>87.13</td><td>11.20</td><td>15.28</td><td>48.92</td><td>13.42</td><td>46.30</td><td>82.94</td><td>10.51</td><td>13.18</td></tr><tr><td rowspan=\"3\">Unsupp.</td><td>TextRank</td><td>40.26</td><td>11.53</td><td>36.02</td><td>83.83</td><td>11.78</td><td>20.08</td><td>37.55</td><td>8.50</td><td>33.28</td><td>83.43</td><td>11.87</td><td>20.27</td><td>33.88</td><td>5.79</td><td>31.55</td><td>81.16</td><td>11.30</td><td>18.98</td></tr><tr><td>LexRank</td><td>38.22</td><td>13.06</td><td>35.42</td><td>83.85</td><td>9.70</td><td>14.23</td><td>31.20</td><td>9.09</td><td>28.72</td><td>82.97</td><td>9.70</td><td>14.59</td><td>32.25</td><td>5.73</td><td>30.45</td><td>80.67</td><td>9.68</td><td>13.32</td></tr><tr><td>HipoRank</td><td>36.95</td><td>10.19</td><td>33.89</td><td>83.22</td><td>12.15</td><td>14.46</td><td>37.67</td><td>9.22</td><td>34.28</td><td>83.68</td><td>12.15</td><td>14.69</td><td>31.50</td><td>5.17</td><td>29.68</td><td>80.88</td><td>12.13</td><td>12.13</td></tr><tr><td 
rowspan=\"6\">Supervised</td><td>BART</td><td>43.34</td><td>13.14</td><td>39.80</td><td>85.48</td><td>11.33</td><td>14.40</td><td>43.52</td><td>12.09</td><td>39.67</td><td>85.70</td><td>11.29</td><td>14.54</td><td>31.17</td><td>6.74</td><td>29.20</td><td>83.55</td><td>11.15</td><td>13.87</td></tr><tr><td>BART_Scaffold</td><td>43.13</td><td>12.87</td><td>39.66</td><td>85.33</td><td>11.10</td><td>14.14</td><td>43.73</td><td>12.22</td><td>39.92</td><td>85.67</td><td>11.30</td><td>14.58</td><td>43.01</td><td>10.82</td><td>40.54</td><td>84.88</td><td>9.68</td><td>11.85</td></tr><tr><td>GPT3.5-zs</td><td>28.69</td><td>6.52</td><td>15.04</td><td>82.76</td><td>11.70</td><td>14.32</td><td>42.74</td><td>12.70</td><td>22.28</td><td>86.32</td><td>10.40</td><td>13.19</td><td>33.72</td><td>8.45</td><td>16.95</td><td>84.36</td><td>10.36</td><td>13.03</td></tr><tr><td>GPT3.5-mdc</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>44.41</td><td>14.16</td><td>41.12</td><td>86.55</td><td>10.36</td><td>13.32</td><td>37.97</td><td>9.39</td><td>35.57</td><td>84.22</td><td>10.78</td><td>13.70</td></tr><tr><td>ATLAS</td><td>45.87</td><td>14.08</td><td>42.32</td><td>85.54</td><td>10.96</td><td>14.21</td><td>44.44</td><td>12.33</td><td>40.60</td><td>85.70</td><td>11.22</td><td>14.58</td><td>46.80</td><td>12.57</td><td>44.14</td><td>85.20</td><td>8.95</td><td>10.87</td></tr><tr><td>ATLAS_Oracle</td><td>46.11</td><td>14.07</td><td>42.51</td><td>85.69</td><td>10.99</td><td>14.13</td><td>44.97</td><td>12.49</td><td>41.02</td><td>85.82</td><td>11.21</td><td>14.48</td><td>46.61</td><td>12.29</td><td>43.95</td><td>85.11</td><td>9.18</td><td>11.39</td></tr></table>"
|
| 547 |
+
},
|
| 548 |
+
{
|
| 549 |
+
"type": "table_caption",
|
| 550 |
+
"bbox": [
|
| 551 |
+
0.113,
|
| 552 |
+
0.223,
|
| 553 |
+
0.884,
|
| 554 |
+
0.266
|
| 555 |
+
],
|
| 556 |
+
"angle": 0,
|
| 557 |
+
"content": "Table 2: Summarization performance on the PLOS and eLife test sets (abstracts combined). \\( \\mathrm{R} = \\) ROUGE F1 (↑), BS = BERTScore (↑), DCRS = Dale-Chall Readability Score (↓), FKGL = Flesh-Kincaid Grade Level (↓). For supervised models, we highlight the best score obtained for each metric in bold and underline second best."
|
| 558 |
+
},
|
| 559 |
+
{
|
| 560 |
+
"type": "text",
|
| 561 |
+
"bbox": [
|
| 562 |
+
0.112,
|
| 563 |
+
0.279,
|
| 564 |
+
0.489,
|
| 565 |
+
0.408
|
| 566 |
+
],
|
| 567 |
+
"angle": 0,
|
| 568 |
+
"content": "and eLife. We include the results for ATLAS under two conditions: 1) one utilizing the average value for each attribute observed in the training data for each summary type (ATLAS); and 2) one using true attribute values obtained from gold standard summaries \\((\\mathrm{ATLAS}_{\\mathrm{Oracle}})\\), where \\(\\mathrm{ATLAS}_{\\mathrm{Oracle}}\\) is intended to provide an upper bound of the obtainable performance using our control attributes."
|
| 569 |
+
},
|
| 570 |
+
{
|
| 571 |
+
"type": "text",
|
| 572 |
+
"bbox": [
|
| 573 |
+
0.112,
|
| 574 |
+
0.409,
|
| 575 |
+
0.489,
|
| 576 |
+
0.601
|
| 577 |
+
],
|
| 578 |
+
"angle": 0,
|
| 579 |
+
"content": "For all metrics, it is evident from Table 2 that ATLAS exceeds the performance of all baseline approaches for both eLife lay summaries and abstracts, demonstrating a strong ability to control the technicality of generated text whilst producing high-quality summaries. Interestingly, although the GPT3.5-mdc baseline achieves a slightly stronger all-round performance for PLOS lay summaries, it fails to maintain this for the more \"lay\" summaries of eLife where ATLAS achieves significantly better performance, indicating that our control attributes can effectively capture these differences."
|
| 580 |
+
},
|
| 581 |
+
{
|
| 582 |
+
"type": "text",
|
| 583 |
+
"bbox": [
|
| 584 |
+
0.113,
|
| 585 |
+
0.602,
|
| 586 |
+
0.49,
|
| 587 |
+
0.699
|
| 588 |
+
],
|
| 589 |
+
"angle": 0,
|
| 590 |
+
"content": "In all cases, ATLAS also achieves scores that are comparable to (and sometimes exceeding) that of \\(\\mathrm{ATLAS}_{\\text{Oracle}}\\), suggesting that the use of the most frequently observed bin value for control attributes is effective for producing the appropriate characteristics for each summary type."
|
| 591 |
+
},
|
| 592 |
+
{
|
| 593 |
+
"type": "text",
|
| 594 |
+
"bbox": [
|
| 595 |
+
0.113,
|
| 596 |
+
0.71,
|
| 597 |
+
0.489,
|
| 598 |
+
0.805
|
| 599 |
+
],
|
| 600 |
+
"angle": 0,
|
| 601 |
+
"content": "Ablation study. To assess the contribution of each attribute to model performance, we conduct an ablation study, evaluating \\(\\mathrm{ATLAS}_{\\mathrm{Oracle}}\\) under different configurations. Table 3 reports the results of this study for abstracts and lay summaries on the combined test sets of PLOS and eLife."
|
| 602 |
+
},
|
| 603 |
+
{
|
| 604 |
+
"type": "text",
|
| 605 |
+
"bbox": [
|
| 606 |
+
0.113,
|
| 607 |
+
0.807,
|
| 608 |
+
0.49,
|
| 609 |
+
0.871
|
| 610 |
+
],
|
| 611 |
+
"angle": 0,
|
| 612 |
+
"content": "The table shows that the removal of control attributes has a significant detrimental effect on performance. Additionally, when only a single attribute is included, the length-based control has"
|
| 613 |
+
},
|
| 614 |
+
{
|
| 615 |
+
"type": "table",
|
| 616 |
+
"bbox": [
|
| 617 |
+
0.513,
|
| 618 |
+
0.276,
|
| 619 |
+
0.882,
|
| 620 |
+
0.405
|
| 621 |
+
],
|
| 622 |
+
"angle": 0,
|
| 623 |
+
"content": "<table><tr><td rowspan=\"2\">Model</td><td colspan=\"4\">Lay summary</td><td colspan=\"4\">Abstract</td></tr><tr><td>R-1</td><td>R-2</td><td>R-L</td><td>DCRS</td><td>R-1</td><td>R-2</td><td>R-L</td><td>DCRS</td></tr><tr><td>BART</td><td>41.68</td><td>11.29</td><td>38.12</td><td>11.27</td><td>43.34</td><td>13.14</td><td>39.80</td><td>11.33</td></tr><tr><td>+R</td><td>43.34</td><td>12.03</td><td>39.75</td><td>10.91</td><td>43.49</td><td>13.23</td><td>39.95</td><td>11.12</td></tr><tr><td>+BG</td><td>42.52</td><td>11.71</td><td>39.01</td><td>11.01</td><td>43.74</td><td>13.65</td><td>40.35</td><td>10.98</td></tr><tr><td>+CWE</td><td>41.58</td><td>11.21</td><td>38.04</td><td>11.28</td><td>44.23</td><td>13.48</td><td>40.56</td><td>11.35</td></tr><tr><td>+L</td><td>44.22</td><td>12.21</td><td>40.55</td><td>10.81</td><td>44.83</td><td>13.75</td><td>41.31</td><td>11.03</td></tr><tr><td>+L+BG</td><td>44.66</td><td>12.36</td><td>40.96</td><td>10.99</td><td>45.67</td><td>13.78</td><td>42.02</td><td>11.17</td></tr><tr><td>+L+R</td><td>44.52</td><td>12.10</td><td>40.73</td><td>10.92</td><td>45.54</td><td>13.64</td><td>41.78</td><td>11.21</td></tr><tr><td>+L+CWE</td><td>44.72</td><td>12.41</td><td>41.04</td><td>10.88</td><td>45.87</td><td>13.99</td><td>42.32</td><td>10.10</td></tr><tr><td>+L+R+BG</td><td>44.82</td><td>12.41</td><td>41.10</td><td>10.97</td><td>45.94</td><td>14.07</td><td>42.32</td><td>11.10</td></tr><tr><td>+L+R+CWE</td><td>44.83</td><td>12.39</td><td>41.05</td><td>10.90</td><td>45.60</td><td>13.63</td><td>41.84</td><td>11.21</td></tr><tr><td>+L+BG+CWE</td><td>45.01</td><td>12.56</td><td>41.38</td><td>10.88</td><td>46.04</td><td>14.16</td><td>42.44</td><td>11.06</td></tr><tr><td>ATLASOracle</td><td>45.22</td><td>12.47</td><td>41.45</td><td>10.91</td><td>46.11</td><td>14.07</td><td>42.51</td><td>10.99</td></tr></table>"
|
| 624 |
+
},
|
| 625 |
+
{
|
| 626 |
+
"type": "table_caption",
|
| 627 |
+
"bbox": [
|
| 628 |
+
0.508,
|
| 629 |
+
0.415,
|
| 630 |
+
0.884,
|
| 631 |
+
0.487
|
| 632 |
+
],
|
| 633 |
+
"angle": 0,
|
| 634 |
+
"content": "Table 3: Ablation study on the ROUGE-based performance of ATLAS under different configurations using true attribute values. \"+\" denotes aspect addition. L = Length, R = Readability, CWE = Content Word Entropy, BG = Background information."
|
| 635 |
+
},
|
| 636 |
+
{
|
| 637 |
+
"type": "text",
|
| 638 |
+
"bbox": [
|
| 639 |
+
0.508,
|
| 640 |
+
0.528,
|
| 641 |
+
0.884,
|
| 642 |
+
0.819
|
| 643 |
+
],
|
| 644 |
+
"angle": 0,
|
| 645 |
+
"content": "the highest ROUGE scores, particularly for lay summaries. This is to be expected, as lay summaries are known to differ significantly in length between PLOS (avg. 175.6 words) and eLife (avg. 347.6 words). When employing attributes in combination, we can see that the addition of content word entropy control and the subsequent addition of background information control have the greatest benefit to performance for ATLAS with 2 and 3 attributes, respectively. Interestingly, no attribute emerges clearly as the least effective as, although readability score control is the only one not included in the 3 attribute model, its inclusion in the single attribute model has clear benefits for lay summary performance. This provides further evidence that, in combination, our control attributes are able to capture the differences between summary types and effectuate them during generation."
|
| 646 |
+
},
|
| 647 |
+
{
|
| 648 |
+
"type": "text",
|
| 649 |
+
"bbox": [
|
| 650 |
+
0.508,
|
| 651 |
+
0.856,
|
| 652 |
+
0.884,
|
| 653 |
+
0.919
|
| 654 |
+
],
|
| 655 |
+
"angle": 0,
|
| 656 |
+
"content": "Human evaluation. To provide a comprehensive assessment of the summaries generated, we conducted a human evaluation involving our proposed model ATLAS and the strongest baseline model"
|
| 657 |
+
},
|
| 658 |
+
{
|
| 659 |
+
"type": "page_footnote",
|
| 660 |
+
"bbox": [
|
| 661 |
+
0.113,
|
| 662 |
+
0.881,
|
| 663 |
+
0.488,
|
| 664 |
+
0.919
|
| 665 |
+
],
|
| 666 |
+
"angle": 0,
|
| 667 |
+
"content": "<sup>4</sup>We use ATLAS<sub>Oracle</sub> as the subject of this experiment rather than ATLAS to get a true reflection of each attribute's influence, rather than an approximation."
|
| 668 |
+
},
|
| 669 |
+
{
|
| 670 |
+
"type": "page_number",
|
| 671 |
+
"bbox": [
|
| 672 |
+
0.486,
|
| 673 |
+
0.929,
|
| 674 |
+
0.517,
|
| 675 |
+
0.941
|
| 676 |
+
],
|
| 677 |
+
"angle": 0,
|
| 678 |
+
"content": "340"
|
| 679 |
+
}
|
| 680 |
+
],
|
| 681 |
+
[
|
| 682 |
+
{
|
| 683 |
+
"type": "table",
|
| 684 |
+
"bbox": [
|
| 685 |
+
0.137,
|
| 686 |
+
0.082,
|
| 687 |
+
0.465,
|
| 688 |
+
0.14
|
| 689 |
+
],
|
| 690 |
+
"angle": 0,
|
| 691 |
+
"content": "<table><tr><td rowspan=\"2\">Criteria</td><td colspan=\"2\">eLife</td><td colspan=\"2\">PLOS</td></tr><tr><td>BART</td><td>ATLAS</td><td>BART</td><td>ATLAS</td></tr><tr><td>Comprehensiveness</td><td>2.30</td><td>2.65</td><td>2.00</td><td>2.55</td></tr><tr><td>Layness</td><td>2.60</td><td>3.05</td><td>2.10</td><td>2.45</td></tr><tr><td>Factuality</td><td>2.20</td><td>2.85</td><td>2.05</td><td>2.40</td></tr></table>"
|
| 692 |
+
},
|
| 693 |
+
{
|
| 694 |
+
"type": "table_caption",
|
| 695 |
+
"bbox": [
|
| 696 |
+
0.114,
|
| 697 |
+
0.149,
|
| 698 |
+
0.486,
|
| 699 |
+
0.192
|
| 700 |
+
],
|
| 701 |
+
"angle": 0,
|
| 702 |
+
"content": "Table 4: Human evaluation on eLife and PLOS. Mean evaluator ratings (1-5) obtained by BART and ATLAS outputs for each metric."
|
| 703 |
+
},
|
| 704 |
+
{
|
| 705 |
+
"type": "text",
|
| 706 |
+
"bbox": [
|
| 707 |
+
0.117,
|
| 708 |
+
0.222,
|
| 709 |
+
0.488,
|
| 710 |
+
0.591
|
| 711 |
+
],
|
| 712 |
+
"angle": 0,
|
| 713 |
+
"content": "(BART) using two experts.\\(^{5}\\) Specifically, adopting a similar setting to the original that of Goldsack et al. (2022), we take a random sample of 10 articles from the test split of each dataset. Alongside each model-generated lay summary, judges are presented with both the abstract and reference lay summary of the given article. We choose not to provide judges with the full article text in an effort to minimise the complexity of the evaluation and the cognitive burden placed upon them. Using 1-5 Likert scale, the judges are asked to rate the model output based on three criteria: (1) Comprehensiveness: to what extent does the model output contain the information that might be necessary for a non-expert to understand the high-level topic of the article and the significance of the research; (2) Layness: to what extent is the content of the model output comprehensible (or readable) to a non-expert, in terms of both structure and language; (3) Factuality: to what extent is the model generated lay summary factually consistent with the two other provided summaries (i.e. abstract and reference lay summary).\\(^{6}\\)"
|
| 714 |
+
},
|
| 715 |
+
{
|
| 716 |
+
"type": "text",
|
| 717 |
+
"bbox": [
|
| 718 |
+
0.117,
|
| 719 |
+
0.595,
|
| 720 |
+
0.488,
|
| 721 |
+
0.851
|
| 722 |
+
],
|
| 723 |
+
"angle": 0,
|
| 724 |
+
"content": "Table 4 presents the average ratings from our manual evaluation. We calculate the Cohan Kappa scores to measure inter-rater reliability, where we obtain values of 0.50 and 0.57 for eLife and PLOS, attesting to the reliability of our evaluation. The overall results suggest that our proposed method performs better than the BART baseline in terms of all three criteria on both datasets, attesting to their quality. In terms of layness, the higher layness scores observed in the eLife dataset compared to the PLOS dataset align with the previous analysis for the two datasets from (Goldsack et al., 2022). Moreover, compared to baseline, it is worth noting that our model outputs are judged to produce much more factually correct outputs on both datasets, suggesting our method generates fewer hallucinations."
|
| 725 |
+
},
|
| 726 |
+
{
|
| 727 |
+
"type": "table",
|
| 728 |
+
"bbox": [
|
| 729 |
+
0.534,
|
| 730 |
+
0.082,
|
| 731 |
+
0.858,
|
| 732 |
+
0.158
|
| 733 |
+
],
|
| 734 |
+
"angle": 0,
|
| 735 |
+
"content": "<table><tr><td colspan=\"2\">Model</td><td>FKGL</td><td>CLI</td><td>DCRS</td></tr><tr><td rowspan=\"2\">PLOS</td><td>ATLAS专业技术</td><td>15.11</td><td>14.21</td><td>11.64</td></tr><tr><td>ATLASlay</td><td>13.22</td><td>13.97</td><td>11.22</td></tr><tr><td rowspan=\"2\">eLife</td><td>ATLAS专业技术</td><td>14.77</td><td>14.02</td><td>11.32</td></tr><tr><td>ATLASlay</td><td>10.89</td><td>11.45</td><td>9.17</td></tr></table>"
|
| 736 |
+
},
|
| 737 |
+
{
|
| 738 |
+
"type": "table_caption",
|
| 739 |
+
"bbox": [
|
| 740 |
+
0.509,
|
| 741 |
+
0.168,
|
| 742 |
+
0.881,
|
| 743 |
+
0.196
|
| 744 |
+
],
|
| 745 |
+
"angle": 0,
|
| 746 |
+
"content": "Table 5: Readability metrics for two versions of ATLAS with highly lay and technical attribute values."
|
| 747 |
+
},
|
| 748 |
+
{
|
| 749 |
+
"type": "text",
|
| 750 |
+
"bbox": [
|
| 751 |
+
0.508,
|
| 752 |
+
0.229,
|
| 753 |
+
0.884,
|
| 754 |
+
0.389
|
| 755 |
+
],
|
| 756 |
+
"angle": 0,
|
| 757 |
+
"content": "Controllability analysis. To assess the extent to which our control attributes enable controllability over the overall layness of the text, we conduct a further analysis using two additional versions of ATLAS with highly lay or technical values. Specifically, we create \\(\\mathrm{ATLAS}_{lay}\\) and \\(\\mathrm{ATLAS}_{technical}\\) by selecting the lowest and highest attribute bins, respectively, for which there are at least 100 observations in the training data (for all attributes other than length which is kept constant)."
|
| 758 |
+
},
|
| 759 |
+
{
|
| 760 |
+
"type": "text",
|
| 761 |
+
"bbox": [
|
| 762 |
+
0.508,
|
| 763 |
+
0.394,
|
| 764 |
+
0.884,
|
| 765 |
+
0.619
|
| 766 |
+
],
|
| 767 |
+
"angle": 0,
|
| 768 |
+
"content": "We examine how these extreme attributes manifest themselves in generated summaries by calculating the average readability values obtained by the generated summaries for both datasets. We present the results of the analysis in Table 5, which show a significant divergence in the readability values obtained by each model on both datasets. Interestingly, this divergence is substantially wider for summaries generated on eLife, the dataset which is identified by Goldsack et al. (2022) as containing lay summaries that are more \"lay\" than those of PLOS, suggesting that exposure to more extreme values whilst training on this dataset may enable even greater controllability at inference time.7"
|
| 769 |
+
},
|
| 770 |
+
{
|
| 771 |
+
"type": "title",
|
| 772 |
+
"bbox": [
|
| 773 |
+
0.509,
|
| 774 |
+
0.643,
|
| 775 |
+
0.642,
|
| 776 |
+
0.658
|
| 777 |
+
],
|
| 778 |
+
"angle": 0,
|
| 779 |
+
"content": "5 Conclusion"
|
| 780 |
+
},
|
| 781 |
+
{
|
| 782 |
+
"type": "text",
|
| 783 |
+
"bbox": [
|
| 784 |
+
0.508,
|
| 785 |
+
0.677,
|
| 786 |
+
0.885,
|
| 787 |
+
0.87
|
| 788 |
+
],
|
| 789 |
+
"angle": 0,
|
| 790 |
+
"content": "In this paper, we introduce ATLAS, a model for controllable lay summarisation that employs controllable attribute tokens to influence various properties of the generated summary, enabling it to cater to users of different levels of expertise. Using combined datasets for biomedical lay summarisation we perform multiple experiments whereby we confirm the ability of our selected control attributes to discriminate between summary types, demonstrate their effectiveness for controllable lay summarisation, and further investigate their ability to effectuate desired differences during generation."
|
| 791 |
+
},
|
| 792 |
+
{
|
| 793 |
+
"type": "page_footnote",
|
| 794 |
+
"bbox": [
|
| 795 |
+
0.114,
|
| 796 |
+
0.868,
|
| 797 |
+
0.486,
|
| 798 |
+
0.892
|
| 799 |
+
],
|
| 800 |
+
"angle": 0,
|
| 801 |
+
"content": "Both judges have experience in scientific research and hold at least a bachelor's degree."
|
| 802 |
+
},
|
| 803 |
+
{
|
| 804 |
+
"type": "page_footnote",
|
| 805 |
+
"bbox": [
|
| 806 |
+
0.114,
|
| 807 |
+
0.892,
|
| 808 |
+
0.485,
|
| 809 |
+
0.919
|
| 810 |
+
],
|
| 811 |
+
"angle": 0,
|
| 812 |
+
"content": "For example, for the \"Layness\" criteria, a score of 5 is equal to \"highly lay\" and a score of 1, \"highly technical\"."
|
| 813 |
+
},
|
| 814 |
+
{
|
| 815 |
+
"type": "list",
|
| 816 |
+
"bbox": [
|
| 817 |
+
0.114,
|
| 818 |
+
0.868,
|
| 819 |
+
0.486,
|
| 820 |
+
0.919
|
| 821 |
+
],
|
| 822 |
+
"angle": 0,
|
| 823 |
+
"content": null
|
| 824 |
+
},
|
| 825 |
+
{
|
| 826 |
+
"type": "page_footnote",
|
| 827 |
+
"bbox": [
|
| 828 |
+
0.509,
|
| 829 |
+
0.892,
|
| 830 |
+
0.881,
|
| 831 |
+
0.919
|
| 832 |
+
],
|
| 833 |
+
"angle": 0,
|
| 834 |
+
"content": "Examples of summaries generated by these models are included in the Appendices."
|
| 835 |
+
},
|
| 836 |
+
{
|
| 837 |
+
"type": "page_number",
|
| 838 |
+
"bbox": [
|
| 839 |
+
0.486,
|
| 840 |
+
0.929,
|
| 841 |
+
0.514,
|
| 842 |
+
0.94
|
| 843 |
+
],
|
| 844 |
+
"angle": 0,
|
| 845 |
+
"content": "341"
|
| 846 |
+
}
|
| 847 |
+
],
|
| 848 |
+
[
|
| 849 |
+
{
|
| 850 |
+
"type": "title",
|
| 851 |
+
"bbox": [
|
| 852 |
+
0.115,
|
| 853 |
+
0.085,
|
| 854 |
+
0.221,
|
| 855 |
+
0.1
|
| 856 |
+
],
|
| 857 |
+
"angle": 0,
|
| 858 |
+
"content": "Limitations"
|
| 859 |
+
},
|
| 860 |
+
{
|
| 861 |
+
"type": "text",
|
| 862 |
+
"bbox": [
|
| 863 |
+
0.113,
|
| 864 |
+
0.11,
|
| 865 |
+
0.49,
|
| 866 |
+
0.255
|
| 867 |
+
],
|
| 868 |
+
"angle": 0,
|
| 869 |
+
"content": "Although our results demonstrate that our selected control attributes are able to effectively capture the characteristics between summary types, it is highly likely that there are additional attributes that we have not explored that could benefit performance for controllable lay summarisation. We plan to explore this in future work, in addition to experimenting with more complex methods for enabling controllability."
|
| 870 |
+
},
|
| 871 |
+
{
|
| 872 |
+
"type": "title",
|
| 873 |
+
"bbox": [
|
| 874 |
+
0.115,
|
| 875 |
+
0.281,
|
| 876 |
+
0.214,
|
| 877 |
+
0.297
|
| 878 |
+
],
|
| 879 |
+
"angle": 0,
|
| 880 |
+
"content": "References"
|
| 881 |
+
},
|
| 882 |
+
{
|
| 883 |
+
"type": "ref_text",
|
| 884 |
+
"bbox": [
|
| 885 |
+
0.116,
|
| 886 |
+
0.304,
|
| 887 |
+
0.492,
|
| 888 |
+
0.41
|
| 889 |
+
],
|
| 890 |
+
"angle": 0,
|
| 891 |
+
"content": "Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615-3620, Hong Kong, China. Association for Computational Linguistics."
|
| 892 |
+
},
|
| 893 |
+
{
|
| 894 |
+
"type": "ref_text",
|
| 895 |
+
"bbox": [
|
| 896 |
+
0.117,
|
| 897 |
+
0.419,
|
| 898 |
+
0.49,
|
| 899 |
+
0.526
|
| 900 |
+
],
|
| 901 |
+
"angle": 0,
|
| 902 |
+
"content": "Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, and Dan Weld. 2019. Pretrained language models for sequential sentence classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3693-3699, Hong Kong, China. Association for Computational Linguistics."
|
| 903 |
+
},
|
| 904 |
+
{
|
| 905 |
+
"type": "ref_text",
|
| 906 |
+
"bbox": [
|
| 907 |
+
0.117,
|
| 908 |
+
0.534,
|
| 909 |
+
0.49,
|
| 910 |
+
0.653
|
| 911 |
+
],
|
| 912 |
+
"angle": 0,
|
| 913 |
+
"content": "Nachshon Cohen, Oren Kalinsky, Yftah Ziser, and Alessandro Moschitti. 2021. Wikisum: Coherent summarization dataset for efficient human-evaluation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 2: Short Papers), Virtual Event, August 1-6, 2021, pages 212-219. Association for Computational Linguistics."
|
| 914 |
+
},
|
| 915 |
+
{
|
| 916 |
+
"type": "ref_text",
|
| 917 |
+
"bbox": [
|
| 918 |
+
0.117,
|
| 919 |
+
0.662,
|
| 920 |
+
0.49,
|
| 921 |
+
0.755
|
| 922 |
+
],
|
| 923 |
+
"angle": 0,
|
| 924 |
+
"content": "Franck Dernoncourt and Ji Young Lee. 2017. PubMed 200k RCT: a dataset for sequential sentence classification in medical abstracts. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 308-313, Taipei, Taiwan. Asian Federation of Natural Language Processing."
|
| 925 |
+
},
|
| 926 |
+
{
|
| 927 |
+
"type": "ref_text",
|
| 928 |
+
"bbox": [
|
| 929 |
+
0.117,
|
| 930 |
+
0.764,
|
| 931 |
+
0.49,
|
| 932 |
+
0.856
|
| 933 |
+
],
|
| 934 |
+
"angle": 0,
|
| 935 |
+
"content": "Yue Dong, Andrei Mircea, and Jackie Chi Kit Cheung. 2021. Discourse-Aware unsupervised summarization for long scientific documents. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1089-1102, Online. Association for Computational Linguistics."
|
| 936 |
+
},
|
| 937 |
+
{
|
| 938 |
+
"type": "ref_text",
|
| 939 |
+
"bbox": [
|
| 940 |
+
0.117,
|
| 941 |
+
0.866,
|
| 942 |
+
0.49,
|
| 943 |
+
0.918
|
| 944 |
+
],
|
| 945 |
+
"angle": 0,
|
| 946 |
+
"content": "G. Erkan and D. R. Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457-479."
|
| 947 |
+
},
|
| 948 |
+
{
|
| 949 |
+
"type": "list",
|
| 950 |
+
"bbox": [
|
| 951 |
+
0.116,
|
| 952 |
+
0.304,
|
| 953 |
+
0.492,
|
| 954 |
+
0.918
|
| 955 |
+
],
|
| 956 |
+
"angle": 0,
|
| 957 |
+
"content": null
|
| 958 |
+
},
|
| 959 |
+
{
|
| 960 |
+
"type": "ref_text",
|
| 961 |
+
"bbox": [
|
| 962 |
+
0.513,
|
| 963 |
+
0.086,
|
| 964 |
+
0.885,
|
| 965 |
+
0.192
|
| 966 |
+
],
|
| 967 |
+
"angle": 0,
|
| 968 |
+
"content": "Tomas Goldsack, Zheheng Luo, Qianqian Xie, Carolina Scarton, Matthew Shardlow, Sophia Ananiadou, and Chenghua Lin. 2023a. Overview of the biolaysumm 2023 shared task on lay summarization of biomedical research articles. In The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, pages 468-477, Toronto, Canada. Association for Computational Linguistics."
|
| 969 |
+
},
|
| 970 |
+
{
|
| 971 |
+
"type": "ref_text",
|
| 972 |
+
"bbox": [
|
| 973 |
+
0.512,
|
| 974 |
+
0.205,
|
| 975 |
+
0.885,
|
| 976 |
+
0.297
|
| 977 |
+
],
|
| 978 |
+
"angle": 0,
|
| 979 |
+
"content": "Tomas Goldsack, Zhihao Zhang, Chenghua Lin, and Carolina Scarton. 2022. Making science simple: Corpora for the lay summarisation of scientific literature. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10589-10604, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics."
|
| 980 |
+
},
|
| 981 |
+
{
|
| 982 |
+
"type": "ref_text",
|
| 983 |
+
"bbox": [
|
| 984 |
+
0.512,
|
| 985 |
+
0.31,
|
| 986 |
+
0.885,
|
| 987 |
+
0.403
|
| 988 |
+
],
|
| 989 |
+
"angle": 0,
|
| 990 |
+
"content": "Tomas Goldsack, Zhihao Zhang, Chen Tang, Carolina Scarton, and Chenghua Lin. 2023b. Enhancing biomedical lay summarisation with external knowledge graphs. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 8016-8032, Singapore. Association for Computational Linguistics."
|
| 991 |
+
},
|
| 992 |
+
{
|
| 993 |
+
"type": "ref_text",
|
| 994 |
+
"bbox": [
|
| 995 |
+
0.512,
|
| 996 |
+
0.416,
|
| 997 |
+
0.885,
|
| 998 |
+
0.482
|
| 999 |
+
],
|
| 1000 |
+
"angle": 0,
|
| 1001 |
+
"content": "Yue Guo, Wei Qiu, Yizhong Wang, and Trevor Cohen. 2021. Automated Lay Language Summarization of Biomedical Scientific Reviews. Proceedings of the AAAI Conference on Artificial Intelligence, 35(1):160-168."
|
| 1002 |
+
},
|
| 1003 |
+
{
|
| 1004 |
+
"type": "ref_text",
|
| 1005 |
+
"bbox": [
|
| 1006 |
+
0.512,
|
| 1007 |
+
0.495,
|
| 1008 |
+
0.885,
|
| 1009 |
+
0.537
|
| 1010 |
+
],
|
| 1011 |
+
"angle": 0,
|
| 1012 |
+
"content": "Stuart R F King, Emma Pewsey, and Sarah Shailes. 2017. Plain-language Summaries of Research: An inside guide to eLife digests. eLife, 6:e25410."
|
| 1013 |
+
},
|
| 1014 |
+
{
|
| 1015 |
+
"type": "ref_text",
|
| 1016 |
+
"bbox": [
|
| 1017 |
+
0.512,
|
| 1018 |
+
0.549,
|
| 1019 |
+
0.885,
|
| 1020 |
+
0.629
|
| 1021 |
+
],
|
| 1022 |
+
"angle": 0,
|
| 1023 |
+
"content": "Xiangzhe Kong, Jialiang Huang, Ziquan Tung, Jian Guan, and Minlie Huang. 2021. Stylized story generation with style-guided planning. In *Findings of the Association for Computational Linguistics: ACLIJCNLP* 2021, pages 2430-2436, Online. Association for Computational Linguistics."
|
| 1024 |
+
},
|
| 1025 |
+
{
|
| 1026 |
+
"type": "ref_text",
|
| 1027 |
+
"bbox": [
|
| 1028 |
+
0.512,
|
| 1029 |
+
0.641,
|
| 1030 |
+
0.885,
|
| 1031 |
+
0.761
|
| 1032 |
+
],
|
| 1033 |
+
"angle": 0,
|
| 1034 |
+
"content": "Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871-7880. Association for Computational Linguistics."
|
| 1035 |
+
},
|
| 1036 |
+
{
|
| 1037 |
+
"type": "ref_text",
|
| 1038 |
+
"bbox": [
|
| 1039 |
+
0.512,
|
| 1040 |
+
0.773,
|
| 1041 |
+
0.885,
|
| 1042 |
+
0.827
|
| 1043 |
+
],
|
| 1044 |
+
"angle": 0,
|
| 1045 |
+
"content": "Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics."
|
| 1046 |
+
},
|
| 1047 |
+
{
|
| 1048 |
+
"type": "ref_text",
|
| 1049 |
+
"bbox": [
|
| 1050 |
+
0.512,
|
| 1051 |
+
0.84,
|
| 1052 |
+
0.885,
|
| 1053 |
+
0.919
|
| 1054 |
+
],
|
| 1055 |
+
"angle": 0,
|
| 1056 |
+
"content": "Zeheng Luo, Qianqian Xie, and Sophia Ananiadou. 2022. Readability controllable biomedical document summarization. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4667-4680, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics."
|
| 1057 |
+
},
|
| 1058 |
+
{
|
| 1059 |
+
"type": "list",
|
| 1060 |
+
"bbox": [
|
| 1061 |
+
0.512,
|
| 1062 |
+
0.086,
|
| 1063 |
+
0.885,
|
| 1064 |
+
0.919
|
| 1065 |
+
],
|
| 1066 |
+
"angle": 0,
|
| 1067 |
+
"content": null
|
| 1068 |
+
},
|
| 1069 |
+
{
|
| 1070 |
+
"type": "page_number",
|
| 1071 |
+
"bbox": [
|
| 1072 |
+
0.486,
|
| 1073 |
+
0.929,
|
| 1074 |
+
0.517,
|
| 1075 |
+
0.941
|
| 1076 |
+
],
|
| 1077 |
+
"angle": 0,
|
| 1078 |
+
"content": "342"
|
| 1079 |
+
}
|
| 1080 |
+
],
|
| 1081 |
+
[
|
| 1082 |
+
{
|
| 1083 |
+
"type": "ref_text",
|
| 1084 |
+
"bbox": [
|
| 1085 |
+
0.117,
|
| 1086 |
+
0.086,
|
| 1087 |
+
0.49,
|
| 1088 |
+
0.192
|
| 1089 |
+
],
|
| 1090 |
+
"angle": 0,
|
| 1091 |
+
"content": "Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60, Baltimore, Maryland. Association for Computational Linguistics."
|
| 1092 |
+
},
|
| 1093 |
+
{
|
| 1094 |
+
"type": "ref_text",
|
| 1095 |
+
"bbox": [
|
| 1096 |
+
0.117,
|
| 1097 |
+
0.202,
|
| 1098 |
+
0.49,
|
| 1099 |
+
0.281
|
| 1100 |
+
],
|
| 1101 |
+
"angle": 0,
|
| 1102 |
+
"content": "Louis Martin, Éric de la Clergerie, Benoit Sagot, and Antoine Bordes. 2020. Controllable sentence simplification. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4689-4698, Marseille, France. European Language Resources Association."
|
| 1103 |
+
},
|
| 1104 |
+
{
|
| 1105 |
+
"type": "ref_text",
|
| 1106 |
+
"bbox": [
|
| 1107 |
+
0.117,
|
| 1108 |
+
0.292,
|
| 1109 |
+
0.49,
|
| 1110 |
+
0.358
|
| 1111 |
+
],
|
| 1112 |
+
"angle": 0,
|
| 1113 |
+
"content": "Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404-411, Barcelona, Spain. Association for Computational Linguistics."
|
| 1114 |
+
},
|
| 1115 |
+
{
|
| 1116 |
+
"type": "ref_text",
|
| 1117 |
+
"bbox": [
|
| 1118 |
+
0.117,
|
| 1119 |
+
0.368,
|
| 1120 |
+
0.49,
|
| 1121 |
+
0.447
|
| 1122 |
+
],
|
| 1123 |
+
"angle": 0,
|
| 1124 |
+
"content": "Kim Cheng Sheang and Horacio Saggion. 2021. Controllable sentence simplification with a unified text-to-text transfer transformer. In Proceedings of the 14th International Conference on Natural Language Generation, pages 341-352, Aberdeen, Scotland, UK. Association for Computational Linguistics."
|
| 1125 |
+
},
|
| 1126 |
+
{
|
| 1127 |
+
"type": "ref_text",
|
| 1128 |
+
"bbox": [
|
| 1129 |
+
0.117,
|
| 1130 |
+
0.457,
|
| 1131 |
+
0.488,
|
| 1132 |
+
0.511
|
| 1133 |
+
],
|
| 1134 |
+
"angle": 0,
|
| 1135 |
+
"content": "Chen Tang, Tyler Loakman, and Chenghua Lin. 2024. A cross-attention augmented model for event-triggered context-aware story generation. Computer Speech & Language, page 101662."
|
| 1136 |
+
},
|
| 1137 |
+
{
|
| 1138 |
+
"type": "ref_text",
|
| 1139 |
+
"bbox": [
|
| 1140 |
+
0.117,
|
| 1141 |
+
0.52,
|
| 1142 |
+
0.49,
|
| 1143 |
+
0.613
|
| 1144 |
+
],
|
| 1145 |
+
"angle": 0,
|
| 1146 |
+
"content": "Oisin Turbitt, Robert Bevan, and Mouhamad Aboshokor. 2023. MDC at BioLaySumm task 1: Evaluating GPT models for biomedical lay summarization. In The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, pages 611-619, Toronto, Canada. Association for Computational Linguistics."
|
| 1147 |
+
},
|
| 1148 |
+
{
|
| 1149 |
+
"type": "ref_text",
|
| 1150 |
+
"bbox": [
|
| 1151 |
+
0.117,
|
| 1152 |
+
0.623,
|
| 1153 |
+
0.49,
|
| 1154 |
+
0.702
|
| 1155 |
+
],
|
| 1156 |
+
"angle": 0,
|
| 1157 |
+
"content": "Liqiang Xiao, Lu Wang, Hao He, and Yaohui Jin. 2020. Modeling content importance for summarization with pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3606-3611, Online. Association for Computational Linguistics."
|
| 1158 |
+
},
|
| 1159 |
+
{
|
| 1160 |
+
"type": "ref_text",
|
| 1161 |
+
"bbox": [
|
| 1162 |
+
0.117,
|
| 1163 |
+
0.712,
|
| 1164 |
+
0.49,
|
| 1165 |
+
0.752
|
| 1166 |
+
],
|
| 1167 |
+
"angle": 0,
|
| 1168 |
+
"content": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT."
|
| 1169 |
+
},
|
| 1170 |
+
{
|
| 1171 |
+
"type": "title",
|
| 1172 |
+
"bbox": [
|
| 1173 |
+
0.117,
|
| 1174 |
+
0.765,
|
| 1175 |
+
0.237,
|
| 1176 |
+
0.782
|
| 1177 |
+
],
|
| 1178 |
+
"angle": 0,
|
| 1179 |
+
"content": "A Appendix"
|
| 1180 |
+
},
|
| 1181 |
+
{
|
| 1182 |
+
"type": "text",
|
| 1183 |
+
"bbox": [
|
| 1184 |
+
0.117,
|
| 1185 |
+
0.791,
|
| 1186 |
+
0.49,
|
| 1187 |
+
0.919
|
| 1188 |
+
],
|
| 1189 |
+
"angle": 0,
|
| 1190 |
+
"content": "ChatGPT Baseline Prompts The prompts provided to ChatGPT for each summary type are given in Table 6. To ensure a fair comparison, we control the length of the GPT baselines using the generation arguments, (e.g., max_new_tokens). Note that we differentiate between the lay summary types (namely, PLOS and eLife) based on distinctions made by Goldsack et al. (2022), who recognise"
|
| 1191 |
+
},
|
| 1192 |
+
{
|
| 1193 |
+
"type": "text",
|
| 1194 |
+
"bbox": [
|
| 1195 |
+
0.512,
|
| 1196 |
+
0.085,
|
| 1197 |
+
0.885,
|
| 1198 |
+
0.133
|
| 1199 |
+
],
|
| 1200 |
+
"angle": 0,
|
| 1201 |
+
"content": "PLOS' summaries as the less \"lay\" of the two, making them better suited to an audience with some technical knowledge."
|
| 1202 |
+
},
|
| 1203 |
+
{
|
| 1204 |
+
"type": "page_number",
|
| 1205 |
+
"bbox": [
|
| 1206 |
+
0.486,
|
| 1207 |
+
0.929,
|
| 1208 |
+
0.516,
|
| 1209 |
+
0.941
|
| 1210 |
+
],
|
| 1211 |
+
"angle": 0,
|
| 1212 |
+
"content": "343"
|
| 1213 |
+
}
|
| 1214 |
+
],
|
| 1215 |
+
[
|
| 1216 |
+
{
|
| 1217 |
+
"type": "table",
|
| 1218 |
+
"bbox": [
|
| 1219 |
+
0.134,
|
| 1220 |
+
0.135,
|
| 1221 |
+
0.864,
|
| 1222 |
+
0.253
|
| 1223 |
+
],
|
| 1224 |
+
"angle": 0,
|
| 1225 |
+
"content": "<table><tr><td>Summary Type</td><td>Prompt</td></tr><tr><td>Abstract</td><td>Summarize the following article for an expert audience that is familiar with the technical aspects of the content</td></tr><tr><td>PLOS lay summary</td><td>Summarize the following article for a non-expert audience that has some familiarity with the technical aspects of the content</td></tr><tr><td>eLife lay summary</td><td>Summarize the following article for a non-expert audience that has no familiarity with the technical aspects of the content</td></tr></table>"
|
| 1226 |
+
},
|
| 1227 |
+
{
|
| 1228 |
+
"type": "table_caption",
|
| 1229 |
+
"bbox": [
|
| 1230 |
+
0.247,
|
| 1231 |
+
0.262,
|
| 1232 |
+
0.747,
|
| 1233 |
+
0.277
|
| 1234 |
+
],
|
| 1235 |
+
"angle": 0,
|
| 1236 |
+
"content": "Table 6: Prompts used for the GPT3.5-zs baseline for each summary type."
|
| 1237 |
+
},
|
| 1238 |
+
{
|
| 1239 |
+
"type": "title",
|
| 1240 |
+
"bbox": [
|
| 1241 |
+
0.482,
|
| 1242 |
+
0.4,
|
| 1243 |
+
0.528,
|
| 1244 |
+
0.414
|
| 1245 |
+
],
|
| 1246 |
+
"angle": 0,
|
| 1247 |
+
"content": "eLife"
|
| 1248 |
+
},
|
| 1249 |
+
{
|
| 1250 |
+
"type": "title",
|
| 1251 |
+
"bbox": [
|
| 1252 |
+
0.122,
|
| 1253 |
+
0.421,
|
| 1254 |
+
0.234,
|
| 1255 |
+
0.435
|
| 1256 |
+
],
|
| 1257 |
+
"angle": 0,
|
| 1258 |
+
"content": "ATLAS<sup>technical</sup>"
|
| 1259 |
+
},
|
| 1260 |
+
{
|
| 1261 |
+
"type": "text",
|
| 1262 |
+
"bbox": [
|
| 1263 |
+
0.12,
|
| 1264 |
+
0.435,
|
| 1265 |
+
0.891,
|
| 1266 |
+
0.592
|
| 1267 |
+
],
|
| 1268 |
+
"angle": 0,
|
| 1269 |
+
"content": "The effects of muscle fatigue on motor learning under fatigue are poorly understood. Here, we investigated the effect of fatigue on learning under a sequential pinch force task. Irrespective of whether the observed fatigue effects are domain-specific or present in another task that is cognitive demanding but requires minimal force control, we found that participants had impaired skill learning in both the fatigued and unfatigued effector. We replicated the findings of experiment 1 and found that disruption of rTMS to the motor cortex (Cantarero et al., 2013a) alleviated the adverse effects of fatigue. Cortical excitability was similar to that observed in the untrained effector, but not in the fatigued. Altogether, our findings suggest that motor fatigue has a domain-dependent lasting effect on skill learning. Future studies should focus on understanding the role of motor cortex excitability in the acquisition of motor skills under fatigue, as well as the potential role for maladaptive memory formation under fatigued conditions. Cortical and motor cortices should be included in training and rehabilitation regimens geared to improve motor skill acquisition."
|
| 1270 |
+
},
|
| 1271 |
+
{
|
| 1272 |
+
"type": "title",
|
| 1273 |
+
"bbox": [
|
| 1274 |
+
0.122,
|
| 1275 |
+
0.608,
|
| 1276 |
+
0.197,
|
| 1277 |
+
0.622
|
| 1278 |
+
],
|
| 1279 |
+
"angle": 0,
|
| 1280 |
+
"content": "ATLASlay"
|
| 1281 |
+
},
|
| 1282 |
+
{
|
| 1283 |
+
"type": "text",
|
| 1284 |
+
"bbox": [
|
| 1285 |
+
0.12,
|
| 1286 |
+
0.624,
|
| 1287 |
+
0.892,
|
| 1288 |
+
0.816
|
| 1289 |
+
],
|
| 1290 |
+
"angle": 0,
|
| 1291 |
+
"content": "Muscle fatigue is a neuromuscular phenomenon that can impair performance over time. People who experience fatigue tend to be less able to learn a new motor skill than people who experience no fatigue. However, it is not clear how fatigue affects the ability of people to learn new motor skills. One way to study the effects of fatigue is to study how people learn a motor skill under fatigue conditions. One of the main challenges in studying motor learning under fatigue is the so-termed \"performance-learning\" distinction In this study, participants were asked to practice a motor task over two days and then had to catch up to the skill performance level of the non-fatigued group. Unexpectedly, participants who were only fatigued at the end of the training were less likely to learn the motor skill. This suggests that fatigue has a domain-specific lasting effect on the learning of a skill. ernas et al. now show that people who are unable to recover the motor task under fatigue are more likely to be unable to learn their motor skill when they are not fatigued. The experiments show that when people are trained to perform the task, their ability to recover from fatigue is severely impaired. This effect is due to a change in the strength of the motor cortex, a region of the brain that is involved in learning and memory."
|
| 1292 |
+
},
|
| 1293 |
+
{
|
| 1294 |
+
"type": "text",
|
| 1295 |
+
"bbox": [
|
| 1296 |
+
0.113,
|
| 1297 |
+
0.833,
|
| 1298 |
+
0.885,
|
| 1299 |
+
0.863
|
| 1300 |
+
],
|
| 1301 |
+
"angle": 0,
|
| 1302 |
+
"content": "Figure 2: An case study from the eLife test set comparing summaries generated under highly lay and technical attribute values (with the length attribute being kept constant)."
|
| 1303 |
+
},
|
| 1304 |
+
{
|
| 1305 |
+
"type": "page_number",
|
| 1306 |
+
"bbox": [
|
| 1307 |
+
0.486,
|
| 1308 |
+
0.929,
|
| 1309 |
+
0.516,
|
| 1310 |
+
0.941
|
| 1311 |
+
],
|
| 1312 |
+
"angle": 0,
|
| 1313 |
+
"content": "344"
|
| 1314 |
+
}
|
| 1315 |
+
],
|
| 1316 |
+
[
|
| 1317 |
+
{
|
| 1318 |
+
"type": "title",
|
| 1319 |
+
"bbox": [
|
| 1320 |
+
0.478,
|
| 1321 |
+
0.287,
|
| 1322 |
+
0.531,
|
| 1323 |
+
0.302
|
| 1324 |
+
],
|
| 1325 |
+
"angle": 0,
|
| 1326 |
+
"content": "PLOS"
|
| 1327 |
+
},
|
| 1328 |
+
{
|
| 1329 |
+
"type": "title",
|
| 1330 |
+
"bbox": [
|
| 1331 |
+
0.122,
|
| 1332 |
+
0.309,
|
| 1333 |
+
0.234,
|
| 1334 |
+
0.323
|
| 1335 |
+
],
|
| 1336 |
+
"angle": 0,
|
| 1337 |
+
"content": "ATLAS<sup>technical</sup>"
|
| 1338 |
+
},
|
| 1339 |
+
{
|
| 1340 |
+
"type": "text",
|
| 1341 |
+
"bbox": [
|
| 1342 |
+
0.12,
|
| 1343 |
+
0.324,
|
| 1344 |
+
0.891,
|
| 1345 |
+
0.494
|
| 1346 |
+
],
|
| 1347 |
+
"angle": 0,
|
| 1348 |
+
"content": "In this paper, we explore the conditions under which associations between antigenic, metabolic and virulence properties of strains within pneumococcal populations and predict how these may shift under vaccination. In this work, we use a conceptual framework to investigate the dynamics of associations between serotype, serotype and serotype-specific immunity in pneumococcus populations. We find that antigenic type (AT) is the principal determinant of non-capsular virulence factors (VF), whereas MT is the major determinant. AT and MT are highly non-random; MT and AT are co-evolved and co-expressed. ET and CT are also found to be highly correlated, suggesting that they have synergistically adapted to a particular metabolic niche. IT and LD are found to have similar patterns of linkage disequilibrium (LD) than randomly selected genes not associated with metabolic/transport processes; AT is associated with a higher frequency of LD LD than MT LD; CT LD=0.013). CT is the first mathematical model to explain the non-overlapping association between serotypic and serotypes. TCT BC LD is a useful tool for predicting the potential impact of vaccination on the prevalence of serotypes associated with non-vaccine serotypes and for predicting how they may change under vaccination and vaccine serotype replacement."
|
| 1349 |
+
},
|
| 1350 |
+
{
|
| 1351 |
+
"type": "title",
|
| 1352 |
+
"bbox": [
|
| 1353 |
+
0.122,
|
| 1354 |
+
0.496,
|
| 1355 |
+
0.197,
|
| 1356 |
+
0.51
|
| 1357 |
+
],
|
| 1358 |
+
"angle": 0,
|
| 1359 |
+
"content": "ATLASlay"
|
| 1360 |
+
},
|
| 1361 |
+
{
|
| 1362 |
+
"type": "text",
|
| 1363 |
+
"bbox": [
|
| 1364 |
+
0.12,
|
| 1365 |
+
0.512,
|
| 1366 |
+
0.892,
|
| 1367 |
+
0.672
|
| 1368 |
+
],
|
| 1369 |
+
"angle": 0,
|
| 1370 |
+
"content": "Pneumococcal populations are highly diverse in non-antigenic genes and are commonly classified into sequence types (ST) by Multi Locus Sequence Typing (MLST) of seven metabolic housekeeping genes. STs have been documented to occur regularly throughout the past 7 decades, yet many studies (eg) show an intriguing pattern of largely non-overlapping associations between serotype and ST. It has been noted that many STs that were previously associated with vaccine serotypes now occur in association with non-vaccine serotypes. It has been proposed that a combination of immune-mediated interference between identical antigenic types and direct competition between identical metabolic types can generate non-overlapping association between antigenic and STs in populations of the bacterial pathogen Neisseria meningitidis. In this paper, we explore whether pneumococcal population structure, can be explained within a similar conceptual framework. in which pathogen strains are profiled by antigenic type, AT, metabolic type (MT) and additional non-capsular virulence factors (VF)."
|
| 1371 |
+
},
|
| 1372 |
+
{
|
| 1373 |
+
"type": "image_caption",
|
| 1374 |
+
"bbox": [
|
| 1375 |
+
0.114,
|
| 1376 |
+
0.689,
|
| 1377 |
+
0.885,
|
| 1378 |
+
0.719
|
| 1379 |
+
],
|
| 1380 |
+
"angle": 0,
|
| 1381 |
+
"content": "Figure 3: An case study from the eLife test set comparing summaries generated under highly lay and technical attribute values (with the length attribute being kept constant)."
|
| 1382 |
+
},
|
| 1383 |
+
{
|
| 1384 |
+
"type": "page_number",
|
| 1385 |
+
"bbox": [
|
| 1386 |
+
0.486,
|
| 1387 |
+
0.929,
|
| 1388 |
+
0.516,
|
| 1389 |
+
0.941
|
| 1390 |
+
],
|
| 1391 |
+
"angle": 0,
|
| 1392 |
+
"content": "345"
|
| 1393 |
+
}
|
| 1394 |
+
]
|
| 1395 |
+
]
|
2024/ATLAS_ Improving Lay Summarisation with Attribute-based Control/dabae923-8653-40d5-a808-ef509db0f2cc_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:72527c3ae1eaf739b1dfb3b43c9c665bf34ed761329234ca5fc2a544965455ad
|
| 3 |
+
size 344821
|
2024/ATLAS_ Improving Lay Summarisation with Attribute-based Control/full.md
ADDED
|
@@ -0,0 +1,196 @@
|
| 1 |
+
# ATLAS: Improving Lay Summarisation with Attribute-based Control
|
| 2 |
+
|
| 3 |
+
Zhihao Zhang $^{1}$ , Tomas Goldsack $^{2}$ , Carolina Scarton $^{2}$ , Chenghua Lin $^{3*}$ $^{1}$ College of Economics and Management, Beijing University of Technology, China, $^{2}$ Department of Computer Science, University of Sheffield, UK
|
| 4 |
+
$^{3}$ Department of Computer Science, The University of Manchester, UK
|
| 5 |
+
|
| 6 |
+
zhhzhang@bjut.edu.cn {tgoldsack1, c.scarton}@sheffield.ac.uk chenghua.lin@manchester.ac.uk
|
| 7 |
+
|
| 8 |
+
# Abstract
|
| 9 |
+
|
| 10 |
+
Automatic scientific lay summarisation aims to produce summaries of scientific articles that are comprehensible to non-expert audiences. However, previous work assumes a one-size-fits-all approach, where the content and style of the produced summary are entirely dependent on the data used to train the model. In practice, audiences with different goals and levels of expertise will have specific needs, impacting what content should appear in a lay summary and how it should be presented. Aiming to address this disparity, we propose ATLAS, a novel abstractive summarisation approach that can control various properties that contribute to the overall "layness" of the generated summary using targeted control attributes. We evaluate ATLAS on a combination of biomedical lay summarisation datasets, where it outperforms state-of-the-art baselines using both automatic and human evaluations. Additional analyses provided on the discriminatory power and emergent influence of our selected controllable attributes further attest to the effectiveness of our approach.
|
| 11 |
+
|
| 12 |
+
# 1 Introduction
|
| 13 |
+
|
| 14 |
+
Lay summarisation is defined as producing a summary of a scientific article that is comprehensible to non-experts (King et al., 2017). Recent work has shown that, when compared to technical abstracts, lay summaries typically are more readable (lexically and syntactically), more abstractive, and contain more background information, enabling a non-technical reader to better understand their contents (Luo et al., 2022; Cohen et al., 2021; Goldsack et al., 2023b). However, the extent to which these attributes are required within a lay summary depends largely on the specific needs of the reader. For example, a scientist from a related field will require less background information to understand an article's contents than an entirely non-technical
|
| 15 |
+
|
| 16 |
+
reader, but they might still require domain-specific jargon to be simplified or explained. Despite its obvious benefits, to our knowledge, no work has yet explored how we can enable such fine-grained control over comprehensibility-related aspects for lay summary generation.
|
| 17 |
+
|
| 18 |
+
In this paper, we propose ATLAS (ATtribute-controlled LAy Summarization), a novel scientific summarisation approach that aims to control four attributes targeting distinct properties contributing to the overall "layness" of the generated summary, thus allowing it to cater to the specific needs of different audiences. Although recent attempts at text simplification and story generation have had success influencing the style (Martin et al., 2020; Kong et al., 2021; Sheang and Saggion, 2021) and content (Kong et al., 2021; Tang et al., 2024) of generated text using fine-grained controllable attributes, no work to our knowledge has explored this for scientific summarisation. Luo et al. (2022) recently addressed the task of readability-controlled scientific summarisation, however, this is only done at a binary level, training a model to produce either a technical or non-technical summary based on a single control token.
|
| 19 |
+
|
| 20 |
+
Our approach innovates by enabling a greater degree of controllability through the flexible handling of multiple attributes, allowing it to produce more diverse summaries and better address the specific needs of different audiences. Our results show that ATLAS outperforms state-of-the-art baselines in both automatic and human evaluations across three summary types with varying levels of technicality. Additional analyses confirm that attribute control positively influences performance, and suggest the selected control attributes are able to effectively capture the difference between technical and nontechnical summaries.
|
| 21 |
+
|
| 22 |
+
# 2 Methodology
|
| 23 |
+
|
| 24 |
+
As discussed in §1, ATLAS aims to control four targeted attributes. We use BART-base as the base model for ATLAS as it represents the state-of-the-art benchmark in previous lay summarisation works (Guo et al., 2021; Goldsack et al., 2022).
|
| 25 |
+
|
| 26 |
+
Formally, each document $x = (x_{1}, x_{2}, \ldots, x_{n})$ of length $n$ , where $x_{i}$ is the $i$ -th token, is prepended with a control token sequence $l$ such that $x = (l, x_{1}, x_{2}, \ldots, x_{n})$ . $l$ consists of our four selected control tokens, each of which targets distinct characteristics of the output summary that contribute to its overall comprehensibility. We describe each aspect below:
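To make the control-token mechanism concrete, the following is a minimal sketch of how discretised attribute tokens could be prepended to an article before tokenisation; the helper names and the token spellings (e.g., `<len_3>`) are illustrative assumptions rather than the authors' released code.

```python
# Hypothetical sketch: prepend one control token per attribute to the source text.
# Token spellings (<len_3>, <read_7>, ...) are assumed for illustration only.

CONTROL_ATTRIBUTES = ["len", "read", "bg", "cwe"]  # L, R, BG, CWE

def build_control_prefix(bin_ids: dict[str, int]) -> str:
    """Map each attribute's bin index (0-9) to a control token string."""
    return " ".join(f"<{attr}_{bin_ids[attr]}>" for attr in CONTROL_ATTRIBUTES)

def prepend_controls(article: str, bin_ids: dict[str, int]) -> str:
    """Form the model input x = (l, x_1, ..., x_n) described above."""
    return f"{build_control_prefix(bin_ids)} {article}"

# Example: a long, very readable, background-heavy, low-entropy target summary.
model_input = prepend_controls("Pneumococcal populations are highly diverse ...",
                               {"len": 7, "read": 2, "bg": 8, "cwe": 1})
print(model_input[:80])
```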
|
| 27 |
+
|
| 28 |
+
Length (L) The length of the output summary in characters. A more lay audience may require a longer summary to aid comprehension.
|
| 29 |
+
|
| 30 |
+
Readability (R) How easy it is to read the text. This is measured using the Flesch-Kincaid Grade Level (FKGL) metric, which estimates the reading grade level (US) required to understand the generated text based on the total number of sentences, words, and syllables present within it.
|
| 31 |
+
|
| 32 |
+
Background information (BG) The percentage of sentences classified as containing primarily background information. Intuitively, a more lay audience will require greater levels of background information to contextualise an article.
|
| 33 |
+
|
| 34 |
+
Content word entropy (CWE) The average entropy of content words. We hypothesise that jargon terms are likely to possess higher entropy values, thus lower average CWE is likely to be a property of more lay text. Since jargon terms are predominantly nouns, we extract noun phrases as content words using the CoreNLP library (Manning et al., 2014). We then follow Xiao et al. (2020) in calculating the entropy $I(x_{i})$ of a given token $x_{i}$ as the negative logarithm of its generation probability $P(x_{i})$ , which is directly extracted from a pre-trained language model:
|
| 35 |
+
|
| 36 |
+
$$
|
| 37 |
+
I \left(x _ {i}\right) = - \log P \left(x _ {i}\right) \tag {1}
|
| 38 |
+
$$
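As a rough illustration of Equation (1), the sketch below estimates $-\log P(x_{i})$ for a set of content-word positions with a masked language model and averages the values. The paper names SciBERT for this step; the masking strategy, tokenisation handling, and averaging shown here are simplifying assumptions.

```python
# Illustrative sketch of Eq. (1): average content-word entropy via a masked LM.
# The real pipeline extracts noun phrases with CoreNLP; here we simply pass in a
# pre-selected list of content-word token positions to keep the example short.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
mlm = AutoModelForMaskedLM.from_pretrained("allenai/scibert_scivocab_uncased")
mlm.eval()

def avg_content_word_entropy(text: str, content_positions: list[int]) -> float:
    input_ids = tok(text, return_tensors="pt")["input_ids"][0]
    entropies = []
    for pos in content_positions:                  # positions of content-word tokens
        masked = input_ids.clone()
        true_id = masked[pos].item()
        masked[pos] = tok.mask_token_id
        with torch.no_grad():
            logits = mlm(input_ids=masked.unsqueeze(0)).logits[0, pos]
        prob = torch.softmax(logits, dim=-1)[true_id]
        entropies.append(-torch.log(prob).item())  # I(x_i) = -log P(x_i)
    return sum(entropies) / max(len(entropies), 1)
```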
|
| 39 |
+
|
| 40 |
+
During model training, true attribute values (as calculated on reference summaries) are used, allowing the model to learn to associate attribute values with summary properties. For all attributes, values are discretized into 10 fixed-width bins depending on their respective range in the train split (from
|
| 41 |
+
|
| 42 |
+
minimum to maximum observed value), resulting in 10 unique control tokens for each attribute which are added to the vocabulary. For each attribute at test time, we use the most common bin value observed for reference summaries of the training set as attribute values.
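A minimal sketch of the binning step, under the stated 10 fixed-width bins over the training-split range, might look as follows; the token-naming scheme and the commented tokenizer calls are assumptions about how such tokens could be registered, not a description of the released implementation.

```python
# Hypothetical sketch: discretise an attribute into 10 fixed-width bins computed
# from the training-split range, then register the resulting control tokens.
import numpy as np

def make_bins(train_values: list[float], n_bins: int = 10) -> np.ndarray:
    lo, hi = min(train_values), max(train_values)
    return np.linspace(lo, hi, n_bins + 1)      # 11 edges -> 10 bins

def to_bin(value: float, edges: np.ndarray) -> int:
    # np.digitize against the interior edges gives an index in 0..n_bins-1,
    # clipping values outside the training range into the extreme bins.
    return int(np.digitize(value, edges[1:-1]))

# Registering 10 tokens per attribute with a Hugging Face tokenizer/model
# (assumed API usage, shown for completeness):
# new_tokens = [f"<{attr}_{i}>" for attr in ("len", "read", "bg", "cwe") for i in range(10)]
# tokenizer.add_tokens(new_tokens)
# model.resize_token_embeddings(len(tokenizer))
```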
|
| 43 |
+
|
| 44 |
+
# 3 Experimental Setup
|
| 45 |
+
|
| 46 |
+
Data. We experiment on the biomedical lay summarisation datasets introduced in Goldsack et al. (2022), eLife (4.8k articles) and PLOS (27.5k articles), for which target lay summaries have been shown to contain different levels of "layness". Specifically, eLife's lay summaries have been characterized as longer, more readable, and more abstractive than those of PLOS, as well as being empirically observed to be suitable for a more lay audience. We, therefore, combine both of these datasets, allowing us to expose ATLAS to a greater variety of attribute values during training. For each article in the combined dataset, we train our ATLAS to produce both the technical abstract and lay summary, using our control attributes to differentiate between them.
|
| 47 |
+
|
| 48 |
+
Evaluation. We employ several automatic metrics to evaluate the performance of ATLAS. In line with common summarisation practice, we calculate ROUGE-1,2, and L variants (Lin, 2004) and BERTScore (Zhang et al., 2019). We also measure Dale-Chall Readability Score, a metric that estimates US grade level based on the frequency of common words.
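For reference, these metrics can be computed with standard packages roughly as follows; the choice of `rouge-score` and `bert-score` is our assumption about tooling (the paper only names textstat explicitly, for the readability scores).

```python
# Illustrative metric computation for one (reference, generated) summary pair.
from rouge_score import rouge_scorer          # pip install rouge-score
from bert_score import score as bert_score    # pip install bert-score
import textstat                               # pip install textstat

def evaluate_summary(reference: str, generated: str) -> dict:
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    rouge = {k: v.fmeasure for k, v in scorer.score(reference, generated).items()}
    _, _, bs_f1 = bert_score([generated], [reference], lang="en")
    return {
        **rouge,
        "bertscore_f1": bs_f1.mean().item(),
        "dcrs": textstat.dale_chall_readability_score(generated),  # lower = more readable
        "fkgl": textstat.flesch_kincaid_grade(generated),          # lower = more readable
    }
```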
|
| 49 |
+
|
| 50 |
+
Baselines. To enable fair comparison, we rerun many of the baseline approaches used by Goldsack et al. (2022) (which have the abstract included in the input) on the combined datasets. Specifically, we rerun the Lead-3, Lead-K, and oracle heuristic baselines; the TextRank (Mihalcea and Tarau, 2004), LexRank (Erkan and Radev, 2004), and HipoRank (Dong et al., 2021) unsupervised models; and the BART and $\mathrm{BART}_{\mathrm{Scaffold}}$ supervised models. Here, we use the transformer-based BART base model (Lewis et al., 2020), which we fine-tune on our own datasets. $\mathrm{BART}_{\mathrm{Scaffold}}$ is a recreation of the model from Goldsack et al. (2022) that is trained using a binary control token (<abs> or <lay>) to produce either an abstract or lay summary for an article.
|
| 51 |
+
|
| 52 |
+

|
| 53 |
+
Figure 1: Visualisation of the density distributions of controllable attribute values for each summary type in the combined train split.
|
| 54 |
+
|
| 55 |
+

|
| 56 |
+
|
| 57 |
+

|
| 58 |
+
|
| 59 |
+

|
| 60 |
+
|
| 61 |
+
This model is equivalent to that proposed by Luo et al. (2022), the only previous work on controllable lay summarisation.$^{2}$
|
| 62 |
+
|
| 63 |
+
Finally, we include two baselines based on ChatGPT (3.5-turbo), so as to compare against an accessible and widely used method of controlling text generation (i.e., prompt engineering). Our first GPT baseline (GPT3.5-zs) uses the following zero-shot prompts: (i) "Summarize the following article for an expert audience that is familiar with the technical aspects of the content" to generate technical abstracts; (ii) "Summarize the following article for a non-expert audience that has some familiarity with the technical aspects of the content" to generate PLOS lay summaries, and (iii) "Summarize the following article for a non-expert audience that has no familiarity with the technical aspects of the content" to generate eLife lay summaries. Our second GPT baseline (GPT3.5-mdc) replicates the method of Turbitt et al. (2023), the best-performing team of the recent BioLaySumm shared task (Goldsack et al., 2023a). Based on in-context learning, this method dynamically selects the maximum number of input-output examples that fit in the context window (separated by the simple prompt "Explanation:") to generate lay summaries based on only the article abstract.
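The GPT3.5-zs baseline could be reproduced along the following lines; the prompt strings are taken from the text above (see also Table 6 in the appendix), while the client usage and the `max_tokens` length control are assumptions about the exact setup.

```python
# Sketch of the GPT3.5-zs baseline: one fixed zero-shot prompt per summary type.
from openai import OpenAI  # pip install openai

PROMPTS = {
    "abstract": "Summarize the following article for an expert audience that is "
                "familiar with the technical aspects of the content",
    "plos_lay": "Summarize the following article for a non-expert audience that has "
                "some familiarity with the technical aspects of the content",
    "elife_lay": "Summarize the following article for a non-expert audience that has "
                 "no familiarity with the technical aspects of the content",
}

def gpt_zero_shot_summary(article: str, summary_type: str, max_tokens: int = 512) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"{PROMPTS[summary_type]}:\n\n{article}"}],
        max_tokens=max_tokens,  # length controlled via generation arguments, as stated
    )
    return response.choices[0].message.content
```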
|
| 64 |
+
|
| 65 |
+
Implementation Details. As mentioned in §2, we employ BART-base as our base model. We train our ATLAS for a maximum of 5 epochs on a GeForce GTX-1080Ti GPU, retaining the checkpoint with the best average ROUGE-1/2/L score on the validation set. We set the batch size to 1 and keep the $\alpha$ scale factor (§2) at the default value of 0.2 from Kong et al. (2021).
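A bare-bones fine-tuning skeleton consistent with these settings is sketched below; checkpoint selection by average ROUGE and the $\alpha$ scale factor from Kong et al. (2021) are omitted, and only the hyperparameter values stated in the text are filled in.

```python
# Minimal fine-tuning skeleton for the BART-base backbone (details simplified).
from transformers import (BartForConditionalGeneration, BartTokenizer,
                          Seq2SeqTrainingArguments, Seq2SeqTrainer)

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
# Control tokens for the four attributes would be added to the vocabulary here
# (see the binning sketch above) before resizing the embedding matrix.

args = Seq2SeqTrainingArguments(
    output_dir="atlas-checkpoints",
    num_train_epochs=5,              # maximum of 5 epochs, as stated above
    per_device_train_batch_size=1,   # batch size 1, as stated above
    evaluation_strategy="epoch",
    save_strategy="epoch",
)
# trainer = Seq2SeqTrainer(model=model, args=args,
#                          train_dataset=train_data, eval_dataset=val_data)
# trainer.train()
```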
|
| 66 |
+
|
| 67 |
+
For calculating control attributes, we use SciBERT (Beltagy et al., 2019) for entropy calculation.
|
| 68 |
+
|
| 69 |
+
<table><tr><td>Summary type</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>Abstract</td><td>0.69</td><td>0.75</td><td>0.72</td></tr><tr><td>eLife-Lay</td><td>0.71</td><td>0.71</td><td>0.71</td></tr><tr><td>PLOS-Lay</td><td>0.73</td><td>0.66</td><td>0.71</td></tr></table>
|
| 70 |
+
|
| 71 |
+
Table 1: Classifier performance for 3-way classification between summary types on the combined test set.
|
| 72 |
+
|
| 73 |
+
We employ a BERT-based sequential classifier (Cohan et al., 2019) trained on the PubMed RCT dataset (Dernoncourt and Lee, 2017) for background sentence classification (as described in Goldsack et al. (2022)). We compute the FKGL readability score using the textstat package.
|
| 74 |
+
|
| 75 |
+
# 4 Experimental Results
|
| 76 |
+
|
| 77 |
+
Discriminatory ability of control attributes. To validate the ability of our controllable attributes to distinguish between different summary types, we plot the distribution of attribute values for each type in Figure 1. The figure suggests that, in combination, the attributes are able to capture characteristic differences between summary types, as instances in which two summary types share a similar distribution for one attribute can typically be separated by other attributes.<sup>3</sup>
|
| 78 |
+
|
| 79 |
+
To further evidence this, we use the training set to train a simple logistic regression classifier, using only the attribute values of the reference summaries as features, to discriminate between reference summary types. The test set results in Table 1 show that all summary types are classified with an F1-score above 0.7, attesting to the discriminatory power of our control attributes.
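This attribute-only classifier is simple to reproduce; the sketch below assumes the four attribute values have already been computed for each reference summary and uses scikit-learn defaults, which may differ from the exact configuration behind Table 1.

```python
# Sketch: 3-way summary-type classification from the four attribute values alone.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# X_*: arrays of shape (n_summaries, 4) -> [length, FKGL, %background, content-word entropy]
# y_*: labels in {"abstract", "plos_lay", "elife_lay"}
def attribute_classifier(X_train, y_train, X_test, y_test):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test), digits=2))
    return clf
```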
|
| 80 |
+
|
| 81 |
+
Summarisation performance. Table 2 presents the performance of ATLAS and baseline models using automatic metrics on the test sets of PLOS and eLife.
|
| 82 |
+
|
| 83 |
+
<table><tr><td rowspan="2" colspan="2">Model</td><td colspan="6">Abstract</td><td colspan="6">Lay summary - PLOS</td><td colspan="6">Lay summary - eLife</td></tr><tr><td>R-1</td><td>R-2</td><td>R-L</td><td>BS</td><td>DCRS</td><td>FKGL</td><td>R-1</td><td>R-2</td><td>R-L</td><td>BS</td><td>DCRS</td><td>FKGL</td><td>R-1</td><td>R-2</td><td>R-L</td><td>BS</td><td>DCRS</td><td>FKGL</td></tr><tr><td rowspan="3">Hemicratic</td><td>Lead-3</td><td>23.86</td><td>5.66</td><td>21.48</td><td>81.17</td><td>12.66</td><td>14.82</td><td>27.41</td><td>6.87</td><td>24.61</td><td>83.36</td><td>12.66</td><td>15.08</td><td>19.41</td><td>4.06</td><td>18.02</td><td>81.65</td><td>12.65</td><td>13.30</td></tr><tr><td>Lead-K</td><td>35.69</td><td>9.07</td><td>32.70</td><td>82.86</td><td>11.69</td><td>14.49</td><td>38.28</td><td>9.45</td><td>34.8</td><td>83.72</td><td>11.88</td><td>14.95</td><td>37.27</td><td>7.53</td><td>35.18</td><td>82.05</td><td>10.58</td><td>11.89</td></tr><tr><td>Oracle</td><td>60.08</td><td>27.48</td><td>55.95</td><td>87.35</td><td>11.12</td><td>15.15</td><td>57.82</td><td>23.92</td><td>53.37</td><td>87.13</td><td>11.20</td><td>15.28</td><td>48.92</td><td>13.42</td><td>46.30</td><td>82.94</td><td>10.51</td><td>13.18</td></tr><tr><td rowspan="3">Unsupp.</td><td>TextRank</td><td>40.26</td><td>11.53</td><td>36.02</td><td>83.83</td><td>11.78</td><td>20.08</td><td>37.55</td><td>8.50</td><td>33.28</td><td>83.43</td><td>11.87</td><td>20.27</td><td>33.88</td><td>5.79</td><td>31.55</td><td>81.16</td><td>11.30</td><td>18.98</td></tr><tr><td>LexRank</td><td>38.22</td><td>13.06</td><td>35.42</td><td>83.85</td><td>9.70</td><td>14.23</td><td>31.20</td><td>9.09</td><td>28.72</td><td>82.97</td><td>9.70</td><td>14.59</td><td>32.25</td><td>5.73</td><td>30.45</td><td>80.67</td><td>9.68</td><td>13.32</td></tr><tr><td>HipoRank</td><td>36.95</td><td>10.19</td><td>33.89</td><td>83.22</td><td>12.15</td><td>14.46</td><td>37.67</td><td>9.22</td><td>34.28</td><td>83.68</td><td>12.15</td><td>14.69</td><td>31.50</td><td>5.17</td><td>29.68</td><td>80.88</td><td>12.13</td><td>12.13</td></tr><tr><td 
rowspan="6">Supervised</td><td>BART</td><td>43.34</td><td>13.14</td><td>39.80</td><td>85.48</td><td>11.33</td><td>14.40</td><td>43.52</td><td>12.09</td><td>39.67</td><td>85.70</td><td>11.29</td><td>14.54</td><td>31.17</td><td>6.74</td><td>29.20</td><td>83.55</td><td>11.15</td><td>13.87</td></tr><tr><td>BART_Scaffold</td><td>43.13</td><td>12.87</td><td>39.66</td><td>85.33</td><td>11.10</td><td>14.14</td><td>43.73</td><td>12.22</td><td>39.92</td><td>85.67</td><td>11.30</td><td>14.58</td><td>43.01</td><td>10.82</td><td>40.54</td><td>84.88</td><td>9.68</td><td>11.85</td></tr><tr><td>GPT3.5-zs</td><td>28.69</td><td>6.52</td><td>15.04</td><td>82.76</td><td>11.70</td><td>14.32</td><td>42.74</td><td>12.70</td><td>22.28</td><td>86.32</td><td>10.40</td><td>13.19</td><td>33.72</td><td>8.45</td><td>16.95</td><td>84.36</td><td>10.36</td><td>13.03</td></tr><tr><td>GPT3.5-mdc</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>-</td><td>44.41</td><td>14.16</td><td>41.12</td><td>86.55</td><td>10.36</td><td>13.32</td><td>37.97</td><td>9.39</td><td>35.57</td><td>84.22</td><td>10.78</td><td>13.70</td></tr><tr><td>ATLAS</td><td>45.87</td><td>14.08</td><td>42.32</td><td>85.54</td><td>10.96</td><td>14.21</td><td>44.44</td><td>12.33</td><td>40.60</td><td>85.70</td><td>11.22</td><td>14.58</td><td>46.80</td><td>12.57</td><td>44.14</td><td>85.20</td><td>8.95</td><td>10.87</td></tr><tr><td>ATLAS_Oracle</td><td>46.11</td><td>14.07</td><td>42.51</td><td>85.69</td><td>10.99</td><td>14.13</td><td>44.97</td><td>12.49</td><td>41.02</td><td>85.82</td><td>11.21</td><td>14.48</td><td>46.61</td><td>12.29</td><td>43.95</td><td>85.11</td><td>9.18</td><td>11.39</td></tr></table>
|
| 84 |
+
|
| 85 |
+
and eLife. We include the results for ATLAS under two conditions: 1) one utilizing the average value for each attribute observed in the training data for each summary type (ATLAS); and 2) one using true attribute values obtained from gold standard summaries $(\mathrm{ATLAS}_{\mathrm{Oracle}})$ , where $\mathrm{ATLAS}_{\mathrm{Oracle}}$ is intended to provide an upper bound of the obtainable performance using our control attributes.
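As a rough illustration of these two conditions, the following minimal sketch (not the authors' released code) prepends one control token per attribute bin to the source article, using either fixed training-set values for the target summary type (the ATLAS setting) or values measured on the gold summary (the ATLAS_Oracle setting). The token format, attribute names, and bin values are illustrative assumptions.

```python
# Illustrative sketch of attribute-token conditioning; the token format and
# helper names are assumptions, not the authors' released implementation.
from typing import Dict

# Control attributes discussed in the paper: Length (L), Readability (R),
# Content Word Entropy (CWE), Background information (BG).
ATTRIBUTES = ["L", "R", "CWE", "BG"]

def build_control_prefix(bins: Dict[str, int]) -> str:
    """Render one special token per attribute bin, e.g. '<L_2> <R_1> <CWE_1> <BG_3>'."""
    return " ".join(f"<{name}_{bins[name]}>" for name in ATTRIBUTES)

def conditioned_input(article: str, bins: Dict[str, int]) -> str:
    """Prepend the control tokens to the source article before encoding."""
    return build_control_prefix(bins) + " " + article

# ATLAS setting: attribute values taken from the training data for the target
# summary type (the numbers below are placeholders, not real statistics).
lay_summary_bins = {"L": 2, "R": 1, "CWE": 1, "BG": 3}

# ATLAS_Oracle setting: attribute values measured on the gold-standard summary.
oracle_bins = {"L": 3, "R": 0, "CWE": 1, "BG": 2}

article = "Pre-trained language models ..."  # truncated source document
print(conditioned_input(article, lay_summary_bins))
print(conditioned_input(article, oracle_bins))
```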
|
| 86 |
+
|
| 87 |
+
For all metrics, it is evident from Table 2 that ATLAS exceeds the performance of all baseline approaches for both eLife lay summaries and abstracts, demonstrating a strong ability to control the technicality of generated text whilst producing high-quality summaries. Interestingly, although the GPT3.5-mdc baseline achieves a slightly stronger all-round performance for PLOS lay summaries, it fails to maintain this for the more "lay" summaries of eLife where ATLAS achieves significantly better performance, indicating that our control attributes can effectively capture these differences.
|
| 88 |
+
|
| 89 |
+
In all cases, ATLAS also achieves scores that are comparable to (and sometimes exceeding) those of $\mathrm{ATLAS}_{\text{Oracle}}$, suggesting that the use of the most frequently observed bin value for control attributes is effective for producing the appropriate characteristics for each summary type.
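A minimal sketch of how the most frequently observed bin value per attribute could be derived from the training data is given below; the data structures are hypothetical and stand in for the binned attribute annotations.

```python
# Hypothetical sketch: pick the most frequent attribute bin per summary type.
from collections import Counter
from typing import Dict, List

def mode_bins(training_bins: List[Dict[str, int]]) -> Dict[str, int]:
    """training_bins: one dict of {attribute: bin} per training summary."""
    per_attribute: Dict[str, Counter] = {}
    for example in training_bins:
        for attribute, bin_id in example.items():
            per_attribute.setdefault(attribute, Counter())[bin_id] += 1
    return {attr: counts.most_common(1)[0][0] for attr, counts in per_attribute.items()}

# Toy data standing in for binned attributes of PLOS lay summaries.
plos_lay = [{"L": 2, "R": 1}, {"L": 2, "R": 2}, {"L": 3, "R": 1}]
print(mode_bins(plos_lay))  # {'L': 2, 'R': 1}
```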
|
| 90 |
+
|
| 91 |
+
Ablation study. To assess the contribution of each attribute to model performance, we conduct an ablation study, evaluating $\mathrm{ATLAS}_{\mathrm{Oracle}}$ under different configurations. Table 3 reports the results of this study for abstracts and lay summaries on the combined test sets of PLOS and eLife.
|
| 92 |
+
|
| 93 |
+
The table shows that the removal of control attributes has a significant detrimental effect on performance. Additionally, when only a single attribute is included, the length-based control has
|
| 94 |
+
|
| 95 |
+
Table 2: Summarization performance on the PLOS and eLife test sets (abstracts combined). $\mathrm{R} =$ ROUGE F1 (↑), BS = BERTScore (↑), DCRS = Dale-Chall Readability Score (↓), FKGL = Flesch-Kincaid Grade Level (↓). For supervised models, we highlight the best score obtained for each metric in bold and underline the second best.
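For reference, these metrics can be computed with standard open-source packages; the snippet below is a hedged sketch using the rouge-score, bert-score, and textstat libraries, where the exact settings used in the paper (e.g., stemming, the BERTScore model) are assumptions rather than reproduced configuration.

```python
# Sketch of computing the Table 2 metrics with common libraries.
# pip install rouge-score bert-score textstat
from rouge_score import rouge_scorer
from bert_score import score as bertscore
import textstat

prediction = "The study shows that fatigue slows down how quickly people learn new movements."
reference = "The authors find that muscle fatigue impairs the learning of new motor skills."

# ROUGE F1 (R-1, R-2, R-L); stemming is an assumed setting.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
rouge = {name: result.fmeasure for name, result in scorer.score(reference, prediction).items()}

# BERTScore F1 (BS); the default English model is an assumed setting.
_, _, f1 = bertscore([prediction], [reference], lang="en")

# Readability of the generated summary: lower means easier to read.
dcrs = textstat.dale_chall_readability_score(prediction)   # DCRS
fkgl = textstat.flesch_kincaid_grade(prediction)           # FKGL

print(rouge, float(f1[0]), dcrs, fkgl)
```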
|
| 96 |
+
|
| 97 |
+
<table><tr><td rowspan="2">Model</td><td colspan="4">Lay summary</td><td colspan="4">Abstract</td></tr><tr><td>R-1</td><td>R-2</td><td>R-L</td><td>DCRS</td><td>R-1</td><td>R-2</td><td>R-L</td><td>DCRS</td></tr><tr><td>BART</td><td>41.68</td><td>11.29</td><td>38.12</td><td>11.27</td><td>43.34</td><td>13.14</td><td>39.80</td><td>11.33</td></tr><tr><td>+R</td><td>43.34</td><td>12.03</td><td>39.75</td><td>10.91</td><td>43.49</td><td>13.23</td><td>39.95</td><td>11.12</td></tr><tr><td>+BG</td><td>42.52</td><td>11.71</td><td>39.01</td><td>11.01</td><td>43.74</td><td>13.65</td><td>40.35</td><td>10.98</td></tr><tr><td>+CWE</td><td>41.58</td><td>11.21</td><td>38.04</td><td>11.28</td><td>44.23</td><td>13.48</td><td>40.56</td><td>11.35</td></tr><tr><td>+L</td><td>44.22</td><td>12.21</td><td>40.55</td><td>10.81</td><td>44.83</td><td>13.75</td><td>41.31</td><td>11.03</td></tr><tr><td>+L+BG</td><td>44.66</td><td>12.36</td><td>40.96</td><td>10.99</td><td>45.67</td><td>13.78</td><td>42.02</td><td>11.17</td></tr><tr><td>+L+R</td><td>44.52</td><td>12.10</td><td>40.73</td><td>10.92</td><td>45.54</td><td>13.64</td><td>41.78</td><td>11.21</td></tr><tr><td>+L+CWE</td><td>44.72</td><td>12.41</td><td>41.04</td><td>10.88</td><td>45.87</td><td>13.99</td><td>42.32</td><td>10.10</td></tr><tr><td>+L+R+BG</td><td>44.82</td><td>12.41</td><td>41.10</td><td>10.97</td><td>45.94</td><td>14.07</td><td>42.32</td><td>11.10</td></tr><tr><td>+L+R+CWE</td><td>44.83</td><td>12.39</td><td>41.05</td><td>10.90</td><td>45.60</td><td>13.63</td><td>41.84</td><td>11.21</td></tr><tr><td>+L+BG+CWE</td><td>45.01</td><td>12.56</td><td>41.38</td><td>10.88</td><td>46.04</td><td>14.16</td><td>42.44</td><td>11.06</td></tr><tr><td>ATLASOracle</td><td>45.22</td><td>12.47</td><td>41.45</td><td>10.91</td><td>46.11</td><td>14.07</td><td>42.51</td><td>10.99</td></tr></table>
|
| 98 |
+
|
| 99 |
+
Table 3: Ablation study on the ROUGE-based performance of ATLAS under different configurations using true attribute values. "+" denotes aspect addition. L = Length, R = Readability, CWE = Content Word Entropy, BG = Background information.
|
| 100 |
+
|
| 101 |
+
the highest ROUGE scores, particularly for lay summaries. This is to be expected, as lay summaries are known to differ significantly in length between PLOS (avg. 175.6 words) and eLife (avg. 347.6 words). When employing attributes in combination, we can see that the addition of content word entropy control and the subsequent addition of background information control have the greatest benefit to performance for ATLAS with 2 and 3 attributes, respectively. Interestingly, no attribute emerges clearly as the least effective as, although readability score control is the only one not included in the 3 attribute model, its inclusion in the single attribute model has clear benefits for lay summary performance. This provides further evidence that, in combination, our control attributes are able to capture the differences between summary types and effectuate them during generation.
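The attribute definitions are not restated in this section, but as a hedged illustration, content word entropy (CWE) can be read as the Shannon entropy of a summary's content-word distribution; the sketch below assumes a simple stop-word filter and unigram counts, which may differ from the authors' exact formulation.

```python
# Hedged sketch: one plausible way to compute content word entropy (CWE).
import math
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "are", "that", "this", "for"}

def content_word_entropy(text: str) -> float:
    tokens = [t.lower().strip(".,;:!?") for t in text.split()]
    content = [t for t in tokens if t and t not in STOPWORDS]
    counts = Counter(content)
    total = sum(counts.values())
    if total == 0:
        return 0.0
    # Shannon entropy (in bits) of the unigram distribution over content words.
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(round(content_word_entropy("Fatigue impairs motor learning in the trained hand."), 3))
```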
|
| 102 |
+
|
| 103 |
+
Human evaluation. To provide a comprehensive assessment of the summaries generated, we conducted a human evaluation involving our proposed model ATLAS and the strongest baseline model
|
| 104 |
+
|
| 105 |
+
<table><tr><td rowspan="2">Criteria</td><td colspan="2">eLife</td><td colspan="2">PLOS</td></tr><tr><td>BART</td><td>ATLAS</td><td>BART</td><td>ATLAS</td></tr><tr><td>Comprehensiveness</td><td>2.30</td><td>2.65</td><td>2.00</td><td>2.55</td></tr><tr><td>Layness</td><td>2.60</td><td>3.05</td><td>2.10</td><td>2.45</td></tr><tr><td>Factuality</td><td>2.20</td><td>2.85</td><td>2.05</td><td>2.40</td></tr></table>
|
| 106 |
+
|
| 107 |
+
(BART) using two experts. $^{5}$ Specifically, adopting a setting similar to that of Goldsack et al. (2022), we take a random sample of 10 articles from the test split of each dataset. Alongside each model-generated lay summary, judges are presented with both the abstract and reference lay summary of the given article. We choose not to provide judges with the full article text in an effort to minimise the complexity of the evaluation and the cognitive burden placed upon them. Using a 1-5 Likert scale, the judges are asked to rate the model output based on three criteria: (1) Comprehensiveness: to what extent does the model output contain the information that might be necessary for a non-expert to understand the high-level topic of the article and the significance of the research; (2) Layness: to what extent is the content of the model output comprehensible (or readable) to a non-expert, in terms of both structure and language; (3) Factuality: to what extent is the model-generated lay summary factually consistent with the two other provided summaries (i.e. the abstract and reference lay summary). $^{6}$
|
| 108 |
+
|
| 109 |
+
Table 4 presents the average ratings from our manual evaluation. We calculate Cohen's kappa scores to measure inter-rater reliability, obtaining values of 0.50 and 0.57 for eLife and PLOS respectively, attesting to the reliability of our evaluation. The overall results suggest that our proposed method performs better than the BART baseline in terms of all three criteria on both datasets, attesting to the quality of its outputs. In terms of layness, the higher scores observed for the eLife dataset compared to the PLOS dataset align with the previous analysis of the two datasets by Goldsack et al. (2022). Moreover, compared to the baseline, it is worth noting that our model is judged to produce markedly more factually correct outputs on both datasets, suggesting that our method generates fewer hallucinations.
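Agreement figures of this kind can be obtained with scikit-learn's cohen_kappa_score; the sketch below uses made-up ratings purely to show the call, not the study's actual annotations.

```python
# Sketch: Cohen's kappa between two judges' 1-5 Likert ratings (toy data).
from sklearn.metrics import cohen_kappa_score

judge_a = [3, 4, 2, 5, 3, 4, 2, 3, 4, 5]
judge_b = [3, 4, 3, 5, 3, 3, 2, 3, 4, 4]

print(cohen_kappa_score(judge_a, judge_b))
```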
|
| 110 |
+
|
| 111 |
+
Table 4: Human evaluation on eLife and PLOS. Mean evaluator ratings (1-5) obtained by BART and ATLAS outputs for each metric.
|
| 112 |
+
|
| 113 |
+
<table><tr><td colspan="2">Model</td><td>FKGL</td><td>CLI</td><td>DCRS</td></tr><tr><td rowspan="2">PLOS</td><td>ATLAS<sub>technical</sub></td><td>15.11</td><td>14.21</td><td>11.64</td></tr><tr><td>ATLAS<sub>lay</sub></td><td>13.22</td><td>13.97</td><td>11.22</td></tr><tr><td rowspan="2">eLife</td><td>ATLAS<sub>technical</sub></td><td>14.77</td><td>14.02</td><td>11.32</td></tr><tr><td>ATLAS<sub>lay</sub></td><td>10.89</td><td>11.45</td><td>9.17</td></tr></table>
|
| 114 |
+
|
| 115 |
+
Table 5: Readability metrics for two versions of ATLAS with highly lay and technical attribute values.
|
| 116 |
+
|
| 117 |
+
Controllability analysis. To assess the extent to which our control attributes enable controllability over the overall layness of the text, we conduct a further analysis using two additional versions of ATLAS with highly lay or technical values. Specifically, we create $\mathrm{ATLAS}_{lay}$ and $\mathrm{ATLAS}_{technical}$ by selecting the lowest and highest attribute bins, respectively, for which there are at least 100 observations in the training data (for all attributes other than length which is kept constant).
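The bin selection described above can be sketched as follows, assuming attribute values have already been discretised and counted over the training set; the counts shown are placeholders, while the threshold of 100 observations follows the text.

```python
# Sketch: pick the lowest and highest attribute bins with >= 100 training observations.
from typing import Dict, Tuple

def extreme_bins(bin_counts: Dict[int, int], min_obs: int = 100) -> Tuple[int, int]:
    eligible = [b for b, n in bin_counts.items() if n >= min_obs]
    return min(eligible), max(eligible)

# Toy counts for one attribute's bins in the training data (placeholders).
readability_bins = {0: 40, 1: 180, 2: 950, 3: 600, 4: 120, 5: 30}
lay_bin, technical_bin = extreme_bins(readability_bins)
print(lay_bin, technical_bin)  # 1 4 -> used for ATLAS_lay / ATLAS_technical respectively
```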
|
| 118 |
+
|
| 119 |
+
We examine how these extreme attributes manifest themselves in generated summaries by calculating the average readability values obtained by the generated summaries for both datasets. We present the results of the analysis in Table 5, which show a significant divergence in the readability values obtained by each model on both datasets. Interestingly, this divergence is substantially wider for summaries generated on eLife, the dataset which is identified by Goldsack et al. (2022) as containing lay summaries that are more "lay" than those of PLOS, suggesting that exposure to more extreme values whilst training on this dataset may enable even greater controllability at inference time. $^{7}$
|
| 120 |
+
|
| 121 |
+
# 5 Conclusion
|
| 122 |
+
|
| 123 |
+
In this paper, we introduce ATLAS, a model for controllable lay summarisation that employs controllable attribute tokens to influence various properties of the generated summary, enabling it to cater to users of different levels of expertise. Using combined datasets for biomedical lay summarisation, we perform multiple experiments in which we confirm the ability of our selected control attributes to discriminate between summary types, demonstrate their effectiveness for controllable lay summarisation, and further investigate their ability to effectuate desired differences during generation.
|
| 124 |
+
|
| 125 |
+
# Limitations
|
| 126 |
+
|
| 127 |
+
Although our results demonstrate that our selected control attributes are able to effectively capture the characteristics between summary types, it is highly likely that there are additional attributes that we have not explored that could benefit performance for controllable lay summarisation. We plan to explore this in future work, in addition to experimenting with more complex methods for enabling controllability.
|
| 128 |
+
|
| 129 |
+
# References
|
| 130 |
+
|
| 131 |
+
Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3615-3620, Hong Kong, China. Association for Computational Linguistics.
|
| 132 |
+
Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, and Dan Weld. 2019. Pretrained language models for sequential sentence classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3693-3699, Hong Kong, China. Association for Computational Linguistics.
|
| 133 |
+
Nachshon Cohen, Oren Kalinsky, Yftah Ziser, and Alessandro Moschitti. 2021. Wikisum: Coherent summarization dataset for efficient human-evaluation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 2: Short Papers), Virtual Event, August 1-6, 2021, pages 212-219. Association for Computational Linguistics.
|
| 134 |
+
Franck Dernoncourt and Ji Young Lee. 2017. PubMed 200k RCT: a dataset for sequential sentence classification in medical abstracts. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 308-313, Taipei, Taiwan. Asian Federation of Natural Language Processing.
|
| 135 |
+
Yue Dong, Andrei Mircea, and Jackie Chi Kit Cheung. 2021. Discourse-Aware unsupervised summarization for long scientific documents. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1089-1102, Online. Association for Computational Linguistics.
|
| 136 |
+
G. Erkan and D. R. Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. Journal of Artificial Intelligence Research, 22:457-479.
|
| 137 |
+
|
| 138 |
+
Tomas Goldsack, Zheheng Luo, Qianqian Xie, Carolina Scarton, Matthew Shardlow, Sophia Ananiadou, and Chenghua Lin. 2023a. Overview of the biolaysumm 2023 shared task on lay summarization of biomedical research articles. In The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, pages 468-477, Toronto, Canada. Association for Computational Linguistics.
|
| 139 |
+
Tomas Goldsack, Zhihao Zhang, Chenghua Lin, and Carolina Scarton. 2022. Making science simple: Corpora for the lay summarisation of scientific literature. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10589-10604, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
|
| 140 |
+
Tomas Goldsack, Zhihao Zhang, Chen Tang, Carolina Scarton, and Chenghua Lin. 2023b. Enhancing biomedical lay summarisation with external knowledge graphs. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 8016-8032, Singapore. Association for Computational Linguistics.
|
| 141 |
+
Yue Guo, Wei Qiu, Yizhong Wang, and Trevor Cohen. 2021. Automated Lay Language Summarization of Biomedical Scientific Reviews. Proceedings of the AAAI Conference on Artificial Intelligence, 35(1):160-168.
|
| 142 |
+
Stuart R F King, Emma Pewsey, and Sarah Shailes. 2017. Plain-language Summaries of Research: An inside guide to eLife digests. eLife, 6:e25410.
|
| 143 |
+
Xiangzhe Kong, Jialiang Huang, Ziquan Tung, Jian Guan, and Minlie Huang. 2021. Stylized story generation with style-guided planning. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 2430-2436, Online. Association for Computational Linguistics.
|
| 144 |
+
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 7871-7880. Association for Computational Linguistics.
|
| 145 |
+
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.
|
| 146 |
+
Zheheng Luo, Qianqian Xie, and Sophia Ananiadou. 2022. Readability controllable biomedical document summarization. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4667-4680, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
|
| 147 |
+
|
| 148 |
+
Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60, Baltimore, Maryland. Association for Computational Linguistics.
|
| 149 |
+
|
| 150 |
+
Louis Martin, Éric de la Clergerie, Benoit Sagot, and Antoine Bordes. 2020. Controllable sentence simplification. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 4689-4698, Marseille, France. European Language Resources Association.
|
| 151 |
+
|
| 152 |
+
Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404-411, Barcelona, Spain. Association for Computational Linguistics.
|
| 153 |
+
|
| 154 |
+
Kim Cheng Sheang and Horacio Saggion. 2021. Controllable sentence simplification with a unified text-to-text transfer transformer. In Proceedings of the 14th International Conference on Natural Language Generation, pages 341-352, Aberdeen, Scotland, UK. Association for Computational Linguistics.
|
| 155 |
+
|
| 156 |
+
Chen Tang, Tyler Loakman, and Chenghua Lin. 2024. A cross-attention augmented model for event-triggered context-aware story generation. Computer Speech & Language, page 101662.
|
| 157 |
+
|
| 158 |
+
Oisin Turbitt, Robert Bevan, and Mouhamad Aboshokor. 2023. MDC at BioLaySumm task 1: Evaluating GPT models for biomedical lay summarization. In The 22nd Workshop on Biomedical Natural Language Processing and BioNLP Shared Tasks, pages 611-619, Toronto, Canada. Association for Computational Linguistics.
|
| 159 |
+
|
| 160 |
+
Liqiang Xiao, Lu Wang, Hao He, and Yaohui Jin. 2020. Modeling content importance for summarization with pre-trained language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3606-3611, Online. Association for Computational Linguistics.
|
| 161 |
+
|
| 162 |
+
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT.
|
| 163 |
+
|
| 164 |
+
# A Appendix
|
| 165 |
+
|
| 166 |
+
ChatGPT Baseline Prompts. The prompts provided to ChatGPT for each summary type are given in Table 6. To ensure a fair comparison, we control the length of the GPT baselines using generation arguments (e.g., max_new_tokens), as sketched below. Note that we differentiate between the lay summary types (namely, PLOS and eLife) based on distinctions made by Goldsack et al. (2022), who recognise
|
| 167 |
+
|
| 168 |
+
PLOS' summaries as the less "lay" of the two, making them better suited to an audience with some technical knowledge.
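As an illustration of this length control, the sketch below caps generation with max_new_tokens, using a generic Hugging Face causal language model as a stand-in, since the exact GPT-3.5 API call and argument values used in the paper are not shown here.

```python
# Sketch: controlling baseline summary length via max_new_tokens
# (generic Hugging Face model as a stand-in for the GPT3.5 API).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model, not the one used in the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = ("Summarize the following article for a non-expert audience that has "
          "no familiarity with the technical aspects of the content:\n" + "ARTICLE TEXT ...")

inputs = tokenizer(prompt, return_tensors="pt")
# Cap the summary length, e.g. near the average eLife lay-summary length (~350 words).
outputs = model.generate(**inputs, max_new_tokens=400, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```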
|
| 169 |
+
|
| 170 |
+
<table><tr><td>Summary Type</td><td>Prompt</td></tr><tr><td>Abstract</td><td>Summarize the following article for an expert audience that is familiar with the technical aspects of the content</td></tr><tr><td>PLOS lay summary</td><td>Summarize the following article for a non-expert audience that has some familiarity with the technical aspects of the content</td></tr><tr><td>eLife lay summary</td><td>Summarize the following article for a non-expert audience that has no familiarity with the technical aspects of the content</td></tr></table>
|
| 171 |
+
|
| 172 |
+
Table 6: Prompts used for the GPT3.5-zs baseline for each summary type.
|
| 173 |
+
|
| 174 |
+
# eLife
|
| 175 |
+
|
| 176 |
+
# ATLAS<sup>technical</sup>
|
| 177 |
+
|
| 178 |
+
The effects of muscle fatigue on motor learning under fatigue are poorly understood. Here, we investigated the effect of fatigue on learning under a sequential pinch force task. Irrespective of whether the observed fatigue effects are domain-specific or present in another task that is cognitive demanding but requires minimal force control, we found that participants had impaired skill learning in both the fatigued and unfatigued effector. We replicated the findings of experiment 1 and found that disruption of rTMS to the motor cortex (Cantarero et al., 2013a) alleviated the adverse effects of fatigue. Cortical excitability was similar to that observed in the untrained effector, but not in the fatigued. Altogether, our findings suggest that motor fatigue has a domain-dependent lasting effect on skill learning. Future studies should focus on understanding the role of motor cortex excitability in the acquisition of motor skills under fatigue, as well as the potential role for maladaptive memory formation under fatigued conditions. Cortical and motor cortices should be included in training and rehabilitation regimens geared to improve motor skill acquisition.
|
| 179 |
+
|
| 180 |
+
# ATLASlay
|
| 181 |
+
|
| 182 |
+
Muscle fatigue is a neuromuscular phenomenon that can impair performance over time. People who experience fatigue tend to be less able to learn a new motor skill than people who experience no fatigue. However, it is not clear how fatigue affects the ability of people to learn new motor skills. One way to study the effects of fatigue is to study how people learn a motor skill under fatigue conditions. One of the main challenges in studying motor learning under fatigue is the so-termed "performance-learning" distinction In this study, participants were asked to practice a motor task over two days and then had to catch up to the skill performance level of the non-fatigued group. Unexpectedly, participants who were only fatigued at the end of the training were less likely to learn the motor skill. This suggests that fatigue has a domain-specific lasting effect on the learning of a skill. ernas et al. now show that people who are unable to recover the motor task under fatigue are more likely to be unable to learn their motor skill when they are not fatigued. The experiments show that when people are trained to perform the task, their ability to recover from fatigue is severely impaired. This effect is due to a change in the strength of the motor cortex, a region of the brain that is involved in learning and memory.
|
| 183 |
+
|
| 184 |
+
Figure 2: A case study from the eLife test set comparing summaries generated under highly lay and technical attribute values (with the length attribute kept constant).
|
| 185 |
+
|
| 186 |
+
# PLOS
|
| 187 |
+
|
| 188 |
+
# ATLAS<sup>technical</sup>
|
| 189 |
+
|
| 190 |
+
In this paper, we explore the conditions under which associations between antigenic, metabolic and virulence properties of strains within pneumococcal populations and predict how these may shift under vaccination. In this work, we use a conceptual framework to investigate the dynamics of associations between serotype, serotype and serotype-specific immunity in pneumococcus populations. We find that antigenic type (AT) is the principal determinant of non-capsular virulence factors (VF), whereas MT is the major determinant. AT and MT are highly non-random; MT and AT are co-evolved and co-expressed. ET and CT are also found to be highly correlated, suggesting that they have synergistically adapted to a particular metabolic niche. IT and LD are found to have similar patterns of linkage disequilibrium (LD) than randomly selected genes not associated with metabolic/transport processes; AT is associated with a higher frequency of LD LD than MT LD; CT LD=0.013). CT is the first mathematical model to explain the non-overlapping association between serotypic and serotypes. TCT BC LD is a useful tool for predicting the potential impact of vaccination on the prevalence of serotypes associated with non-vaccine serotypes and for predicting how they may change under vaccination and vaccine serotype replacement.
|
| 191 |
+
|
| 192 |
+
# ATLASlay
|
| 193 |
+
|
| 194 |
+
Pneumococcal populations are highly diverse in non-antigenic genes and are commonly classified into sequence types (ST) by Multi Locus Sequence Typing (MLST) of seven metabolic housekeeping genes. STs have been documented to occur regularly throughout the past 7 decades, yet many studies (eg) show an intriguing pattern of largely non-overlapping associations between serotype and ST. It has been noted that many STs that were previously associated with vaccine serotypes now occur in association with non-vaccine serotypes. It has been proposed that a combination of immune-mediated interference between identical antigenic types and direct competition between identical metabolic types can generate non-overlapping association between antigenic and STs in populations of the bacterial pathogen Neisseria meningitidis. In this paper, we explore whether pneumococcal population structure, can be explained within a similar conceptual framework. in which pathogen strains are profiled by antigenic type, AT, metabolic type (MT) and additional non-capsular virulence factors (VF).
|
| 195 |
+
|
| 196 |
+
Figure 3: A case study from the PLOS test set comparing summaries generated under highly lay and technical attribute values (with the length attribute kept constant).
|
2024/ATLAS_ Improving Lay Summarisation with Attribute-based Control/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f71998247a3b7ce0bcedb589c10c50b00821c74b7a2553574ed9d886c1e9656e
|
| 3 |
+
size 290045
|
2024/ATLAS_ Improving Lay Summarisation with Attribute-based Control/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Aligning Large Language Models via Fine-grained Supervision/d2093850-e28f-43df-a51c-a0b07b648dd6_content_list.json
ADDED
|
@@ -0,0 +1,1541 @@
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "Aligning Large Language Models via Fine-grained Supervision",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
173,
|
| 8 |
+
89,
|
| 9 |
+
825,
|
| 10 |
+
111
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Dehong Xu $^{1*}$ , Liang Qiu $^{2*}$ , Minseok Kim $^{2}$ , Faisal Ladhak $^{2}$ , Jaeyoung Do $^{3}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
188,
|
| 19 |
+
129,
|
| 20 |
+
813,
|
| 21 |
+
149
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "$^{1}$ Department of Statistics, UCLA $^{2}$ Amazon \n $^{3}$ Department of Electrical and Computer Engineering, Seoul National University Correspondence: xudehong1996@ucla.edu, liangqxx@amazon.com",
|
| 28 |
+
"bbox": [
|
| 29 |
+
173,
|
| 30 |
+
162,
|
| 31 |
+
826,
|
| 32 |
+
212
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "Abstract",
|
| 39 |
+
"text_level": 1,
|
| 40 |
+
"bbox": [
|
| 41 |
+
260,
|
| 42 |
+
252,
|
| 43 |
+
339,
|
| 44 |
+
268
|
| 45 |
+
],
|
| 46 |
+
"page_idx": 0
|
| 47 |
+
},
|
| 48 |
+
{
|
| 49 |
+
"type": "text",
|
| 50 |
+
"text": "Pre-trained large-scale language models (LLMs) excel at producing coherent articles, yet their outputs may be untruthful, toxic, or fail to align with user expectations. Current approaches focus on using reinforcement learning with human feedback (RLHF) to improve model alignment, which works by transforming coarse human preferences of LLM outputs into a feedback signal that guides the model learning process. However, because this approach operates on sequence-level feedback, it lacks the precision to identify the exact parts of the output affecting user preferences. To address this gap, we propose a method to enhance LLM alignment through fine-grained token-level supervision. Specifically, we ask annotators to minimally edit less preferred responses within the standard reward modeling dataset to make them more favorable, ensuring changes are made only where necessary while retaining most of the original content. The refined dataset is used to train a token-level reward model, which is then used for training our fine-grained Proximal Policy Optimization (PPO) model. Our experiment results demonstrate that this approach can achieve up to an absolute improvement of $5.1\\%$ in LLM performance, in terms of win rate against the reference model, compared with the traditional PPO model.",
|
| 51 |
+
"bbox": [
|
| 52 |
+
144,
|
| 53 |
+
282,
|
| 54 |
+
460,
|
| 55 |
+
709
|
| 56 |
+
],
|
| 57 |
+
"page_idx": 0
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"type": "text",
|
| 61 |
+
"text": "1 Introduction",
|
| 62 |
+
"text_level": 1,
|
| 63 |
+
"bbox": [
|
| 64 |
+
114,
|
| 65 |
+
726,
|
| 66 |
+
260,
|
| 67 |
+
741
|
| 68 |
+
],
|
| 69 |
+
"page_idx": 0
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"type": "text",
|
| 73 |
+
"text": "One key objective in advancing large language models (LLMs) is to ensure safe, beneficial human interaction. However, current pre-trained models, mostly trained on web and book texts, often generate biased or toxic text, misaligning with human intentions. To address this issue, numerous studies (Ouyang et al., 2022; Rafailov et al., 2023; Bai et al., 2022b,a; Yuan et al., 2023; Touvron",
|
| 74 |
+
"bbox": [
|
| 75 |
+
112,
|
| 76 |
+
753,
|
| 77 |
+
489,
|
| 78 |
+
881
|
| 79 |
+
],
|
| 80 |
+
"page_idx": 0
|
| 81 |
+
},
|
| 82 |
+
{
|
| 83 |
+
"type": "text",
|
| 84 |
+
"text": "et al., 2023; Ramamurthy et al., 2022) have integrated human feedback into the training process. A significant advancement is reinforcement learning from human feedback (RLHF) (Ouyang et al., 2022), which usually consists of two phases: First, a reward model (RM) is trained from preference data, which comprises various responses alongside their human-assigned preference scores for a given prompt. Then, this reward model is applied to optimize a final model using Proximal Policy Optimization (PPO) (Schulman et al., 2017).",
|
| 85 |
+
"bbox": [
|
| 86 |
+
507,
|
| 87 |
+
253,
|
| 88 |
+
884,
|
| 89 |
+
428
|
| 90 |
+
],
|
| 91 |
+
"page_idx": 0
|
| 92 |
+
},
|
| 93 |
+
{
|
| 94 |
+
"type": "text",
|
| 95 |
+
"text": "Recent works (Wu et al., 2023; Rafailov et al., 2023; Fernandes et al., 2023; Guo et al., 2023; Wang et al., 2024) discovered limitations of the current RM, specifically their misalignment with human values. This misalignment stems from two main issues: $(i)$ the presence of incorrect and ambiguous preference pairs in the human-labeled datasets; $(ii)$ the limited insight inherent in sequence-level feedback. Specifically, from a data collection standpoint, the task of comparing the overall quality of model outputs is challenging for human annotators when outputs exhibit both desired and undesired behaviors in different parts. Moreover from the RM perspective, the reliance on preference-based data labeling leads to sparse training signals. This sparsity discourages the model's ability to distinguish finer details between responses and further limits the capacity for reward optimization.",
|
| 96 |
+
"bbox": [
|
| 97 |
+
507,
|
| 98 |
+
430,
|
| 99 |
+
884,
|
| 100 |
+
734
|
| 101 |
+
],
|
| 102 |
+
"page_idx": 0
|
| 103 |
+
},
|
| 104 |
+
{
|
| 105 |
+
"type": "text",
|
| 106 |
+
"text": "To tackle this challenge, we propose the following two-fold contributions as illustrated in Figure 1:",
|
| 107 |
+
"bbox": [
|
| 108 |
+
507,
|
| 109 |
+
734,
|
| 110 |
+
882,
|
| 111 |
+
766
|
| 112 |
+
],
|
| 113 |
+
"page_idx": 0
|
| 114 |
+
},
|
| 115 |
+
{
|
| 116 |
+
"type": "list",
|
| 117 |
+
"sub_type": "text",
|
| 118 |
+
"list_items": [
|
| 119 |
+
"- We introduce a new data collection approach that asks annotators to edit responses from existing RM datasets to be more preferable. By comparing the original and edited responses, we obtain detailed token-level insights that are essential for training our fine-tuned reward model.",
|
| 120 |
+
"- We propose a new token-level reward modeling approach that provides reward signals at the token level. Different from coarse-grained"
|
| 121 |
+
],
|
| 122 |
+
"bbox": [
|
| 123 |
+
507,
|
| 124 |
+
774,
|
| 125 |
+
882,
|
| 126 |
+
917
|
| 127 |
+
],
|
| 128 |
+
"page_idx": 0
|
| 129 |
+
},
|
| 130 |
+
{
|
| 131 |
+
"type": "page_footnote",
|
| 132 |
+
"text": "* Corresponding authors.",
|
| 133 |
+
"bbox": [
|
| 134 |
+
139,
|
| 135 |
+
891,
|
| 136 |
+
302,
|
| 137 |
+
904
|
| 138 |
+
],
|
| 139 |
+
"page_idx": 0
|
| 140 |
+
},
|
| 141 |
+
{
|
| 142 |
+
"type": "page_footnote",
|
| 143 |
+
"text": "† Author performed the work while interned at Amazon.",
|
| 144 |
+
"bbox": [
|
| 145 |
+
139,
|
| 146 |
+
904,
|
| 147 |
+
485,
|
| 148 |
+
917
|
| 149 |
+
],
|
| 150 |
+
"page_idx": 0
|
| 151 |
+
},
|
| 152 |
+
{
|
| 153 |
+
"type": "page_number",
|
| 154 |
+
"text": "673",
|
| 155 |
+
"bbox": [
|
| 156 |
+
485,
|
| 157 |
+
927,
|
| 158 |
+
515,
|
| 159 |
+
939
|
| 160 |
+
],
|
| 161 |
+
"page_idx": 0
|
| 162 |
+
},
|
| 163 |
+
{
|
| 164 |
+
"type": "footer",
|
| 165 |
+
"text": "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 673-680 August 11-16, 2024 ©2024 Association for Computational Linguistics",
|
| 166 |
+
"bbox": [
|
| 167 |
+
95,
|
| 168 |
+
945,
|
| 169 |
+
900,
|
| 170 |
+
972
|
| 171 |
+
],
|
| 172 |
+
"page_idx": 0
|
| 173 |
+
},
|
| 174 |
+
{
|
| 175 |
+
"type": "text",
|
| 176 |
+
"text": "Phase 1: Data collection",
|
| 177 |
+
"text_level": 1,
|
| 178 |
+
"bbox": [
|
| 179 |
+
122,
|
| 180 |
+
96,
|
| 181 |
+
277,
|
| 182 |
+
107
|
| 183 |
+
],
|
| 184 |
+
"page_idx": 1
|
| 185 |
+
},
|
| 186 |
+
{
|
| 187 |
+
"type": "text",
|
| 188 |
+
"text": "Prompt: What are the top three most abundant elements found in the Earth's crust?",
|
| 189 |
+
"bbox": [
|
| 190 |
+
119,
|
| 191 |
+
110,
|
| 192 |
+
531,
|
| 193 |
+
120
|
| 194 |
+
],
|
| 195 |
+
"page_idx": 1
|
| 196 |
+
},
|
| 197 |
+
{
|
| 198 |
+
"type": "text",
|
| 199 |
+
"text": "Model response A:",
|
| 200 |
+
"text_level": 1,
|
| 201 |
+
"bbox": [
|
| 202 |
+
119,
|
| 203 |
+
123,
|
| 204 |
+
221,
|
| 205 |
+
133
|
| 206 |
+
],
|
| 207 |
+
"page_idx": 1
|
| 208 |
+
},
|
| 209 |
+
{
|
| 210 |
+
"type": "text",
|
| 211 |
+
"text": "The three most abundant elements are oxygen, silicon, and iron. Oxygen combines with other elements to form oxides and silicon is the most common mineral ...",
|
| 212 |
+
"bbox": [
|
| 213 |
+
119,
|
| 214 |
+
134,
|
| 215 |
+
475,
|
| 216 |
+
162
|
| 217 |
+
],
|
| 218 |
+
"page_idx": 1
|
| 219 |
+
},
|
| 220 |
+
{
|
| 221 |
+
"type": "text",
|
| 222 |
+
"text": "Model response B:",
|
| 223 |
+
"text_level": 1,
|
| 224 |
+
"bbox": [
|
| 225 |
+
119,
|
| 226 |
+
170,
|
| 227 |
+
221,
|
| 228 |
+
181
|
| 229 |
+
],
|
| 230 |
+
"page_idx": 1
|
| 231 |
+
},
|
| 232 |
+
{
|
| 233 |
+
"type": "text",
|
| 234 |
+
"text": "The Earth's crust is rich in elements, with oxygen, silicon, and calcium. Oxygen makes up about $46.6\\%$ of the Earth's crust ...",
|
| 235 |
+
"bbox": [
|
| 236 |
+
119,
|
| 237 |
+
181,
|
| 238 |
+
426,
|
| 239 |
+
200
|
| 240 |
+
],
|
| 241 |
+
"page_idx": 1
|
| 242 |
+
},
|
| 243 |
+
{
|
| 244 |
+
"type": "text",
|
| 245 |
+
"text": "Human feedback:",
|
| 246 |
+
"text_level": 1,
|
| 247 |
+
"bbox": [
|
| 248 |
+
119,
|
| 249 |
+
212,
|
| 250 |
+
215,
|
| 251 |
+
223
|
| 252 |
+
],
|
| 253 |
+
"page_idx": 1
|
| 254 |
+
},
|
| 255 |
+
{
|
| 256 |
+
"type": "image",
|
| 257 |
+
"img_path": "images/9dff904be8f4f15936aa88a4f5bf3650d3c4ecef06e07bc5df4ffdd9ad656bc9.jpg",
|
| 258 |
+
"image_caption": [],
|
| 259 |
+
"image_footnote": [],
|
| 260 |
+
"bbox": [
|
| 261 |
+
210,
|
| 262 |
+
228,
|
| 263 |
+
248,
|
| 264 |
+
256
|
| 265 |
+
],
|
| 266 |
+
"page_idx": 1
|
| 267 |
+
},
|
| 268 |
+
{
|
| 269 |
+
"type": "image",
|
| 270 |
+
"img_path": "images/b2afbe7da829cd2e4f31becb498a990381b57544bfb5272d81a7d79fb51799ba.jpg",
|
| 271 |
+
"image_caption": [],
|
| 272 |
+
"image_footnote": [],
|
| 273 |
+
"bbox": [
|
| 274 |
+
267,
|
| 275 |
+
235,
|
| 276 |
+
344,
|
| 277 |
+
255
|
| 278 |
+
],
|
| 279 |
+
"page_idx": 1
|
| 280 |
+
},
|
| 281 |
+
{
|
| 282 |
+
"type": "text",
|
| 283 |
+
"text": "Phase 2: Reward modeling",
|
| 284 |
+
"text_level": 1,
|
| 285 |
+
"bbox": [
|
| 286 |
+
121,
|
| 287 |
+
269,
|
| 288 |
+
292,
|
| 289 |
+
282
|
| 290 |
+
],
|
| 291 |
+
"page_idx": 1
|
| 292 |
+
},
|
| 293 |
+
{
|
| 294 |
+
"type": "text",
|
| 295 |
+
"text": "New prompt: What is the smallest ocean in the world?",
|
| 296 |
+
"bbox": [
|
| 297 |
+
121,
|
| 298 |
+
285,
|
| 299 |
+
391,
|
| 300 |
+
294
|
| 301 |
+
],
|
| 302 |
+
"page_idx": 1
|
| 303 |
+
},
|
| 304 |
+
{
|
| 305 |
+
"type": "image",
|
| 306 |
+
"img_path": "images/4fa54d6127405bd0a2bdfa95976cfb1e5078455dc75795bc29aad84c5ccfa62a.jpg",
|
| 307 |
+
"image_caption": [
|
| 308 |
+
"Sequence-level RM",
|
| 309 |
+
"Figure 1: The comparison between sequence-level reward modeling (Left) and our method of fine-grained reward modeling via minimal editing (Right). Our approach diverges from sequence-level reward modeling in two key aspects: (1) Data Collection, where we ask a human or LLM to edit the model response; and (2) Reward Modeling, which enables our model to assign rewards to individual tokens, as opposed to assessing the entire sequence collectively."
|
| 310 |
+
],
|
| 311 |
+
"image_footnote": [],
|
| 312 |
+
"bbox": [
|
| 313 |
+
132,
|
| 314 |
+
299,
|
| 315 |
+
186,
|
| 316 |
+
346
|
| 317 |
+
],
|
| 318 |
+
"page_idx": 1
|
| 319 |
+
},
|
| 320 |
+
{
|
| 321 |
+
"type": "text",
|
| 322 |
+
"text": "The Arctic Ocean is the smallest ocean in the world. It locates in the northmost part of the Earth and covers an area of about 12.5 million square kilometers.",
|
| 323 |
+
"bbox": [
|
| 324 |
+
201,
|
| 325 |
+
306,
|
| 326 |
+
470,
|
| 327 |
+
338
|
| 328 |
+
],
|
| 329 |
+
"page_idx": 1
|
| 330 |
+
},
|
| 331 |
+
{
|
| 332 |
+
"type": "text",
|
| 333 |
+
"text": "Sequence reward: -0.2",
|
| 334 |
+
"bbox": [
|
| 335 |
+
329,
|
| 336 |
+
344,
|
| 337 |
+
448,
|
| 338 |
+
355
|
| 339 |
+
],
|
| 340 |
+
"page_idx": 1
|
| 341 |
+
},
|
| 342 |
+
{
|
| 343 |
+
"type": "text",
|
| 344 |
+
"text": "Model response:",
|
| 345 |
+
"text_level": 1,
|
| 346 |
+
"bbox": [
|
| 347 |
+
490,
|
| 348 |
+
124,
|
| 349 |
+
581,
|
| 350 |
+
134
|
| 351 |
+
],
|
| 352 |
+
"page_idx": 1
|
| 353 |
+
},
|
| 354 |
+
{
|
| 355 |
+
"type": "text",
|
| 356 |
+
"text": "The three most abundant elements are oxygen, silicon, and iron. Oxygen combines with other elements to form oxides and silicon is the most common mineral group to form ricks like quartz. Iron is vital for forming many minerals and rocks, contributing to the Earth's magnetic field and core composition.",
|
| 357 |
+
"bbox": [
|
| 358 |
+
490,
|
| 359 |
+
134,
|
| 360 |
+
873,
|
| 361 |
+
175
|
| 362 |
+
],
|
| 363 |
+
"page_idx": 1
|
| 364 |
+
},
|
| 365 |
+
{
|
| 366 |
+
"type": "text",
|
| 367 |
+
"text": "Revised response via editing:",
|
| 368 |
+
"text_level": 1,
|
| 369 |
+
"bbox": [
|
| 370 |
+
490,
|
| 371 |
+
193,
|
| 372 |
+
648,
|
| 373 |
+
203
|
| 374 |
+
],
|
| 375 |
+
"page_idx": 1
|
| 376 |
+
},
|
| 377 |
+
{
|
| 378 |
+
"type": "text",
|
| 379 |
+
"text": "The three most abundant elements are oxygen, silicon, and aluminum. Oxygen combines with other elements to form oxides and silicon is the most common mineral group to form ricks like quartz. Aluminum is vital for forming many minerals and is significant in the crust's overall composition, found in clays and feldspars, contributing to the crust's structure and stability.",
|
| 380 |
+
"bbox": [
|
| 381 |
+
490,
|
| 382 |
+
204,
|
| 383 |
+
877,
|
| 384 |
+
256
|
| 385 |
+
],
|
| 386 |
+
"page_idx": 1
|
| 387 |
+
},
|
| 388 |
+
{
|
| 389 |
+
"type": "text",
|
| 390 |
+
"text": "Token reward: +1",
|
| 391 |
+
"bbox": [
|
| 392 |
+
657,
|
| 393 |
+
288,
|
| 394 |
+
739,
|
| 395 |
+
299
|
| 396 |
+
],
|
| 397 |
+
"page_idx": 1
|
| 398 |
+
},
|
| 399 |
+
{
|
| 400 |
+
"type": "text",
|
| 401 |
+
"text": "The Arctic Ocean is the smallest ocean in the world. It locates in the northmost part of the Earth and covers an area of about 12.5 million square kilometers.",
|
| 402 |
+
"bbox": [
|
| 403 |
+
596,
|
| 404 |
+
307,
|
| 405 |
+
865,
|
| 406 |
+
338
|
| 407 |
+
],
|
| 408 |
+
"page_idx": 1
|
| 409 |
+
},
|
| 410 |
+
{
|
| 411 |
+
"type": "text",
|
| 412 |
+
"text": "Token reward: -2",
|
| 413 |
+
"bbox": [
|
| 414 |
+
638,
|
| 415 |
+
344,
|
| 416 |
+
719,
|
| 417 |
+
355
|
| 418 |
+
],
|
| 419 |
+
"page_idx": 1
|
| 420 |
+
},
|
| 421 |
+
{
|
| 422 |
+
"type": "text",
|
| 423 |
+
"text": "sequence-level rewards, our approach offers more granular feedback, pinpointing the specific parts of a response that are effective or need improvement, which hence helps RL optimization.",
|
| 424 |
+
"bbox": [
|
| 425 |
+
127,
|
| 426 |
+
445,
|
| 427 |
+
487,
|
| 428 |
+
508
|
| 429 |
+
],
|
| 430 |
+
"page_idx": 1
|
| 431 |
+
},
|
| 432 |
+
{
|
| 433 |
+
"type": "text",
|
| 434 |
+
"text": "Experiment results using AlpacaFarm (Dubois et al., 2023) environment indicate that our proposed approach improves LLMs' performance up to $5.1\\%$ against the baseline in terms of win rate, given the same amount of data for training.",
|
| 435 |
+
"bbox": [
|
| 436 |
+
112,
|
| 437 |
+
521,
|
| 438 |
+
487,
|
| 439 |
+
602
|
| 440 |
+
],
|
| 441 |
+
"page_idx": 1
|
| 442 |
+
},
|
| 443 |
+
{
|
| 444 |
+
"type": "text",
|
| 445 |
+
"text": "2 Method",
|
| 446 |
+
"text_level": 1,
|
| 447 |
+
"bbox": [
|
| 448 |
+
112,
|
| 449 |
+
615,
|
| 450 |
+
216,
|
| 451 |
+
630
|
| 452 |
+
],
|
| 453 |
+
"page_idx": 1
|
| 454 |
+
},
|
| 455 |
+
{
|
| 456 |
+
"type": "text",
|
| 457 |
+
"text": "In this section, we introduce our approach to fine-grained data collection through editing and token-level reward modeling.",
|
| 458 |
+
"bbox": [
|
| 459 |
+
112,
|
| 460 |
+
642,
|
| 461 |
+
489,
|
| 462 |
+
689
|
| 463 |
+
],
|
| 464 |
+
"page_idx": 1
|
| 465 |
+
},
|
| 466 |
+
{
|
| 467 |
+
"type": "text",
|
| 468 |
+
"text": "2.1 Fine-grained data collection via minimal editing",
|
| 469 |
+
"text_level": 1,
|
| 470 |
+
"bbox": [
|
| 471 |
+
112,
|
| 472 |
+
702,
|
| 473 |
+
478,
|
| 474 |
+
733
|
| 475 |
+
],
|
| 476 |
+
"page_idx": 1
|
| 477 |
+
},
|
| 478 |
+
{
|
| 479 |
+
"type": "text",
|
| 480 |
+
"text": "The conventional RLHF pipeline, as outlined in prior works (Ouyang et al., 2022; Dubois et al., 2023), involves three key stages: supervised finetuning (SFT), reward modeling (RM), and proximal policy optimization (PPO). In the RM phase, the standard practice entails collecting a dataset of human evaluations comparing two or more model outputs in response to a series of prompts. The dataset is represented as $\\mathcal{D} = \\{x^{(i)},y_w^{(i)},y_l^{(i)}\\}_{i = 1}^N$ where $x$ denotes a prompt and $(y_{w},y_{l})$ indicates the preferred and less preferred responses, respectively.",
|
| 481 |
+
"bbox": [
|
| 482 |
+
112,
|
| 483 |
+
740,
|
| 484 |
+
489,
|
| 485 |
+
919
|
| 486 |
+
],
|
| 487 |
+
"page_idx": 1
|
| 488 |
+
},
|
| 489 |
+
{
|
| 490 |
+
"type": "text",
|
| 491 |
+
"text": "Utilizing such a dataset, earlier RLHF research focused on developing a reward model $R_{\\phi}$ that determines the more favored model output. This holistic reward model associates each input prompt $x$ and its corresponding output $y$ with one scalar value reflecting the output's overall quality.",
|
| 492 |
+
"bbox": [
|
| 493 |
+
507,
|
| 494 |
+
445,
|
| 495 |
+
884,
|
| 496 |
+
542
|
| 497 |
+
],
|
| 498 |
+
"page_idx": 1
|
| 499 |
+
},
|
| 500 |
+
{
|
| 501 |
+
"type": "text",
|
| 502 |
+
"text": "However, as shown in the left panel of Figure 1, annotating a pair of model outputs that are substantially different can be a difficult task for humans, especially when each response exhibits a mix of desirable and undesirable behaviors. To address this issue, we introduce a novel data collection technique aimed at obtaining fine-grained supervision, which offers richer, comparative information beyond simple binary choices. Instead of annotating entire responses, our method involves targeted editing by humans or language models, as depicted in the right panel of Figure 1. The goal is to retain the majority of the original response while making improvements to specific areas in need of enhancement. Specifically, we introduce a response editing process in which we ask humans or prompt LLMs to perform targeted modifications. For fine-grained data collection, our method works for both human annotators and language models, following (Ding et al., 2022; Gilardi et al., 2023; Wang et al., 2022; Chiang and Lee, 2023).",
|
| 503 |
+
"bbox": [
|
| 504 |
+
507,
|
| 505 |
+
546,
|
| 506 |
+
884,
|
| 507 |
+
883
|
| 508 |
+
],
|
| 509 |
+
"page_idx": 1
|
| 510 |
+
},
|
| 511 |
+
{
|
| 512 |
+
"type": "text",
|
| 513 |
+
"text": "In practice, we prompt a proprietary LLM, such as Claude-2 (Bai et al., 2022b), to apply edits to",
|
| 514 |
+
"bbox": [
|
| 515 |
+
507,
|
| 516 |
+
887,
|
| 517 |
+
882,
|
| 518 |
+
919
|
| 519 |
+
],
|
| 520 |
+
"page_idx": 1
|
| 521 |
+
},
|
| 522 |
+
{
|
| 523 |
+
"type": "header",
|
| 524 |
+
"text": "Sequence-level reward",
|
| 525 |
+
"bbox": [
|
| 526 |
+
208,
|
| 527 |
+
82,
|
| 528 |
+
366,
|
| 529 |
+
93
|
| 530 |
+
],
|
| 531 |
+
"page_idx": 1
|
| 532 |
+
},
|
| 533 |
+
{
|
| 534 |
+
"type": "header",
|
| 535 |
+
"text": "Fine-grained reward via minimal editing",
|
| 536 |
+
"bbox": [
|
| 537 |
+
547,
|
| 538 |
+
82,
|
| 539 |
+
818,
|
| 540 |
+
95
|
| 541 |
+
],
|
| 542 |
+
"page_idx": 1
|
| 543 |
+
},
|
| 544 |
+
{
|
| 545 |
+
"type": "page_number",
|
| 546 |
+
"text": "674",
|
| 547 |
+
"bbox": [
|
| 548 |
+
485,
|
| 549 |
+
928,
|
| 550 |
+
515,
|
| 551 |
+
940
|
| 552 |
+
],
|
| 553 |
+
"page_idx": 1
|
| 554 |
+
},
|
| 555 |
+
{
|
| 556 |
+
"type": "text",
|
| 557 |
+
"text": "the original output. In the experiment, the original preference pairs $(y_w, y_l)$ were not included and we only utilized $y_l$ from the original dataset for minimal editing. This approach maintains the same amount of data as the baseline methods, ensuring a fair comparison. Details of the prompt used for editing can be found in Appendix A.1, and the examples of fine-grained annotation with minimal editing are shown in Appendix A.2. Our method is based on the assumption that the edits inherently improve a response, making changes only when they enhance alignment with human values. The approach enables the refinement of responses by providing clear insights into the specific areas that require improvement.",
|
| 558 |
+
"bbox": [
|
| 559 |
+
112,
|
| 560 |
+
84,
|
| 561 |
+
489,
|
| 562 |
+
325
|
| 563 |
+
],
|
| 564 |
+
"page_idx": 2
|
| 565 |
+
},
|
| 566 |
+
{
|
| 567 |
+
"type": "text",
|
| 568 |
+
"text": "2.2 Token-level reward modeling",
|
| 569 |
+
"text_level": 1,
|
| 570 |
+
"bbox": [
|
| 571 |
+
112,
|
| 572 |
+
336,
|
| 573 |
+
389,
|
| 574 |
+
351
|
| 575 |
+
],
|
| 576 |
+
"page_idx": 2
|
| 577 |
+
},
|
| 578 |
+
{
|
| 579 |
+
"type": "text",
|
| 580 |
+
"text": "In this section, we will first introduce the RL environment and then define our token-level reward modeling scheme.",
|
| 581 |
+
"bbox": [
|
| 582 |
+
112,
|
| 583 |
+
357,
|
| 584 |
+
489,
|
| 585 |
+
404
|
| 586 |
+
],
|
| 587 |
+
"page_idx": 2
|
| 588 |
+
},
|
| 589 |
+
{
|
| 590 |
+
"type": "text",
|
| 591 |
+
"text": "Language generation can be defined as a Markov Decision Process (MDP) $\\langle S, \\mathcal{A}, \\mathcal{R}, \\mathcal{P}, \\gamma \\rangle$ . $S$ refers to the state space and we define the start state $s_1$ as the input prompts $\\{x\\}$ . An action at t-step $a_t$ is a generated token. The transition function of the environment is denoted as $\\mathcal{P}: S \\times \\mathcal{A} \\to S$ , $s_t = \\{x, a_1, \\dots, a_{t-1}\\}$ . A response $y$ of length $T$ is then $y = \\{a_1, \\dots, a_T\\}$ . In our token-level reward scheme, a reward is assigned to each generated token $a_t$ by $\\mathcal{R}: S \\times \\mathcal{A} \\to \\mathbb{R}$ , where at each time step $t$ there is a learned reward function $r_t = r_\\phi(s_t, a_t)$ . Therefore, for each response, we have a trajectory $\\tau = \\{s_1, a_1, r_1, \\dots, s_t, a_t, r_t, \\dots, s_T, a_T, r_T\\}$ .",
|
| 592 |
+
"bbox": [
|
| 593 |
+
112,
|
| 594 |
+
406,
|
| 595 |
+
489,
|
| 596 |
+
613
|
| 597 |
+
],
|
| 598 |
+
"page_idx": 2
|
| 599 |
+
},
|
| 600 |
+
{
|
| 601 |
+
"type": "text",
|
| 602 |
+
"text": "We define the reward of the whole trajectory as the average of rewards assigned to each token:",
|
| 603 |
+
"bbox": [
|
| 604 |
+
112,
|
| 605 |
+
615,
|
| 606 |
+
487,
|
| 607 |
+
646
|
| 608 |
+
],
|
| 609 |
+
"page_idx": 2
|
| 610 |
+
},
|
| 611 |
+
{
|
| 612 |
+
"type": "equation",
|
| 613 |
+
"text": "\n$$\nR (\\tau) = \\frac {1}{T} \\sum_ {t = 1} ^ {T} r _ {t}. \\tag {1}\n$$\n",
|
| 614 |
+
"text_format": "latex",
|
| 615 |
+
"bbox": [
|
| 616 |
+
231,
|
| 617 |
+
658,
|
| 618 |
+
487,
|
| 619 |
+
701
|
| 620 |
+
],
|
| 621 |
+
"page_idx": 2
|
| 622 |
+
},
|
| 623 |
+
{
|
| 624 |
+
"type": "text",
|
| 625 |
+
"text": "Following the Bradley-Terry (BT) model (Bradley and Terry, 1952) for preference modeling, we formulate the distribution of human preference for responses as below:",
|
| 626 |
+
"bbox": [
|
| 627 |
+
112,
|
| 628 |
+
711,
|
| 629 |
+
489,
|
| 630 |
+
775
|
| 631 |
+
],
|
| 632 |
+
"page_idx": 2
|
| 633 |
+
},
|
| 634 |
+
{
|
| 635 |
+
"type": "equation",
|
| 636 |
+
"text": "\n$$\n\\begin{array}{l} p \\left(\\tau^ {i} \\succ \\tau^ {j}\\right) = \\frac {\\exp \\left(R \\left(\\tau^ {i}\\right)\\right)}{\\exp \\left(R \\left(\\tau^ {i}\\right)\\right) + \\exp \\left(R \\left(\\tau^ {j}\\right)\\right)} \\tag {2} \\\\ = \\sigma (R (\\tau^ {i}) - R (\\tau^ {j})), \\\\ \\end{array}\n$$\n",
|
| 637 |
+
"text_format": "latex",
|
| 638 |
+
"bbox": [
|
| 639 |
+
132,
|
| 640 |
+
785,
|
| 641 |
+
487,
|
| 642 |
+
841
|
| 643 |
+
],
|
| 644 |
+
"page_idx": 2
|
| 645 |
+
},
|
| 646 |
+
{
|
| 647 |
+
"type": "text",
|
| 648 |
+
"text": "where $\\tau^i$ and $\\tau^j$ represent two different responses generated from the same prompt. Under the setting of our fine-grained supervision dataset, we assume $\\tau^i$ only makes edits on $\\tau^j$ while maintaining most",
|
| 649 |
+
"bbox": [
|
| 650 |
+
112,
|
| 651 |
+
854,
|
| 652 |
+
487,
|
| 653 |
+
917
|
| 654 |
+
],
|
| 655 |
+
"page_idx": 2
|
| 656 |
+
},
|
| 657 |
+
{
|
| 658 |
+
"type": "text",
|
| 659 |
+
"text": "parts unchanged. We define $U_0 = \\{t | a_t^i = a_t^j\\}$ and $U_1 = \\{t | a_t^i \\neq a_t^j\\}$ to represent the unchanged and changed parts.",
|
| 660 |
+
"bbox": [
|
| 661 |
+
507,
|
| 662 |
+
83,
|
| 663 |
+
880,
|
| 664 |
+
131
|
| 665 |
+
],
|
| 666 |
+
"page_idx": 2
|
| 667 |
+
},
|
| 668 |
+
{
|
| 669 |
+
"type": "text",
|
| 670 |
+
"text": "Regarding the reward model as a binary classifier, we use negative log-likelihood as the loss function. By plugging in Equation 1, we have:",
|
| 671 |
+
"bbox": [
|
| 672 |
+
507,
|
| 673 |
+
133,
|
| 674 |
+
882,
|
| 675 |
+
181
|
| 676 |
+
],
|
| 677 |
+
"page_idx": 2
|
| 678 |
+
},
|
| 679 |
+
{
|
| 680 |
+
"type": "equation",
|
| 681 |
+
"text": "\n$$\n\\begin{array}{l} \\mathcal {L} = - \\mathbb {E} _ {(\\tau^ {i}, \\tau^ {j}) \\sim \\mathcal {D}} \\left[ \\log \\sigma (R (\\tau^ {i}) - R (\\tau^ {j})) \\right] \\\\ = - \\mathbb {E} _ {(\\tau^ {i}, \\tau^ {j}) \\sim \\mathcal {D}} [ \\log \\sigma \\left(\\left(\\frac {1}{T ^ {i}} - \\frac {1}{T ^ {j}}\\right) \\sum_ {t \\in U _ {0}} r _ {t} \\right. \\tag {3} \\\\ + \\frac {1}{T ^ {i}} \\sum_ {t \\in U _ {1}} r _ {t} ^ {i} - \\frac {1}{T ^ {j}} \\sum_ {t \\in U _ {1}} r _ {t} ^ {j}) ], \\\\ \\end{array}\n$$\n",
|
| 682 |
+
"text_format": "latex",
|
| 683 |
+
"bbox": [
|
| 684 |
+
526,
|
| 685 |
+
189,
|
| 686 |
+
882,
|
| 687 |
+
288
|
| 688 |
+
],
|
| 689 |
+
"page_idx": 2
|
| 690 |
+
},
|
| 691 |
+
{
|
| 692 |
+
"type": "text",
|
| 693 |
+
"text": "Ideally, we aim for the unchanged part to maintain a consistent reward. Under this assumption, and if the two responses are of equal length, the first term of the loss function can be removed:",
|
| 694 |
+
"bbox": [
|
| 695 |
+
507,
|
| 696 |
+
298,
|
| 697 |
+
882,
|
| 698 |
+
361
|
| 699 |
+
],
|
| 700 |
+
"page_idx": 2
|
| 701 |
+
},
|
| 702 |
+
{
|
| 703 |
+
"type": "equation",
|
| 704 |
+
"text": "\n$$\n\\mathcal {L} \\approx - \\mathbb {E} _ {\\left(\\tau^ {i}, \\tau^ {j}\\right) \\sim \\mathcal {D}} [ \\log \\sigma \\left(\\frac {1}{T ^ {i}} \\sum_ {t \\in U _ {1}} r _ {t} ^ {i} - \\frac {1}{T ^ {j}} \\sum_ {t \\in U _ {1}} r _ {t} ^ {j}\\right) ] \\tag {4}\n$$\n",
|
| 705 |
+
"text_format": "latex",
|
| 706 |
+
"bbox": [
|
| 707 |
+
507,
|
| 708 |
+
384,
|
| 709 |
+
882,
|
| 710 |
+
437
|
| 711 |
+
],
|
| 712 |
+
"page_idx": 2
|
| 713 |
+
},
|
| 714 |
+
{
|
| 715 |
+
"type": "text",
|
| 716 |
+
"text": "For the edited part, the loss function is thus designed to maximize the reward for the preferred response and minimize it for the less favored one.",
|
| 717 |
+
"bbox": [
|
| 718 |
+
507,
|
| 719 |
+
448,
|
| 720 |
+
882,
|
| 721 |
+
495
|
| 722 |
+
],
|
| 723 |
+
"page_idx": 2
|
| 724 |
+
},
|
| 725 |
+
{
|
| 726 |
+
"type": "text",
|
| 727 |
+
"text": "With a trained token-level reward model, we can integrate it into the Proximal Policy Optimization (PPO) (Schulman et al., 2017) algorithm. In the traditional PPO-RLHF method, each token in the sequence is assigned a reward of the form $[-KL_1, -KL_2, \\dots, R - KL_n]$ , where $KL_i$ denotes the Kullback-Leibler divergence (Kullback and Leibler, 1951) for the generated token sequence up to that point, and $R$ represents the sequence-level reward from the reward model. Generalized Advantage Estimation (GAE) (Schulman et al., 2015) is then employed to calculate the advantage at the token level.",
|
| 728 |
+
"bbox": [
|
| 729 |
+
507,
|
| 730 |
+
495,
|
| 731 |
+
884,
|
| 732 |
+
705
|
| 733 |
+
],
|
| 734 |
+
"page_idx": 2
|
| 735 |
+
},
|
| 736 |
+
{
|
| 737 |
+
"type": "text",
|
| 738 |
+
"text": "In contrast, our approach assigns a reward $R_{i}$ directly from the token-level reward model to each token in the sequence, resulting in a reward vector of $[R_1,R_2,\\dots,R_n]$ . This approach enhances the granularity of feedback at each step of the sequence generation process, without changing the underlying GAE and policy update procedure. Consequently, the computational cost remains comparable to the standard RLHF approach.",
|
| 739 |
+
"bbox": [
|
| 740 |
+
507,
|
| 741 |
+
706,
|
| 742 |
+
882,
|
| 743 |
+
850
|
| 744 |
+
],
|
| 745 |
+
"page_idx": 2
|
| 746 |
+
},
|
| 747 |
+
{
|
| 748 |
+
"type": "text",
|
| 749 |
+
"text": "3 Experiments",
|
| 750 |
+
"text_level": 1,
|
| 751 |
+
"bbox": [
|
| 752 |
+
507,
|
| 753 |
+
862,
|
| 754 |
+
655,
|
| 755 |
+
878
|
| 756 |
+
],
|
| 757 |
+
"page_idx": 2
|
| 758 |
+
},
|
| 759 |
+
{
|
| 760 |
+
"type": "text",
|
| 761 |
+
"text": "In this section, we demonstrate our experimental setup and empirical results in detail.",
|
| 762 |
+
"bbox": [
|
| 763 |
+
507,
|
| 764 |
+
887,
|
| 765 |
+
880,
|
| 766 |
+
917
|
| 767 |
+
],
|
| 768 |
+
"page_idx": 2
|
| 769 |
+
},
|
| 770 |
+
{
|
| 771 |
+
"type": "page_number",
|
| 772 |
+
"text": "675",
|
| 773 |
+
"bbox": [
|
| 774 |
+
485,
|
| 775 |
+
927,
|
| 776 |
+
515,
|
| 777 |
+
940
|
| 778 |
+
],
|
| 779 |
+
"page_idx": 2
|
| 780 |
+
},
|
| 781 |
+
{
|
| 782 |
+
"type": "table",
|
| 783 |
+
"img_path": "images/9f674e8089abb909da9e8426a749fd6d3bc4b0122416efe4e7a3b2fff939b91a.jpg",
|
| 784 |
+
"table_caption": [],
|
| 785 |
+
"table_footnote": [],
|
| 786 |
+
"table_body": "<table><tr><td>Model</td><td>Win rate (%)</td></tr><tr><td>Fine-grained Token-level PPO</td><td>51.6 ± 1.8</td></tr><tr><td>Fine-grained PPO</td><td>51.2 ± 1.8</td></tr><tr><td>Davinci003 (Brown et al., 2020)</td><td>50.0</td></tr><tr><td>PPO-RLHF (Ouyang et al., 2022)</td><td>46.5 ± 1.8</td></tr></table>",
|
| 787 |
+
"bbox": [
|
| 788 |
+
115,
|
| 789 |
+
80,
|
| 790 |
+
500,
|
| 791 |
+
166
|
| 792 |
+
],
|
| 793 |
+
"page_idx": 3
|
| 794 |
+
},
|
| 795 |
+
{
|
| 796 |
+
"type": "text",
|
| 797 |
+
"text": "Table 1: Evaluation results by Claude. Davinci003 is the reference model. All results of other models are from (Dubois et al., 2023).",
|
| 798 |
+
"bbox": [
|
| 799 |
+
112,
|
| 800 |
+
175,
|
| 801 |
+
487,
|
| 802 |
+
219
|
| 803 |
+
],
|
| 804 |
+
"page_idx": 3
|
| 805 |
+
},
|
| 806 |
+
{
|
| 807 |
+
"type": "text",
|
| 808 |
+
"text": "3.1 Experimental setup",
|
| 809 |
+
"text_level": 1,
|
| 810 |
+
"bbox": [
|
| 811 |
+
112,
|
| 812 |
+
246,
|
| 813 |
+
314,
|
| 814 |
+
262
|
| 815 |
+
],
|
| 816 |
+
"page_idx": 3
|
| 817 |
+
},
|
| 818 |
+
{
|
| 819 |
+
"type": "text",
|
| 820 |
+
"text": "In constructing our dataset, we follow the framework established by AlpacaFarm (Dubois et al., 2023), which offers a simulation environment that includes data splits for SFT, RM, PPO, and evaluation processes. Building on this, we develop our refined RM dataset using the fine-grained approach, where we employ Claude-2 (Bai et al., 2022b) to perform targeted editing. Edits are generated on the less preferred responses from the original pairwise data, ensuring lightweight yet effective modifications.",
|
| 821 |
+
"bbox": [
|
| 822 |
+
112,
|
| 823 |
+
268,
|
| 824 |
+
489,
|
| 825 |
+
444
|
| 826 |
+
],
|
| 827 |
+
"page_idx": 3
|
| 828 |
+
},
|
| 829 |
+
{
|
| 830 |
+
"type": "text",
|
| 831 |
+
"text": "We evaluate our method by finetuning the pretrained $LLaMA-7B$ (Touvron et al., 2023) model. To assess the quality of our model's generation compared to baseline models, we employ a win-rate measurement, where the model $p_{\\theta}$ is evaluated against a reference model $p_{\\mathrm{ref}}$ . This method involves pairwise comparisons to estimate how often $p_{\\theta}$ 's outputs are preferred over $p_{\\mathrm{ref}}$ 's for given instructions. Both our model and the baselines are evaluated against the same reference model, Davinci003, aligning with AlpacaFarm (Dubois et al., 2023). To assess the win rate, we employ Claude as the judge, following the simulated approach in (Zheng et al., 2023).",
|
| 832 |
+
"bbox": [
|
| 833 |
+
112,
|
| 834 |
+
447,
|
| 835 |
+
489,
|
| 836 |
+
671
|
| 837 |
+
],
|
| 838 |
+
"page_idx": 3
|
| 839 |
+
},
|
| 840 |
+
{
|
| 841 |
+
"type": "text",
|
| 842 |
+
"text": "To evaluate the effectiveness of our data annotation approach and token-level reward model, we train two models: (i) Fine-grained PPO that only uses our fine-grained RM dataset with editing while still trained with a sequence-level reward, and (ii) Fine-grained Token-level PPO that incorporates both the fine-grained RM dataset and token-level reward modeling, and hence applies token-level reward to PPO.",
|
| 843 |
+
"bbox": [
|
| 844 |
+
112,
|
| 845 |
+
673,
|
| 846 |
+
489,
|
| 847 |
+
816
|
| 848 |
+
],
|
| 849 |
+
"page_idx": 3
|
| 850 |
+
},
|
| 851 |
+
{
|
| 852 |
+
"type": "text",
|
| 853 |
+
"text": "3.2 Experiment results",
|
| 854 |
+
"text_level": 1,
|
| 855 |
+
"bbox": [
|
| 856 |
+
112,
|
| 857 |
+
832,
|
| 858 |
+
310,
|
| 859 |
+
848
|
| 860 |
+
],
|
| 861 |
+
"page_idx": 3
|
| 862 |
+
},
|
| 863 |
+
{
|
| 864 |
+
"type": "text",
|
| 865 |
+
"text": "Results in human value alignment Table 1 showcases our methods (highlighted) alongside the baseline PPO-RLHF model, both trained on $LLaMA - 7B$ (Touvron et al., 2023). Results indicate",
|
| 866 |
+
"bbox": [
|
| 867 |
+
112,
|
| 868 |
+
854,
|
| 869 |
+
487,
|
| 870 |
+
917
|
| 871 |
+
],
|
| 872 |
+
"page_idx": 3
|
| 873 |
+
},
|
| 874 |
+
{
|
| 875 |
+
"type": "table",
|
| 876 |
+
"img_path": "images/ab7c494b5996c3e85859f3f08f46e304866d00f6fac620e3468541bd8ef26296.jpg",
|
| 877 |
+
"table_caption": [],
|
| 878 |
+
"table_footnote": [],
|
| 879 |
+
"table_body": "<table><tr><td>Model</td><td>Accuracy (%)</td></tr><tr><td>RM w/ Fine-grained dataset</td><td>85.2 ± 1.8</td></tr><tr><td>RM w/o Fine-grained dataset</td><td>58.2 ± 1.8</td></tr></table>",
|
| 880 |
+
"bbox": [
|
| 881 |
+
514,
|
| 882 |
+
80,
|
| 883 |
+
878,
|
| 884 |
+
134
|
| 885 |
+
],
|
| 886 |
+
"page_idx": 3
|
| 887 |
+
},
|
| 888 |
+
{
|
| 889 |
+
"type": "table",
|
| 890 |
+
"img_path": "images/eabffaf99cf4ffd76b3f985a4dfadf76076a5f88b5f61af92aca2cec71b45134.jpg",
|
| 891 |
+
"table_caption": [
|
| 892 |
+
"Table 2: Reward model accuracy. Leveraging the fine-grained dataset enhances the reward model's ability to assign correct rewards to responses."
|
| 893 |
+
],
|
| 894 |
+
"table_footnote": [],
|
| 895 |
+
"table_body": "<table><tr><td>Model</td><td>Step</td><td>Tr. hours</td></tr><tr><td>RLHF (Ouyang et al., 2022)</td><td>RM</td><td>0.2</td></tr><tr><td>Fine-grained RLHF</td><td>RM</td><td>0.3</td></tr><tr><td>RLHF (Ouyang et al., 2022)</td><td>PPO</td><td>4</td></tr><tr><td>Fine-grained RLHF</td><td>PPO</td><td>2</td></tr></table>",
|
| 896 |
+
"bbox": [
|
| 897 |
+
510,
|
| 898 |
+
200,
|
| 899 |
+
884,
|
| 900 |
+
287
|
| 901 |
+
],
|
| 902 |
+
"page_idx": 3
|
| 903 |
+
},
|
| 904 |
+
{
|
| 905 |
+
"type": "text",
|
| 906 |
+
"text": "Table 3: Training efficiency. Highlighted numbers represent the training hours (Tr. hours) of the fine-grained PPO model trained with token-level rewards.",
|
| 907 |
+
"bbox": [
|
| 908 |
+
507,
|
| 909 |
+
294,
|
| 910 |
+
882,
|
| 911 |
+
338
|
| 912 |
+
],
|
| 913 |
+
"page_idx": 3
|
| 914 |
+
},
|
| 915 |
+
{
|
| 916 |
+
"type": "text",
|
| 917 |
+
"text": "that our novel data collection technique, when integrated with standard PPO training, leads to an absolute performance increase of $4.7\\%$ compared to traditional methods (refer to lines 2 vs. 4). This highlights the effectiveness of our fine-grained data collection strategy. Moreover, when trained with the same fine-grained dataset, the token-level reward model (line 1) demonstrates further alignment improvements compared to the PPO alone (line 2), indicating the importance of token-level rewards. Together, these findings affirm that our approach significantly outperforms the traditional PPO-RLHF model.",
|
| 918 |
+
"bbox": [
|
| 919 |
+
507,
|
| 920 |
+
366,
|
| 921 |
+
882,
|
| 922 |
+
573
|
| 923 |
+
],
|
| 924 |
+
"page_idx": 3
|
| 925 |
+
},
|
| 926 |
+
{
|
| 927 |
+
"type": "text",
|
| 928 |
+
"text": "Reward model analysis To explain the observed performance increase, we further investigate the effectiveness of the reward model. We test its accuracy in assigning higher rewards to superior responses within the evaluation set. As shown in Table 2, our fine-grained dataset enables the learned reward model to reach an accuracy of approximately $85.2\\%$ , outperforming the model trained with the original dataset. This result demonstrates that our data collection method enhances the capability of our reward model to identify and appropriately reward better responses.",
|
| 929 |
+
"bbox": [
|
| 930 |
+
507,
|
| 931 |
+
586,
|
| 932 |
+
882,
|
| 933 |
+
778
|
| 934 |
+
],
|
| 935 |
+
"page_idx": 3
|
| 936 |
+
},
|
| 937 |
+
{
|
| 938 |
+
"type": "text",
|
| 939 |
+
"text": "Training efficiency Table 3 illustrates the training costs for different models. Note that all the models are trained on 8 NVIDIA A100 GPUs (80G) with the same batch size for both phases. While the training time for the reward modeling phase is comparable between our method and the baseline, our fine-grained reward model significantly boosts the efficiency of RL optimization.",
|
| 940 |
+
"bbox": [
|
| 941 |
+
507,
|
| 942 |
+
790,
|
| 943 |
+
884,
|
| 944 |
+
917
|
| 945 |
+
],
|
| 946 |
+
"page_idx": 3
|
| 947 |
+
},
|
| 948 |
+
{
|
| 949 |
+
"type": "page_number",
|
| 950 |
+
"text": "676",
|
| 951 |
+
"bbox": [
|
| 952 |
+
485,
|
| 953 |
+
928,
|
| 954 |
+
515,
|
| 955 |
+
940
|
| 956 |
+
],
|
| 957 |
+
"page_idx": 3
|
| 958 |
+
},
|
| 959 |
+
{
|
| 960 |
+
"type": "text",
|
| 961 |
+
"text": "It reduces the time required for PPO to converge to its optimal performance by half, due to our more precise and fine-grained reward function. Based on the experiment results, our reward function can provide more accurate and denser training signals, which can help RL algorithms converge faster. This improvement in training efficiency could be important for LLM alignment, especially when the size of the LLM becomes increasingly large.",
|
| 962 |
+
"bbox": [
|
| 963 |
+
112,
|
| 964 |
+
84,
|
| 965 |
+
489,
|
| 966 |
+
230
|
| 967 |
+
],
|
| 968 |
+
"page_idx": 4
|
| 969 |
+
},
|
| 970 |
+
{
|
| 971 |
+
"type": "text",
|
| 972 |
+
"text": "4 Limitations",
|
| 973 |
+
"text_level": 1,
|
| 974 |
+
"bbox": [
|
| 975 |
+
112,
|
| 976 |
+
240,
|
| 977 |
+
250,
|
| 978 |
+
255
|
| 979 |
+
],
|
| 980 |
+
"page_idx": 4
|
| 981 |
+
},
|
| 982 |
+
{
|
| 983 |
+
"type": "text",
|
| 984 |
+
"text": "Although the empirical results show that our approach achieves better performance in model alignment, we struggle to provide rigorous mathematical proof to conclusively demonstrate the effectiveness of this reward allocation strategy, specifically in Equation 4.",
|
| 985 |
+
"bbox": [
|
| 986 |
+
112,
|
| 987 |
+
266,
|
| 988 |
+
490,
|
| 989 |
+
363
|
| 990 |
+
],
|
| 991 |
+
"page_idx": 4
|
| 992 |
+
},
|
| 993 |
+
{
|
| 994 |
+
"type": "text",
|
| 995 |
+
"text": "5 Conclusion",
|
| 996 |
+
"text_level": 1,
|
| 997 |
+
"bbox": [
|
| 998 |
+
112,
|
| 999 |
+
374,
|
| 1000 |
+
247,
|
| 1001 |
+
388
|
| 1002 |
+
],
|
| 1003 |
+
"page_idx": 4
|
| 1004 |
+
},
|
| 1005 |
+
{
|
| 1006 |
+
"type": "text",
|
| 1007 |
+
"text": "In this paper, we introduce a fine-grained RLHF framework that includes a data collection technique alongside a token-level reward model. This approach enables better value alignment by learning a more accurate reward model, facilitating faster convergence for PPO. Our experimental results show performance improvement based on automatic evaluations compared to the baseline method.",
|
| 1008 |
+
"bbox": [
|
| 1009 |
+
112,
|
| 1010 |
+
399,
|
| 1011 |
+
489,
|
| 1012 |
+
527
|
| 1013 |
+
],
|
| 1014 |
+
"page_idx": 4
|
| 1015 |
+
},
|
| 1016 |
+
{
|
| 1017 |
+
"type": "text",
|
| 1018 |
+
"text": "Acknowledgments",
|
| 1019 |
+
"text_level": 1,
|
| 1020 |
+
"bbox": [
|
| 1021 |
+
114,
|
| 1022 |
+
539,
|
| 1023 |
+
278,
|
| 1024 |
+
556
|
| 1025 |
+
],
|
| 1026 |
+
"page_idx": 4
|
| 1027 |
+
},
|
| 1028 |
+
{
|
| 1029 |
+
"type": "text",
|
| 1030 |
+
"text": "We would like to thank Yi Xu, Puyang Xu and other members of Amazon, as well as Ying Nian Wu and Minglu Zhao and from University of California, Los Angeles for their valuable discussions and constructive feedback. Dehong Xu's research for this work was financially supported by Amazon during his internship at Amazon.",
|
| 1031 |
+
"bbox": [
|
| 1032 |
+
112,
|
| 1033 |
+
565,
|
| 1034 |
+
489,
|
| 1035 |
+
678
|
| 1036 |
+
],
|
| 1037 |
+
"page_idx": 4
|
| 1038 |
+
},
|
| 1039 |
+
{
|
| 1040 |
+
"type": "text",
|
| 1041 |
+
"text": "References",
|
| 1042 |
+
"text_level": 1,
|
| 1043 |
+
"bbox": [
|
| 1044 |
+
510,
|
| 1045 |
+
83,
|
| 1046 |
+
608,
|
| 1047 |
+
98
|
| 1048 |
+
],
|
| 1049 |
+
"page_idx": 4
|
| 1050 |
+
},
|
| 1051 |
+
{
|
| 1052 |
+
"type": "list",
|
| 1053 |
+
"sub_type": "ref_text",
|
| 1054 |
+
"list_items": [
|
| 1055 |
+
"Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022a. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862.",
|
| 1056 |
+
"Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022b. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073.",
|
| 1057 |
+
"Ralph Allan Bradley and Milton E Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324-345.",
|
| 1058 |
+
"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.",
|
| 1059 |
+
"Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937.",
|
| 1060 |
+
"Bosheng Ding, Chengwei Qin, Linlin Liu, Yew Ken Chia, Shafiq Joty, Boyang Li, and Lidong Bing. 2022. Is gpt-3 a good data annotator? arXiv preprint arXiv:2212.10450.",
|
| 1061 |
+
"Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. Alpacafarm: A simulation framework for methods that learn from human feedback. arXiv preprint arXiv:2305.14387.",
|
| 1062 |
+
"Patrick Fernandes, Aman Madaan, Emmy Liu, Antonio Farinhas, Pedro Henrique Martins, Amanda Bertsch, José GC de Souza, Shuyan Zhou, Tongshuang Wu, Graham Neubig, et al. 2023. Bridging the gap: A survey on integrating (human) feedback for natural language generation. arXiv preprint arXiv:2305.00955.",
|
| 1063 |
+
"Fabrizio Gilardi, Meysam Alizadeh, and Mael Kubli. 2023. Chatgpt outperforms crowd workers for text-annotation tasks. Proceedings of the National Academy of Sciences, 120(30):e2305016120.",
|
| 1064 |
+
"Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, and Ji-Rong Wen. 2023. Beyond imitation: Leveraging fine-grained quality signals for alignment. arXiv preprint arXiv:2311.04072.",
|
| 1065 |
+
"Solomon Kullback and Richard A Leibler. 1951. On information and sufficiency. The annals of mathematical statistics, 22(1):79-86."
|
| 1066 |
+
],
|
| 1067 |
+
"bbox": [
|
| 1068 |
+
510,
|
| 1069 |
+
108,
|
| 1070 |
+
884,
|
| 1071 |
+
917
|
| 1072 |
+
],
|
| 1073 |
+
"page_idx": 4
|
| 1074 |
+
},
|
| 1075 |
+
{
|
| 1076 |
+
"type": "page_number",
|
| 1077 |
+
"text": "677",
|
| 1078 |
+
"bbox": [
|
| 1079 |
+
485,
|
| 1080 |
+
928,
|
| 1081 |
+
515,
|
| 1082 |
+
940
|
| 1083 |
+
],
|
| 1084 |
+
"page_idx": 4
|
| 1085 |
+
},
|
| 1086 |
+
{
|
| 1087 |
+
"type": "list",
|
| 1088 |
+
"sub_type": "ref_text",
|
| 1089 |
+
"list_items": [
|
| 1090 |
+
"Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744.",
|
| 1091 |
+
"Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290.",
|
| 1092 |
+
"Rajkumar Ramamurthy, Prithviraj Ammanabrolu, Kianté Brantley, Jack Hessel, Rafet Sifa, Christian Bauckhage, Hannaneh Hajishirzi, and Yejin Choi. 2022. Is reinforcement learning (not) for natural language processing?: Benchmarks, baselines, and building blocks for natural language policy optimization. arXiv preprint arXiv:2210.01241.",
|
| 1093 |
+
"John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. 2015. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438.",
|
| 1094 |
+
"John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.",
|
| 1095 |
+
"Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.",
|
| 1096 |
+
"Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou, Caishuang Huang, Wei Shen, Senjie Jin, Enyu Zhou, Chenyu Shi, et al. 2024. Secrets of rlhf in large language models part ii: Reward modeling. arXiv preprint arXiv:2401.06080.",
|
| 1097 |
+
"Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022. Self-instruct: Aligning language models with self-generated instructions. arXiv preprint arXiv:2212.10560.",
|
| 1098 |
+
"Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A Smith, Mari Ostendorf, and Hannaneh Hajishirzi. 2023. Fine-grained human feedback gives better rewards for language model training. arXiv preprint arXiv:2306.01693.",
|
| 1099 |
+
"Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. 2023. Rrhf: Rank responses to align language models with human feedback without tears. arXiv preprint arXiv:2304.05302.",
|
| 1100 |
+
"Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023."
|
| 1101 |
+
],
|
| 1102 |
+
"bbox": [
|
| 1103 |
+
115,
|
| 1104 |
+
85,
|
| 1105 |
+
489,
|
| 1106 |
+
917
|
| 1107 |
+
],
|
| 1108 |
+
"page_idx": 5
|
| 1109 |
+
},
|
| 1110 |
+
{
|
| 1111 |
+
"type": "text",
|
| 1112 |
+
"text": "Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685.",
|
| 1113 |
+
"bbox": [
|
| 1114 |
+
527,
|
| 1115 |
+
85,
|
| 1116 |
+
880,
|
| 1117 |
+
112
|
| 1118 |
+
],
|
| 1119 |
+
"page_idx": 5
|
| 1120 |
+
},
|
| 1121 |
+
{
|
| 1122 |
+
"type": "page_number",
|
| 1123 |
+
"text": "678",
|
| 1124 |
+
"bbox": [
|
| 1125 |
+
485,
|
| 1126 |
+
928,
|
| 1127 |
+
515,
|
| 1128 |
+
940
|
| 1129 |
+
],
|
| 1130 |
+
"page_idx": 5
|
| 1131 |
+
},
|
| 1132 |
+
{
|
| 1133 |
+
"type": "text",
|
| 1134 |
+
"text": "A Appendix",
|
| 1135 |
+
"text_level": 1,
|
| 1136 |
+
"bbox": [
|
| 1137 |
+
114,
|
| 1138 |
+
84,
|
| 1139 |
+
238,
|
| 1140 |
+
99
|
| 1141 |
+
],
|
| 1142 |
+
"page_idx": 6
|
| 1143 |
+
},
|
| 1144 |
+
{
|
| 1145 |
+
"type": "text",
|
| 1146 |
+
"text": "A.1 Prompt for Minimal Editing",
|
| 1147 |
+
"text_level": 1,
|
| 1148 |
+
"bbox": [
|
| 1149 |
+
114,
|
| 1150 |
+
109,
|
| 1151 |
+
386,
|
| 1152 |
+
124
|
| 1153 |
+
],
|
| 1154 |
+
"page_idx": 6
|
| 1155 |
+
},
|
| 1156 |
+
{
|
| 1157 |
+
"type": "text",
|
| 1158 |
+
"text": "Figure 2 shows the prompt for Claude-2 to generate the fine-grained dataset by minimal editing.",
|
| 1159 |
+
"bbox": [
|
| 1160 |
+
112,
|
| 1161 |
+
130,
|
| 1162 |
+
823,
|
| 1163 |
+
146
|
| 1164 |
+
],
|
| 1165 |
+
"page_idx": 6
|
| 1166 |
+
},
|
| 1167 |
+
{
|
| 1168 |
+
"type": "text",
|
| 1169 |
+
"text": "Claude-2 Prompt:",
|
| 1170 |
+
"text_level": 1,
|
| 1171 |
+
"bbox": [
|
| 1172 |
+
119,
|
| 1173 |
+
156,
|
| 1174 |
+
238,
|
| 1175 |
+
168
|
| 1176 |
+
],
|
| 1177 |
+
"page_idx": 6
|
| 1178 |
+
},
|
| 1179 |
+
{
|
| 1180 |
+
"type": "text",
|
| 1181 |
+
"text": "Human: Question: <question>",
|
| 1182 |
+
"bbox": [
|
| 1183 |
+
136,
|
| 1184 |
+
173,
|
| 1185 |
+
322,
|
| 1186 |
+
187
|
| 1187 |
+
],
|
| 1188 |
+
"page_idx": 6
|
| 1189 |
+
},
|
| 1190 |
+
{
|
| 1191 |
+
"type": "text",
|
| 1192 |
+
"text": "Input: <input>",
|
| 1193 |
+
"bbox": [
|
| 1194 |
+
137,
|
| 1195 |
+
189,
|
| 1196 |
+
225,
|
| 1197 |
+
200
|
| 1198 |
+
],
|
| 1199 |
+
"page_idx": 6
|
| 1200 |
+
},
|
| 1201 |
+
{
|
| 1202 |
+
"type": "text",
|
| 1203 |
+
"text": "Response: <original response>",
|
| 1204 |
+
"bbox": [
|
| 1205 |
+
137,
|
| 1206 |
+
216,
|
| 1207 |
+
329,
|
| 1208 |
+
229
|
| 1209 |
+
],
|
| 1210 |
+
"page_idx": 6
|
| 1211 |
+
},
|
| 1212 |
+
{
|
| 1213 |
+
"type": "text",
|
| 1214 |
+
"text": "Please make minimal changes to the above response in order to make it a more helpful response to the question. NOTE It is important to only make minimal changes to the response - try to keep most of the response unchanged.",
|
| 1215 |
+
"bbox": [
|
| 1216 |
+
134,
|
| 1217 |
+
244,
|
| 1218 |
+
878,
|
| 1219 |
+
272
|
| 1220 |
+
],
|
| 1221 |
+
"page_idx": 6
|
| 1222 |
+
},
|
| 1223 |
+
{
|
| 1224 |
+
"type": "text",
|
| 1225 |
+
"text": "Assistant: Here is my attempt to make minimal changes to improve the response:",
|
| 1226 |
+
"bbox": [
|
| 1227 |
+
136,
|
| 1228 |
+
287,
|
| 1229 |
+
638,
|
| 1230 |
+
300
|
| 1231 |
+
],
|
| 1232 |
+
"page_idx": 6
|
| 1233 |
+
},
|
| 1234 |
+
{
|
| 1235 |
+
"type": "text",
|
| 1236 |
+
"text": "Figure 2: Prompt for Claude",
|
| 1237 |
+
"bbox": [
|
| 1238 |
+
405,
|
| 1239 |
+
316,
|
| 1240 |
+
589,
|
| 1241 |
+
330
|
| 1242 |
+
],
|
| 1243 |
+
"page_idx": 6
|
| 1244 |
+
},
|
| 1245 |
+
{
|
| 1246 |
+
"type": "text",
|
| 1247 |
+
"text": "A.2 More examples of minimal editing",
|
| 1248 |
+
"text_level": 1,
|
| 1249 |
+
"bbox": [
|
| 1250 |
+
114,
|
| 1251 |
+
356,
|
| 1252 |
+
431,
|
| 1253 |
+
372
|
| 1254 |
+
],
|
| 1255 |
+
"page_idx": 6
|
| 1256 |
+
},
|
| 1257 |
+
{
|
| 1258 |
+
"type": "text",
|
| 1259 |
+
"text": "In this section, we provide more examples of fine-grained annotation via minimal editing. As shown in Figure 3, 4 and 5, the annotators kept most of the response unchanged and only revised the red parts to the yellow parts.",
|
| 1260 |
+
"bbox": [
|
| 1261 |
+
112,
|
| 1262 |
+
376,
|
| 1263 |
+
880,
|
| 1264 |
+
425
|
| 1265 |
+
],
|
| 1266 |
+
"page_idx": 6
|
| 1267 |
+
},
|
| 1268 |
+
{
|
| 1269 |
+
"type": "text",
|
| 1270 |
+
"text": "Question: Explain what a generative adversarial network (GAN) is.",
|
| 1271 |
+
"bbox": [
|
| 1272 |
+
164,
|
| 1273 |
+
445,
|
| 1274 |
+
526,
|
| 1275 |
+
457
|
| 1276 |
+
],
|
| 1277 |
+
"page_idx": 6
|
| 1278 |
+
},
|
| 1279 |
+
{
|
| 1280 |
+
"type": "text",
|
| 1281 |
+
"text": "Response:",
|
| 1282 |
+
"text_level": 1,
|
| 1283 |
+
"bbox": [
|
| 1284 |
+
164,
|
| 1285 |
+
470,
|
| 1286 |
+
228,
|
| 1287 |
+
481
|
| 1288 |
+
],
|
| 1289 |
+
"page_idx": 6
|
| 1290 |
+
},
|
| 1291 |
+
{
|
| 1292 |
+
"type": "text",
|
| 1293 |
+
"text": "A generative adversarial network (GAN) is a type of machine learning algorithm that combines a generative model and a discriminative model in a single framework. The generative model creates data points, while the discriminative model decides whether the generated data points are real or fake. The goal of the generative model is to fool the discriminative model, which encourages the generative model to create data points that are hard to distinguish from the real data. By studying the behavior of the two models, GANs can be used for tasks such as image generation, natural language processing, and more.",
|
| 1294 |
+
"bbox": [
|
| 1295 |
+
163,
|
| 1296 |
+
482,
|
| 1297 |
+
815,
|
| 1298 |
+
556
|
| 1299 |
+
],
|
| 1300 |
+
"page_idx": 6
|
| 1301 |
+
},
|
| 1302 |
+
{
|
| 1303 |
+
"type": "text",
|
| 1304 |
+
"text": "Revised response (Claude-2):",
|
| 1305 |
+
"text_level": 1,
|
| 1306 |
+
"bbox": [
|
| 1307 |
+
164,
|
| 1308 |
+
569,
|
| 1309 |
+
336,
|
| 1310 |
+
580
|
| 1311 |
+
],
|
| 1312 |
+
"page_idx": 6
|
| 1313 |
+
},
|
| 1314 |
+
{
|
| 1315 |
+
"type": "text",
|
| 1316 |
+
"text": "A generative adversarial network (GAN) is a type of machine learning algorithm that combines a generative model and a discriminative model in a single framework. The generative model creates synthetic data points, while the discriminative model decides whether the generated data points are real or artificially generated. The goal of the generative model is to generate data that fools the discriminative model into thinking it is real, which encourages the generative model to create more realistic synthetic data that is hard to distinguish from the real data. By studying the interaction between the two models as they compete, GANs can be used for tasks such as generating realistic images, improving natural language processing systems, and more. The GAN framework allows for generative models to create higher quality synthesized data.",
|
| 1317 |
+
"bbox": [
|
| 1318 |
+
163,
|
| 1319 |
+
581,
|
| 1320 |
+
830,
|
| 1321 |
+
668
|
| 1322 |
+
],
|
| 1323 |
+
"page_idx": 6
|
| 1324 |
+
},
|
| 1325 |
+
{
|
| 1326 |
+
"type": "text",
|
| 1327 |
+
"text": "Figure 3: Example of fine-grained annotation via minimal editing: edit words may cause safety issues.",
|
| 1328 |
+
"bbox": [
|
| 1329 |
+
181,
|
| 1330 |
+
686,
|
| 1331 |
+
811,
|
| 1332 |
+
700
|
| 1333 |
+
],
|
| 1334 |
+
"page_idx": 6
|
| 1335 |
+
},
|
| 1336 |
+
{
|
| 1337 |
+
"type": "page_number",
|
| 1338 |
+
"text": "679",
|
| 1339 |
+
"bbox": [
|
| 1340 |
+
485,
|
| 1341 |
+
928,
|
| 1342 |
+
515,
|
| 1343 |
+
940
|
| 1344 |
+
],
|
| 1345 |
+
"page_idx": 6
|
| 1346 |
+
},
|
| 1347 |
+
{
|
| 1348 |
+
"type": "text",
|
| 1349 |
+
"text": "Question",
|
| 1350 |
+
"text_level": 1,
|
| 1351 |
+
"bbox": [
|
| 1352 |
+
164,
|
| 1353 |
+
178,
|
| 1354 |
+
220,
|
| 1355 |
+
186
|
| 1356 |
+
],
|
| 1357 |
+
"page_idx": 7
|
| 1358 |
+
},
|
| 1359 |
+
{
|
| 1360 |
+
"type": "text",
|
| 1361 |
+
"text": "Explain how machine learning algorithms work.",
|
| 1362 |
+
"bbox": [
|
| 1363 |
+
164,
|
| 1364 |
+
189,
|
| 1365 |
+
421,
|
| 1366 |
+
200
|
| 1367 |
+
],
|
| 1368 |
+
"page_idx": 7
|
| 1369 |
+
},
|
| 1370 |
+
{
|
| 1371 |
+
"type": "text",
|
| 1372 |
+
"text": "Response",
|
| 1373 |
+
"text_level": 1,
|
| 1374 |
+
"bbox": [
|
| 1375 |
+
164,
|
| 1376 |
+
214,
|
| 1377 |
+
226,
|
| 1378 |
+
224
|
| 1379 |
+
],
|
| 1380 |
+
"page_idx": 7
|
| 1381 |
+
},
|
| 1382 |
+
{
|
| 1383 |
+
"type": "text",
|
| 1384 |
+
"text": "Machine learning algorithms are a type of algorithms that use statistical techniques to learn and improve performance on a specific task. They learn from sample data and identify patterns in data to make data-driven decisions. Machine learning algorithms can be divided into two categories: supervised learning and unsupervised learning. In supervised learning, the algorithm is trained on labeled data and is able to make inferences based on the training data. In unsupervised learning, the algorithm is trained on unlabeled data and is able to identify patterns in the data.",
|
| 1385 |
+
"bbox": [
|
| 1386 |
+
163,
|
| 1387 |
+
225,
|
| 1388 |
+
831,
|
| 1389 |
+
288
|
| 1390 |
+
],
|
| 1391 |
+
"page_idx": 7
|
| 1392 |
+
},
|
| 1393 |
+
{
|
| 1394 |
+
"type": "text",
|
| 1395 |
+
"text": "Revised response (Cladue-2):",
|
| 1396 |
+
"text_level": 1,
|
| 1397 |
+
"bbox": [
|
| 1398 |
+
164,
|
| 1399 |
+
300,
|
| 1400 |
+
336,
|
| 1401 |
+
311
|
| 1402 |
+
],
|
| 1403 |
+
"page_idx": 7
|
| 1404 |
+
},
|
| 1405 |
+
{
|
| 1406 |
+
"type": "text",
|
| 1407 |
+
"text": "Machine learning algorithms are a type of algorithms that use statistical techniques to learn and improve performance on a specific task over time. They learn from sample data and identify patterns in data to make data-driven decisions without being explicitly programmed for the task. Machine learning algorithms can be divided into two main categories: supervised learning and unsupervised learning. In supervised learning, the algorithm is trained on labeled data, where the data is already tagged with the right answers. It is then able to make inferences on new unlabeled data based on its training. In unsupervised learning, the algorithm is trained on unlabeled data and is able to identify patterns and relationships in the data on its own, without any human guidance. The key aspect is that machine learning algorithms learn from experience and improve their performance independently.",
|
| 1408 |
+
"bbox": [
|
| 1409 |
+
163,
|
| 1410 |
+
313,
|
| 1411 |
+
828,
|
| 1412 |
+
411
|
| 1413 |
+
],
|
| 1414 |
+
"page_idx": 7
|
| 1415 |
+
},
|
| 1416 |
+
{
|
| 1417 |
+
"type": "text",
|
| 1418 |
+
"text": "Figure 4: Example of fine-grained annotation via minimal editing: provide more explanation on academic words.",
|
| 1419 |
+
"bbox": [
|
| 1420 |
+
149,
|
| 1421 |
+
428,
|
| 1422 |
+
843,
|
| 1423 |
+
441
|
| 1424 |
+
],
|
| 1425 |
+
"page_idx": 7
|
| 1426 |
+
},
|
| 1427 |
+
{
|
| 1428 |
+
"type": "text",
|
| 1429 |
+
"text": "Question",
|
| 1430 |
+
"text_level": 1,
|
| 1431 |
+
"bbox": [
|
| 1432 |
+
245,
|
| 1433 |
+
642,
|
| 1434 |
+
304,
|
| 1435 |
+
652
|
| 1436 |
+
],
|
| 1437 |
+
"page_idx": 7
|
| 1438 |
+
},
|
| 1439 |
+
{
|
| 1440 |
+
"type": "text",
|
| 1441 |
+
"text": "Produce an alternate version of the following quote using a different literary device",
|
| 1442 |
+
"bbox": [
|
| 1443 |
+
245,
|
| 1444 |
+
656,
|
| 1445 |
+
719,
|
| 1446 |
+
667
|
| 1447 |
+
],
|
| 1448 |
+
"page_idx": 7
|
| 1449 |
+
},
|
| 1450 |
+
{
|
| 1451 |
+
"type": "text",
|
| 1452 |
+
"text": "Input",
|
| 1453 |
+
"text_level": 1,
|
| 1454 |
+
"bbox": [
|
| 1455 |
+
245,
|
| 1456 |
+
682,
|
| 1457 |
+
280,
|
| 1458 |
+
692
|
| 1459 |
+
],
|
| 1460 |
+
"page_idx": 7
|
| 1461 |
+
},
|
| 1462 |
+
{
|
| 1463 |
+
"type": "text",
|
| 1464 |
+
"text": "Better to have loved and lost, than never to have loved at all.",
|
| 1465 |
+
"bbox": [
|
| 1466 |
+
245,
|
| 1467 |
+
695,
|
| 1468 |
+
591,
|
| 1469 |
+
706
|
| 1470 |
+
],
|
| 1471 |
+
"page_idx": 7
|
| 1472 |
+
},
|
| 1473 |
+
{
|
| 1474 |
+
"type": "text",
|
| 1475 |
+
"text": "Response",
|
| 1476 |
+
"text_level": 1,
|
| 1477 |
+
"bbox": [
|
| 1478 |
+
245,
|
| 1479 |
+
721,
|
| 1480 |
+
309,
|
| 1481 |
+
732
|
| 1482 |
+
],
|
| 1483 |
+
"page_idx": 7
|
| 1484 |
+
},
|
| 1485 |
+
{
|
| 1486 |
+
"type": "text",
|
| 1487 |
+
"text": "It is better to have had a flame and lost it, than never to have been burned at all.",
|
| 1488 |
+
"bbox": [
|
| 1489 |
+
245,
|
| 1490 |
+
734,
|
| 1491 |
+
702,
|
| 1492 |
+
745
|
| 1493 |
+
],
|
| 1494 |
+
"page_idx": 7
|
| 1495 |
+
},
|
| 1496 |
+
{
|
| 1497 |
+
"type": "text",
|
| 1498 |
+
"text": "Revised response (Claude-2):",
|
| 1499 |
+
"text_level": 1,
|
| 1500 |
+
"bbox": [
|
| 1501 |
+
245,
|
| 1502 |
+
760,
|
| 1503 |
+
426,
|
| 1504 |
+
771
|
| 1505 |
+
],
|
| 1506 |
+
"page_idx": 7
|
| 1507 |
+
},
|
| 1508 |
+
{
|
| 1509 |
+
"type": "text",
|
| 1510 |
+
"text": "It is better to have had a flame and been singed, than never to have felt its warmth at all.",
|
| 1511 |
+
"bbox": [
|
| 1512 |
+
245,
|
| 1513 |
+
772,
|
| 1514 |
+
749,
|
| 1515 |
+
785
|
| 1516 |
+
],
|
| 1517 |
+
"page_idx": 7
|
| 1518 |
+
},
|
| 1519 |
+
{
|
| 1520 |
+
"type": "text",
|
| 1521 |
+
"text": "Figure 5: Example of fine-grained annotation via minimal editing: change the literary device that follows the instruction better.",
|
| 1522 |
+
"bbox": [
|
| 1523 |
+
112,
|
| 1524 |
+
807,
|
| 1525 |
+
880,
|
| 1526 |
+
821
|
| 1527 |
+
],
|
| 1528 |
+
"page_idx": 7
|
| 1529 |
+
},
|
| 1530 |
+
{
|
| 1531 |
+
"type": "page_number",
|
| 1532 |
+
"text": "680",
|
| 1533 |
+
"bbox": [
|
| 1534 |
+
485,
|
| 1535 |
+
928,
|
| 1536 |
+
515,
|
| 1537 |
+
939
|
| 1538 |
+
],
|
| 1539 |
+
"page_idx": 7
|
| 1540 |
+
}
|
| 1541 |
+
]
|
2024/Aligning Large Language Models via Fine-grained Supervision/d2093850-e28f-43df-a51c-a0b07b648dd6_model.json
ADDED
|
@@ -0,0 +1,1778 @@
|
| 1 |
+
[
|
| 2 |
+
[
|
| 3 |
+
{
|
| 4 |
+
"type": "title",
|
| 5 |
+
"bbox": [
|
| 6 |
+
0.174,
|
| 7 |
+
0.09,
|
| 8 |
+
0.826,
|
| 9 |
+
0.112
|
| 10 |
+
],
|
| 11 |
+
"angle": 0,
|
| 12 |
+
"content": "Aligning Large Language Models via Fine-grained Supervision"
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"bbox": [
|
| 17 |
+
0.189,
|
| 18 |
+
0.13,
|
| 19 |
+
0.815,
|
| 20 |
+
0.15
|
| 21 |
+
],
|
| 22 |
+
"angle": 0,
|
| 23 |
+
"content": "Dehong Xu\\(^{1*}\\), Liang Qiu\\(^{2*}\\), Minseok Kim\\(^{2}\\), Faisal Ladhak\\(^{2}\\), Jaeyoung Do\\(^{3}\\)"
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"bbox": [
|
| 28 |
+
0.174,
|
| 29 |
+
0.164,
|
| 30 |
+
0.828,
|
| 31 |
+
0.214
|
| 32 |
+
],
|
| 33 |
+
"angle": 0,
|
| 34 |
+
"content": "\\(^{1}\\)Department of Statistics, UCLA \\(^{2}\\)Amazon \n\\(^{3}\\)Department of Electrical and Computer Engineering, Seoul National University Correspondence: xudehong1996@ucla.edu, liangqxx@amazon.com"
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "title",
|
| 38 |
+
"bbox": [
|
| 39 |
+
0.261,
|
| 40 |
+
0.253,
|
| 41 |
+
0.341,
|
| 42 |
+
0.269
|
| 43 |
+
],
|
| 44 |
+
"angle": 0,
|
| 45 |
+
"content": "Abstract"
|
| 46 |
+
},
|
| 47 |
+
{
|
| 48 |
+
"type": "text",
|
| 49 |
+
"bbox": [
|
| 50 |
+
0.145,
|
| 51 |
+
0.284,
|
| 52 |
+
0.461,
|
| 53 |
+
0.71
|
| 54 |
+
],
|
| 55 |
+
"angle": 0,
|
| 56 |
+
"content": "Pre-trained large-scale language models (LLMs) excel at producing coherent articles, yet their outputs may be untruthful, toxic, or fail to align with user expectations. Current approaches focus on using reinforcement learning with human feedback (RLHF) to improve model alignment, which works by transforming coarse human preferences of LLM outputs into a feedback signal that guides the model learning process. However, because this approach operates on sequence-level feedback, it lacks the precision to identify the exact parts of the output affecting user preferences. To address this gap, we propose a method to enhance LLM alignment through fine-grained token-level supervision. Specifically, we ask annotators to minimally edit less preferred responses within the standard reward modeling dataset to make them more favorable, ensuring changes are made only where necessary while retaining most of the original content. The refined dataset is used to train a token-level reward model, which is then used for training our fine-grained Proximal Policy Optimization (PPO) model. Our experiment results demonstrate that this approach can achieve up to an absolute improvement of \\(5.1\\%\\) in LLM performance, in terms of win rate against the reference model, compared with the traditional PPO model."
|
| 57 |
+
},
|
| 58 |
+
{
|
| 59 |
+
"type": "title",
|
| 60 |
+
"bbox": [
|
| 61 |
+
0.115,
|
| 62 |
+
0.727,
|
| 63 |
+
0.262,
|
| 64 |
+
0.743
|
| 65 |
+
],
|
| 66 |
+
"angle": 0,
|
| 67 |
+
"content": "1 Introduction"
|
| 68 |
+
},
|
| 69 |
+
{
|
| 70 |
+
"type": "text",
|
| 71 |
+
"bbox": [
|
| 72 |
+
0.113,
|
| 73 |
+
0.755,
|
| 74 |
+
0.49,
|
| 75 |
+
0.882
|
| 76 |
+
],
|
| 77 |
+
"angle": 0,
|
| 78 |
+
"content": "One key objective in advancing large language models (LLMs) is to ensure safe, beneficial human interaction. However, current pre-trained models, mostly trained on web and book texts, often generate biased or toxic text, misaligning with human intentions. To address this issue, numerous studies (Ouyang et al., 2022; Rafailov et al., 2023; Bai et al., 2022b,a; Yuan et al., 2023; Touvron"
|
| 79 |
+
},
|
| 80 |
+
{
|
| 81 |
+
"type": "text",
|
| 82 |
+
"bbox": [
|
| 83 |
+
0.508,
|
| 84 |
+
0.254,
|
| 85 |
+
0.885,
|
| 86 |
+
0.429
|
| 87 |
+
],
|
| 88 |
+
"angle": 0,
|
| 89 |
+
"content": "et al., 2023; Ramamurthy et al., 2022) have integrated human feedback into the training process. A significant advancement is reinforcement learning from human feedback (RLHF) (Ouyang et al., 2022), which usually consists of two phases: First, a reward model (RM) is trained from preference data, which comprises various responses alongside their human-assigned preference scores for a given prompt. Then, this reward model is applied to optimize a final model using Proximal Policy Optimization (PPO) (Schulman et al., 2017)."
|
| 90 |
+
},
|
| 91 |
+
{
|
| 92 |
+
"type": "text",
|
| 93 |
+
"bbox": [
|
| 94 |
+
0.508,
|
| 95 |
+
0.431,
|
| 96 |
+
0.885,
|
| 97 |
+
0.735
|
| 98 |
+
],
|
| 99 |
+
"angle": 0,
|
| 100 |
+
"content": "Recent works (Wu et al., 2023; Rafailov et al., 2023; Fernandes et al., 2023; Guo et al., 2023; Wang et al., 2024) discovered limitations of the current RM, specifically their misalignment with human values. This misalignment stems from two main issues: \\((i)\\) the presence of incorrect and ambiguous preference pairs in the human-labeled datasets; \\((ii)\\) the limited insight inherent in sequence-level feedback. Specifically, from a data collection standpoint, the task of comparing the overall quality of model outputs is challenging for human annotators when outputs exhibit both desired and undesired behaviors in different parts. Moreover from the RM perspective, the reliance on preference-based data labeling leads to sparse training signals. This sparsity discourages the model's ability to distinguish finer details between responses and further limits the capacity for reward optimization."
|
| 101 |
+
},
|
| 102 |
+
{
|
| 103 |
+
"type": "text",
|
| 104 |
+
"bbox": [
|
| 105 |
+
0.509,
|
| 106 |
+
0.736,
|
| 107 |
+
0.884,
|
| 108 |
+
0.768
|
| 109 |
+
],
|
| 110 |
+
"angle": 0,
|
| 111 |
+
"content": "To tackle this challenge, we propose the following two-fold contributions as illustrated in Figure 1:"
|
| 112 |
+
},
|
| 113 |
+
{
|
| 114 |
+
"type": "text",
|
| 115 |
+
"bbox": [
|
| 116 |
+
0.509,
|
| 117 |
+
0.775,
|
| 118 |
+
0.883,
|
| 119 |
+
0.87
|
| 120 |
+
],
|
| 121 |
+
"angle": 0,
|
| 122 |
+
"content": "- We introduce a new data collection approach that asks annotators to edit responses from existing RM datasets to be more preferable. By comparing the original and edited responses, we obtain detailed token-level insights that are essential for training our fine-tuned reward model."
|
| 123 |
+
},
|
| 124 |
+
{
|
| 125 |
+
"type": "text",
|
| 126 |
+
"bbox": [
|
| 127 |
+
0.509,
|
| 128 |
+
0.872,
|
| 129 |
+
0.883,
|
| 130 |
+
0.919
|
| 131 |
+
],
|
| 132 |
+
"angle": 0,
|
| 133 |
+
"content": "- We propose a new token-level reward modeling approach that provides reward signals at the token level. Different from coarse-grained"
|
| 134 |
+
},
|
| 135 |
+
{
|
| 136 |
+
"type": "list",
|
| 137 |
+
"bbox": [
|
| 138 |
+
0.509,
|
| 139 |
+
0.775,
|
| 140 |
+
0.883,
|
| 141 |
+
0.919
|
| 142 |
+
],
|
| 143 |
+
"angle": 0,
|
| 144 |
+
"content": null
|
| 145 |
+
},
|
| 146 |
+
{
|
| 147 |
+
"type": "page_footnote",
|
| 148 |
+
"bbox": [
|
| 149 |
+
0.141,
|
| 150 |
+
0.892,
|
| 151 |
+
0.303,
|
| 152 |
+
0.905
|
| 153 |
+
],
|
| 154 |
+
"angle": 0,
|
| 155 |
+
"content": "* Corresponding authors."
|
| 156 |
+
},
|
| 157 |
+
{
|
| 158 |
+
"type": "page_footnote",
|
| 159 |
+
"bbox": [
|
| 160 |
+
0.141,
|
| 161 |
+
0.905,
|
| 162 |
+
0.486,
|
| 163 |
+
0.919
|
| 164 |
+
],
|
| 165 |
+
"angle": 0,
|
| 166 |
+
"content": "† Author performed the work while interned at Amazon."
|
| 167 |
+
},
|
| 168 |
+
{
|
| 169 |
+
"type": "list",
|
| 170 |
+
"bbox": [
|
| 171 |
+
0.141,
|
| 172 |
+
0.892,
|
| 173 |
+
0.486,
|
| 174 |
+
0.919
|
| 175 |
+
],
|
| 176 |
+
"angle": 0,
|
| 177 |
+
"content": null
|
| 178 |
+
},
|
| 179 |
+
{
|
| 180 |
+
"type": "page_number",
|
| 181 |
+
"bbox": [
|
| 182 |
+
0.486,
|
| 183 |
+
0.928,
|
| 184 |
+
0.516,
|
| 185 |
+
0.94
|
| 186 |
+
],
|
| 187 |
+
"angle": 0,
|
| 188 |
+
"content": "673"
|
| 189 |
+
},
|
| 190 |
+
{
|
| 191 |
+
"type": "footer",
|
| 192 |
+
"bbox": [
|
| 193 |
+
0.097,
|
| 194 |
+
0.946,
|
| 195 |
+
0.902,
|
| 196 |
+
0.973
|
| 197 |
+
],
|
| 198 |
+
"angle": 0,
|
| 199 |
+
"content": "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 673-680 August 11-16, 2024 ©2024 Association for Computational Linguistics"
|
| 200 |
+
}
|
| 201 |
+
],
|
| 202 |
+
[
|
| 203 |
+
{
|
| 204 |
+
"type": "header",
|
| 205 |
+
"bbox": [
|
| 206 |
+
0.21,
|
| 207 |
+
0.083,
|
| 208 |
+
0.368,
|
| 209 |
+
0.095
|
| 210 |
+
],
|
| 211 |
+
"angle": 0,
|
| 212 |
+
"content": "Sequence-level reward"
|
| 213 |
+
},
|
| 214 |
+
{
|
| 215 |
+
"type": "header",
|
| 216 |
+
"bbox": [
|
| 217 |
+
0.549,
|
| 218 |
+
0.083,
|
| 219 |
+
0.82,
|
| 220 |
+
0.096
|
| 221 |
+
],
|
| 222 |
+
"angle": 0,
|
| 223 |
+
"content": "Fine-grained reward via minimal editing"
|
| 224 |
+
},
|
| 225 |
+
{
|
| 226 |
+
"type": "title",
|
| 227 |
+
"bbox": [
|
| 228 |
+
0.123,
|
| 229 |
+
0.097,
|
| 230 |
+
0.278,
|
| 231 |
+
0.108
|
| 232 |
+
],
|
| 233 |
+
"angle": 0,
|
| 234 |
+
"content": "Phase 1: Data collection"
|
| 235 |
+
},
|
| 236 |
+
{
|
| 237 |
+
"type": "text",
|
| 238 |
+
"bbox": [
|
| 239 |
+
0.121,
|
| 240 |
+
0.111,
|
| 241 |
+
0.532,
|
| 242 |
+
0.121
|
| 243 |
+
],
|
| 244 |
+
"angle": 0,
|
| 245 |
+
"content": "Prompt: What are the top three most abundant elements found in the Earth's crust?"
|
| 246 |
+
},
|
| 247 |
+
{
|
| 248 |
+
"type": "title",
|
| 249 |
+
"bbox": [
|
| 250 |
+
0.121,
|
| 251 |
+
0.124,
|
| 252 |
+
0.223,
|
| 253 |
+
0.134
|
| 254 |
+
],
|
| 255 |
+
"angle": 0,
|
| 256 |
+
"content": "Model response A:"
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"type": "text",
|
| 260 |
+
"bbox": [
|
| 261 |
+
0.121,
|
| 262 |
+
0.135,
|
| 263 |
+
0.477,
|
| 264 |
+
0.163
|
| 265 |
+
],
|
| 266 |
+
"angle": 0,
|
| 267 |
+
"content": "The three most abundant elements are oxygen, silicon, and iron. Oxygen combines with other elements to form oxides and silicon is the most common mineral ..."
|
| 268 |
+
},
|
| 269 |
+
{
|
| 270 |
+
"type": "title",
|
| 271 |
+
"bbox": [
|
| 272 |
+
0.121,
|
| 273 |
+
0.171,
|
| 274 |
+
0.223,
|
| 275 |
+
0.182
|
| 276 |
+
],
|
| 277 |
+
"angle": 0,
|
| 278 |
+
"content": "Model response B:"
|
| 279 |
+
},
|
| 280 |
+
{
|
| 281 |
+
"type": "text",
|
| 282 |
+
"bbox": [
|
| 283 |
+
0.121,
|
| 284 |
+
0.182,
|
| 285 |
+
0.427,
|
| 286 |
+
0.202
|
| 287 |
+
],
|
| 288 |
+
"angle": 0,
|
| 289 |
+
"content": "The Earth's crust is rich in elements, with oxygen, silicon, and calcium. Oxygen makes up about \\(46.6\\%\\) of the Earth's crust ..."
|
| 290 |
+
},
|
| 291 |
+
{
|
| 292 |
+
"type": "title",
|
| 293 |
+
"bbox": [
|
| 294 |
+
0.121,
|
| 295 |
+
0.214,
|
| 296 |
+
0.216,
|
| 297 |
+
0.224
|
| 298 |
+
],
|
| 299 |
+
"angle": 0,
|
| 300 |
+
"content": "Human feedback:"
|
| 301 |
+
},
|
| 302 |
+
{
|
| 303 |
+
"type": "image",
|
| 304 |
+
"bbox": [
|
| 305 |
+
0.211,
|
| 306 |
+
0.229,
|
| 307 |
+
0.25,
|
| 308 |
+
0.257
|
| 309 |
+
],
|
| 310 |
+
"angle": 0,
|
| 311 |
+
"content": null
|
| 312 |
+
},
|
| 313 |
+
{
|
| 314 |
+
"type": "image",
|
| 315 |
+
"bbox": [
|
| 316 |
+
0.268,
|
| 317 |
+
0.236,
|
| 318 |
+
0.345,
|
| 319 |
+
0.256
|
| 320 |
+
],
|
| 321 |
+
"angle": 0,
|
| 322 |
+
"content": null
|
| 323 |
+
},
|
| 324 |
+
{
|
| 325 |
+
"type": "title",
|
| 326 |
+
"bbox": [
|
| 327 |
+
0.122,
|
| 328 |
+
0.271,
|
| 329 |
+
0.293,
|
| 330 |
+
0.284
|
| 331 |
+
],
|
| 332 |
+
"angle": 0,
|
| 333 |
+
"content": "Phase 2: Reward modeling"
|
| 334 |
+
},
|
| 335 |
+
{
|
| 336 |
+
"type": "text",
|
| 337 |
+
"bbox": [
|
| 338 |
+
0.122,
|
| 339 |
+
0.286,
|
| 340 |
+
0.392,
|
| 341 |
+
0.296
|
| 342 |
+
],
|
| 343 |
+
"angle": 0,
|
| 344 |
+
"content": "New prompt: What is the smallest ocean in the world?"
|
| 345 |
+
},
|
| 346 |
+
{
|
| 347 |
+
"type": "image",
|
| 348 |
+
"bbox": [
|
| 349 |
+
0.134,
|
| 350 |
+
0.3,
|
| 351 |
+
0.188,
|
| 352 |
+
0.347
|
| 353 |
+
],
|
| 354 |
+
"angle": 0,
|
| 355 |
+
"content": null
|
| 356 |
+
},
|
| 357 |
+
{
|
| 358 |
+
"type": "image_caption",
|
| 359 |
+
"bbox": [
|
| 360 |
+
0.122,
|
| 361 |
+
0.349,
|
| 362 |
+
0.225,
|
| 363 |
+
0.358
|
| 364 |
+
],
|
| 365 |
+
"angle": 0,
|
| 366 |
+
"content": "Sequence-level RM"
|
| 367 |
+
},
|
| 368 |
+
{
|
| 369 |
+
"type": "text",
|
| 370 |
+
"bbox": [
|
| 371 |
+
0.203,
|
| 372 |
+
0.307,
|
| 373 |
+
0.471,
|
| 374 |
+
0.339
|
| 375 |
+
],
|
| 376 |
+
"angle": 0,
|
| 377 |
+
"content": "The Arctic Ocean is the smallest ocean in the world. It locates in the northmost part of the Earth and covers an area of about 12.5 million square kilometers."
|
| 378 |
+
},
|
| 379 |
+
{
|
| 380 |
+
"type": "text",
|
| 381 |
+
"bbox": [
|
| 382 |
+
0.33,
|
| 383 |
+
0.346,
|
| 384 |
+
0.45,
|
| 385 |
+
0.356
|
| 386 |
+
],
|
| 387 |
+
"angle": 0,
|
| 388 |
+
"content": "Sequence reward: -0.2"
|
| 389 |
+
},
|
| 390 |
+
{
|
| 391 |
+
"type": "title",
|
| 392 |
+
"bbox": [
|
| 393 |
+
0.492,
|
| 394 |
+
0.125,
|
| 395 |
+
0.582,
|
| 396 |
+
0.135
|
| 397 |
+
],
|
| 398 |
+
"angle": 0,
|
| 399 |
+
"content": "Model response:"
|
| 400 |
+
},
|
| 401 |
+
{
|
| 402 |
+
"type": "text",
|
| 403 |
+
"bbox": [
|
| 404 |
+
0.491,
|
| 405 |
+
0.135,
|
| 406 |
+
0.875,
|
| 407 |
+
0.177
|
| 408 |
+
],
|
| 409 |
+
"angle": 0,
|
| 410 |
+
"content": "The three most abundant elements are oxygen, silicon, and iron. Oxygen combines with other elements to form oxides and silicon is the most common mineral group to form ricks like quartz. Iron is vital for forming many minerals and rocks, contributing to the Earth's magnetic field and core composition."
|
| 411 |
+
},
|
| 412 |
+
{
|
| 413 |
+
"type": "title",
|
| 414 |
+
"bbox": [
|
| 415 |
+
0.492,
|
| 416 |
+
0.194,
|
| 417 |
+
0.65,
|
| 418 |
+
0.204
|
| 419 |
+
],
|
| 420 |
+
"angle": 0,
|
| 421 |
+
"content": "Revised response via editing:"
|
| 422 |
+
},
|
| 423 |
+
{
|
| 424 |
+
"type": "text",
|
| 425 |
+
"bbox": [
|
| 426 |
+
0.491,
|
| 427 |
+
0.205,
|
| 428 |
+
0.878,
|
| 429 |
+
0.257
|
| 430 |
+
],
|
| 431 |
+
"angle": 0,
|
| 432 |
+
"content": "The three most abundant elements are oxygen, silicon, and aluminum. Oxygen combines with other elements to form oxides and silicon is the most common mineral group to form ricks like quartz. Aluminum is vital for forming many minerals and is significant in the crust's overall composition, found in clays and feldspars, contributing to the crust's structure and stability."
|
| 433 |
+
},
|
| 434 |
+
{
|
| 435 |
+
"type": "text",
|
| 436 |
+
"bbox": [
|
| 437 |
+
0.658,
|
| 438 |
+
0.29,
|
| 439 |
+
0.741,
|
| 440 |
+
0.3
|
| 441 |
+
],
|
| 442 |
+
"angle": 0,
|
| 443 |
+
"content": "Token reward: +1"
|
| 444 |
+
},
|
| 445 |
+
{
|
| 446 |
+
"type": "text",
|
| 447 |
+
"bbox": [
|
| 448 |
+
0.597,
|
| 449 |
+
0.308,
|
| 450 |
+
0.866,
|
| 451 |
+
0.34
|
| 452 |
+
],
|
| 453 |
+
"angle": 0,
|
| 454 |
+
"content": "The Arctic Ocean is the smallest ocean in the world. It locates in the northmost part of the Earth and covers an area of about 12.5 million square kilometers."
|
| 455 |
+
},
|
| 456 |
+
{
|
| 457 |
+
"type": "text",
|
| 458 |
+
"bbox": [
|
| 459 |
+
0.64,
|
| 460 |
+
0.346,
|
| 461 |
+
0.721,
|
| 462 |
+
0.356
|
| 463 |
+
],
|
| 464 |
+
"angle": 0,
|
| 465 |
+
"content": "Token reward: -2"
|
| 466 |
+
},
|
| 467 |
+
{
|
| 468 |
+
"type": "image_caption",
|
| 469 |
+
"bbox": [
|
| 470 |
+
0.113,
|
| 471 |
+
0.372,
|
| 472 |
+
0.885,
|
| 473 |
+
0.422
|
| 474 |
+
],
|
| 475 |
+
"angle": 0,
|
| 476 |
+
"content": "Figure 1: The comparison between sequence-level reward modeling (Left) and our method of fine-grained reward modeling via minimal editing (Right). Our approach diverges from sequence-level reward modeling in two key aspects: (1) Data Collection, where we ask a human or LLM to edit the model response; and (2) Reward Modeling, which enables our model to assign rewards to individual tokens, as opposed to assessing the entire sequence collectively."
|
| 477 |
+
},
|
| 478 |
+
{
|
| 479 |
+
"type": "text",
|
| 480 |
+
"bbox": [
|
| 481 |
+
0.128,
|
| 482 |
+
0.447,
|
| 483 |
+
0.489,
|
| 484 |
+
0.51
|
| 485 |
+
],
|
| 486 |
+
"angle": 0,
|
| 487 |
+
"content": "sequence-level rewards, our approach offers more granular feedback, pinpointing the specific parts of a response that are effective or need improvement, which hence helps RL optimization."
|
| 488 |
+
},
|
| 489 |
+
{
|
| 490 |
+
"type": "text",
|
| 491 |
+
"bbox": [
|
| 492 |
+
0.113,
|
| 493 |
+
0.523,
|
| 494 |
+
0.489,
|
| 495 |
+
0.603
|
| 496 |
+
],
|
| 497 |
+
"angle": 0,
|
| 498 |
+
"content": "Experiment results using AlpacaFarm (Dubois et al., 2023) environment indicate that our proposed approach improves LLMs' performance up to \\(5.1\\%\\) against the baseline in terms of win rate, given the same amount of data for training."
|
| 499 |
+
},
|
| 500 |
+
{
|
| 501 |
+
"type": "title",
|
| 502 |
+
"bbox": [
|
| 503 |
+
0.114,
|
| 504 |
+
0.616,
|
| 505 |
+
0.218,
|
| 506 |
+
0.631
|
| 507 |
+
],
|
| 508 |
+
"angle": 0,
|
| 509 |
+
"content": "2 Method"
|
| 510 |
+
},
|
| 511 |
+
{
|
| 512 |
+
"type": "text",
|
| 513 |
+
"bbox": [
|
| 514 |
+
0.113,
|
| 515 |
+
0.643,
|
| 516 |
+
0.49,
|
| 517 |
+
0.69
|
| 518 |
+
],
|
| 519 |
+
"angle": 0,
|
| 520 |
+
"content": "In this section, we introduce our approach to fine-grained data collection through editing and token-level reward modeling."
|
| 521 |
+
},
|
| 522 |
+
{
|
| 523 |
+
"type": "title",
|
| 524 |
+
"bbox": [
|
| 525 |
+
0.114,
|
| 526 |
+
0.703,
|
| 527 |
+
0.48,
|
| 528 |
+
0.734
|
| 529 |
+
],
|
| 530 |
+
"angle": 0,
|
| 531 |
+
"content": "2.1 Fine-grained data collection via minimal editing"
|
| 532 |
+
},
|
| 533 |
+
{
|
| 534 |
+
"type": "text",
|
| 535 |
+
"bbox": [
|
| 536 |
+
0.113,
|
| 537 |
+
0.741,
|
| 538 |
+
0.49,
|
| 539 |
+
0.92
|
| 540 |
+
],
|
| 541 |
+
"angle": 0,
|
| 542 |
+
"content": "The conventional RLHF pipeline, as outlined in prior works (Ouyang et al., 2022; Dubois et al., 2023), involves three key stages: supervised finetuning (SFT), reward modeling (RM), and proximal policy optimization (PPO). In the RM phase, the standard practice entails collecting a dataset of human evaluations comparing two or more model outputs in response to a series of prompts. The dataset is represented as \\(\\mathcal{D} = \\{x^{(i)},y_w^{(i)},y_l^{(i)}\\}_{i = 1}^N\\) where \\(x\\) denotes a prompt and \\((y_{w},y_{l})\\) indicates the preferred and less preferred responses, respectively."
|
| 543 |
+
},
|
| 544 |
+
{
|
| 545 |
+
"type": "text",
|
| 546 |
+
"bbox": [
|
| 547 |
+
0.508,
|
| 548 |
+
0.446,
|
| 549 |
+
0.885,
|
| 550 |
+
0.543
|
| 551 |
+
],
|
| 552 |
+
"angle": 0,
|
| 553 |
+
"content": "Utilizing such a dataset, earlier RLHF research focused on developing a reward model \\( R_{\\phi} \\) that determines the more favored model output. This holistic reward model associates each input prompt \\( x \\) and its corresponding output \\( y \\) with one scalar value reflecting the output's overall quality."
|
| 554 |
+
},
|
| 555 |
+
{
|
| 556 |
+
"type": "text",
|
| 557 |
+
"bbox": [
|
| 558 |
+
0.508,
|
| 559 |
+
0.547,
|
| 560 |
+
0.885,
|
| 561 |
+
0.884
|
| 562 |
+
],
|
| 563 |
+
"angle": 0,
|
| 564 |
+
"content": "However, as shown in the left panel of Figure 1, annotating a pair of model outputs that are substantially different can be a difficult task for humans, especially when each response exhibits a mix of desirable and undesirable behaviors. To address this issue, we introduce a novel data collection technique aimed at obtaining fine-grained supervision, which offers richer, comparative information beyond simple binary choices. Instead of annotating entire responses, our method involves targeted editing by humans or language models, as depicted in the right panel of Figure 1. The goal is to retain the majority of the original response while making improvements to specific areas in need of enhancement. Specifically, we introduce a response editing process in which we ask humans or prompt LLMs to perform targeted modifications. For fine-grained data collection, our method works for both human annotators and language models, following (Ding et al., 2022; Gilardi et al., 2023; Wang et al., 2022; Chiang and Lee, 2023)."
|
| 565 |
+
},
|
| 566 |
+
{
|
| 567 |
+
"type": "text",
|
| 568 |
+
"bbox": [
|
| 569 |
+
0.509,
|
| 570 |
+
0.888,
|
| 571 |
+
0.884,
|
| 572 |
+
0.92
|
| 573 |
+
],
|
| 574 |
+
"angle": 0,
|
| 575 |
+
"content": "In practice, we prompt a proprietary LLM, such as Claude-2 (Bai et al., 2022b), to apply edits to"
|
| 576 |
+
},
|
| 577 |
+
{
|
| 578 |
+
"type": "page_number",
|
| 579 |
+
"bbox": [
|
| 580 |
+
0.486,
|
| 581 |
+
0.929,
|
| 582 |
+
0.516,
|
| 583 |
+
0.941
|
| 584 |
+
],
|
| 585 |
+
"angle": 0,
|
| 586 |
+
"content": "674"
|
| 587 |
+
}
|
| 588 |
+
],
|
| 589 |
+
[
|
| 590 |
+
{
|
| 591 |
+
"type": "text",
|
| 592 |
+
"bbox": [
|
| 593 |
+
0.113,
|
| 594 |
+
0.085,
|
| 595 |
+
0.49,
|
| 596 |
+
0.326
|
| 597 |
+
],
|
| 598 |
+
"angle": 0,
|
| 599 |
+
"content": "the original output. In the experiment, the original preference pairs \\((y_w, y_l)\\) were not included and we only utilized \\(y_l\\) from the original dataset for minimal editing. This approach maintains the same amount of data as the baseline methods, ensuring a fair comparison. Details of the prompt used for editing can be found in Appendix A.1, and the examples of fine-grained annotation with minimal editing are shown in Appendix A.2. Our method is based on the assumption that the edits inherently improve a response, making changes only when they enhance alignment with human values. The approach enables the refinement of responses by providing clear insights into the specific areas that require improvement."
|
| 600 |
+
},
|
| 601 |
+
{
|
| 602 |
+
"type": "title",
|
| 603 |
+
"bbox": [
|
| 604 |
+
0.114,
|
| 605 |
+
0.337,
|
| 606 |
+
0.391,
|
| 607 |
+
0.353
|
| 608 |
+
],
|
| 609 |
+
"angle": 0,
|
| 610 |
+
"content": "2.2 Token-level reward modeling"
|
| 611 |
+
},
|
| 612 |
+
{
|
| 613 |
+
"type": "text",
|
| 614 |
+
"bbox": [
|
| 615 |
+
0.113,
|
| 616 |
+
0.358,
|
| 617 |
+
0.49,
|
| 618 |
+
0.405
|
| 619 |
+
],
|
| 620 |
+
"angle": 0,
|
| 621 |
+
"content": "In this section, we will first introduce the RL environment and then define our token-level reward modeling scheme."
|
| 622 |
+
},
|
| 623 |
+
{
|
| 624 |
+
"type": "text",
|
| 625 |
+
"bbox": [
|
| 626 |
+
0.113,
|
| 627 |
+
0.407,
|
| 628 |
+
0.49,
|
| 629 |
+
0.614
|
| 630 |
+
],
|
| 631 |
+
"angle": 0,
|
| 632 |
+
"content": "Language generation can be defined as a Markov Decision Process (MDP) \\(\\langle S, \\mathcal{A}, \\mathcal{R}, \\mathcal{P}, \\gamma \\rangle\\). \\(S\\) refers to the state space and we define the start state \\(s_1\\) as the input prompts \\(\\{x\\}\\). An action at t-step \\(a_t\\) is a generated token. The transition function of the environment is denoted as \\(\\mathcal{P}: S \\times \\mathcal{A} \\to S\\), \\(s_t = \\{x, a_1, \\dots, a_{t-1}\\}\\). A response \\(y\\) of length \\(T\\) is then \\(y = \\{a_1, \\dots, a_T\\}\\). In our token-level reward scheme, a reward is assigned to each generated token \\(a_t\\) by \\(\\mathcal{R}: S \\times \\mathcal{A} \\to \\mathbb{R}\\), where at each time step \\(t\\) there is a learned reward function \\(r_t = r_\\phi(s_t, a_t)\\). Therefore, for each response, we have a trajectory \\(\\tau = \\{s_1, a_1, r_1, \\dots, s_t, a_t, r_t, \\dots, s_T, a_T, r_T\\}\\)."
|
| 633 |
+
},
|
| 634 |
+
{
|
| 635 |
+
"type": "text",
|
| 636 |
+
"bbox": [
|
| 637 |
+
0.114,
|
| 638 |
+
0.616,
|
| 639 |
+
0.488,
|
| 640 |
+
0.647
|
| 641 |
+
],
|
| 642 |
+
"angle": 0,
|
| 643 |
+
"content": "We define the reward of the whole trajectory as the average of rewards assigned to each token:"
|
| 644 |
+
},
|
| 645 |
+
{
|
| 646 |
+
"type": "equation",
|
| 647 |
+
"bbox": [
|
| 648 |
+
0.232,
|
| 649 |
+
0.659,
|
| 650 |
+
0.489,
|
| 651 |
+
0.702
|
| 652 |
+
],
|
| 653 |
+
"angle": 0,
|
| 654 |
+
"content": "\\[\nR (\\tau) = \\frac {1}{T} \\sum_ {t = 1} ^ {T} r _ {t}. \\tag {1}\n\\]"
|
| 655 |
+
},
|
| 656 |
+
{
|
| 657 |
+
"type": "text",
|
| 658 |
+
"bbox": [
|
| 659 |
+
0.114,
|
| 660 |
+
0.712,
|
| 661 |
+
0.49,
|
| 662 |
+
0.776
|
| 663 |
+
],
|
| 664 |
+
"angle": 0,
|
| 665 |
+
"content": "Following the Bradley-Terry (BT) model (Bradley and Terry, 1952) for preference modeling, we formulate the distribution of human preference for responses as below:"
|
| 666 |
+
},
|
| 667 |
+
{
|
| 668 |
+
"type": "equation",
|
| 669 |
+
"bbox": [
|
| 670 |
+
0.134,
|
| 671 |
+
0.787,
|
| 672 |
+
0.488,
|
| 673 |
+
0.842
|
| 674 |
+
],
|
| 675 |
+
"angle": 0,
|
| 676 |
+
"content": "\\[\n\\begin{array}{l} p \\left(\\tau^ {i} \\succ \\tau^ {j}\\right) = \\frac {\\exp \\left(R \\left(\\tau^ {i}\\right)\\right)}{\\exp \\left(R \\left(\\tau^ {i}\\right)\\right) + \\exp \\left(R \\left(\\tau^ {j}\\right)\\right)} \\tag {2} \\\\ = \\sigma (R (\\tau^ {i}) - R (\\tau^ {j})), \\\\ \\end{array}\n\\]"
|
| 677 |
+
},
|
| 678 |
+
{
|
| 679 |
+
"type": "text",
|
| 680 |
+
"bbox": [
|
| 681 |
+
0.113,
|
| 682 |
+
0.855,
|
| 683 |
+
0.489,
|
| 684 |
+
0.919
|
| 685 |
+
],
|
| 686 |
+
"angle": 0,
|
| 687 |
+
"content": "where \\(\\tau^i\\) and \\(\\tau^j\\) represent two different responses generated from the same prompt. Under the setting of our fine-grained supervision dataset, we assume \\(\\tau^i\\) only makes edits on \\(\\tau^j\\) while maintaining most"
|
| 688 |
+
},
|
| 689 |
+
{
|
| 690 |
+
"type": "text",
|
| 691 |
+
"bbox": [
|
| 692 |
+
0.509,
|
| 693 |
+
0.084,
|
| 694 |
+
0.882,
|
| 695 |
+
0.133
|
| 696 |
+
],
|
| 697 |
+
"angle": 0,
|
| 698 |
+
"content": "parts unchanged. We define \\( U_0 = \\{t | a_t^i = a_t^j\\} \\) and \\( U_1 = \\{t | a_t^i \\neq a_t^j\\} \\) to represent the unchanged and changed parts."
|
| 699 |
+
},
|
| 700 |
+
{
|
| 701 |
+
"type": "text",
|
| 702 |
+
"bbox": [
|
| 703 |
+
0.508,
|
| 704 |
+
0.134,
|
| 705 |
+
0.884,
|
| 706 |
+
0.182
|
| 707 |
+
],
|
| 708 |
+
"angle": 0,
|
| 709 |
+
"content": "Regarding the reward model as a binary classifier, we use negative log-likelihood as the loss function. By plugging in Equation 1, we have:"
|
| 710 |
+
},
|
| 711 |
+
{
|
| 712 |
+
"type": "equation",
|
| 713 |
+
"bbox": [
|
| 714 |
+
0.527,
|
| 715 |
+
0.19,
|
| 716 |
+
0.883,
|
| 717 |
+
0.29
|
| 718 |
+
],
|
| 719 |
+
"angle": 0,
|
| 720 |
+
"content": "\\[\n\\begin{array}{l} \\mathcal {L} = - \\mathbb {E} _ {(\\tau^ {i}, \\tau^ {j}) \\sim \\mathcal {D}} \\left[ \\log \\sigma (R (\\tau^ {i}) - R (\\tau^ {j})) \\right] \\\\ = - \\mathbb {E} _ {(\\tau^ {i}, \\tau^ {j}) \\sim \\mathcal {D}} [ \\log \\sigma \\left(\\left(\\frac {1}{T ^ {i}} - \\frac {1}{T ^ {j}}\\right) \\sum_ {t \\in U _ {0}} r _ {t} \\right. \\tag {3} \\\\ + \\frac {1}{T ^ {i}} \\sum_ {t \\in U _ {1}} r _ {t} ^ {i} - \\frac {1}{T ^ {j}} \\sum_ {t \\in U _ {1}} r _ {t} ^ {j}) ], \\\\ \\end{array}\n\\]"
|
| 721 |
+
},
|
| 722 |
+
{
|
| 723 |
+
"type": "text",
|
| 724 |
+
"bbox": [
|
| 725 |
+
0.508,
|
| 726 |
+
0.299,
|
| 727 |
+
0.884,
|
| 728 |
+
0.362
|
| 729 |
+
],
|
| 730 |
+
"angle": 0,
|
| 731 |
+
"content": "Ideally, we aim for the unchanged part to maintain a consistent reward. Under this assumption, and if the two responses are of equal length, the first term of the loss function can be removed:"
|
| 732 |
+
},
|
| 733 |
+
{
|
| 734 |
+
"type": "equation",
|
| 735 |
+
"bbox": [
|
| 736 |
+
0.509,
|
| 737 |
+
0.385,
|
| 738 |
+
0.883,
|
| 739 |
+
0.438
|
| 740 |
+
],
|
| 741 |
+
"angle": 0,
|
| 742 |
+
"content": "\\[\n\\mathcal {L} \\approx - \\mathbb {E} _ {\\left(\\tau^ {i}, \\tau^ {j}\\right) \\sim \\mathcal {D}} [ \\log \\sigma \\left(\\frac {1}{T ^ {i}} \\sum_ {t \\in U _ {1}} r _ {t} ^ {i} - \\frac {1}{T ^ {j}} \\sum_ {t \\in U _ {1}} r _ {t} ^ {j}\\right) ] \\tag {4}\n\\]"
|
| 743 |
+
},
|
| 744 |
+
{
|
| 745 |
+
"type": "text",
|
| 746 |
+
"bbox": [
|
| 747 |
+
0.508,
|
| 748 |
+
0.449,
|
| 749 |
+
0.884,
|
| 750 |
+
0.496
|
| 751 |
+
],
|
| 752 |
+
"angle": 0,
|
| 753 |
+
"content": "For the edited part, the loss function is thus designed to maximize the reward for the preferred response and minimize it for the less favored one."
|
| 754 |
+
},
|
| 755 |
+
{
|
| 756 |
+
"type": "text",
|
| 757 |
+
"bbox": [
|
| 758 |
+
0.508,
|
| 759 |
+
0.497,
|
| 760 |
+
0.885,
|
| 761 |
+
0.706
|
| 762 |
+
],
|
| 763 |
+
"angle": 0,
|
| 764 |
+
"content": "With a trained token-level reward model, we can integrate it into the Proximal Policy Optimization (PPO) (Schulman et al., 2017) algorithm. In the traditional PPO-RLHF method, each token in the sequence is assigned a reward of the form \\([-KL_1, -KL_2, \\dots, R - KL_n]\\), where \\(KL_i\\) denotes the Kullback-Leibler divergence (Kullback and Leibler, 1951) for the generated token sequence up to that point, and \\(R\\) represents the sequence-level reward from the reward model. Generalized Advantage Estimation (GAE) (Schulman et al., 2015) is then employed to calculate the advantage at the token level."
|
| 765 |
+
},
|
| 766 |
+
{
|
| 767 |
+
"type": "text",
|
| 768 |
+
"bbox": [
|
| 769 |
+
0.508,
|
| 770 |
+
0.707,
|
| 771 |
+
0.884,
|
| 772 |
+
0.851
|
| 773 |
+
],
|
| 774 |
+
"angle": 0,
|
| 775 |
+
"content": "In contrast, our approach assigns a reward \\( R_{i} \\) directly from the token-level reward model to each token in the sequence, resulting in a reward vector of \\( [R_1,R_2,\\dots,R_n] \\). This approach enhances the granularity of feedback at each step of the sequence generation process, without changing the underlying GAE and policy update procedure. Consequently, the computational cost remains comparable to the standard RLHF approach."
|
| 776 |
+
},
|
| 777 |
+
{
|
| 778 |
+
"type": "title",
|
| 779 |
+
"bbox": [
|
| 780 |
+
0.509,
|
| 781 |
+
0.863,
|
| 782 |
+
0.656,
|
| 783 |
+
0.879
|
| 784 |
+
],
|
| 785 |
+
"angle": 0,
|
| 786 |
+
"content": "3 Experiments"
|
| 787 |
+
},
|
| 788 |
+
{
|
| 789 |
+
"type": "text",
|
| 790 |
+
"bbox": [
|
| 791 |
+
0.508,
|
| 792 |
+
0.888,
|
| 793 |
+
0.882,
|
| 794 |
+
0.919
|
| 795 |
+
],
|
| 796 |
+
"angle": 0,
|
| 797 |
+
"content": "In this section, we demonstrate our experimental setup and empirical results in detail."
|
| 798 |
+
},
|
| 799 |
+
{
|
| 800 |
+
"type": "page_number",
|
| 801 |
+
"bbox": [
|
| 802 |
+
0.486,
|
| 803 |
+
0.928,
|
| 804 |
+
0.516,
|
| 805 |
+
0.941
|
| 806 |
+
],
|
| 807 |
+
"angle": 0,
|
| 808 |
+
"content": "675"
|
| 809 |
+
}
|
| 810 |
+
],
|
| 811 |
+
[
|
| 812 |
+
{
|
| 813 |
+
"type": "table",
|
| 814 |
+
"bbox": [
|
| 815 |
+
0.116,
|
| 816 |
+
0.082,
|
| 817 |
+
0.501,
|
| 818 |
+
0.167
|
| 819 |
+
],
|
| 820 |
+
"angle": 0,
|
| 821 |
+
"content": "<table><tr><td>Model</td><td>Win rate (%)</td></tr><tr><td>Fine-grained Token-level PPO</td><td>51.6 ± 1.8</td></tr><tr><td>Fine-grained PPO</td><td>51.2 ± 1.8</td></tr><tr><td>Davinci003 (Brown et al., 2020)</td><td>50.0</td></tr><tr><td>PPO-RLHF (Ouyang et al., 2022)</td><td>46.5 ± 1.8</td></tr></table>"
|
| 822 |
+
},
|
| 823 |
+
{
|
| 824 |
+
"type": "table_caption",
|
| 825 |
+
"bbox": [
|
| 826 |
+
0.114,
|
| 827 |
+
0.176,
|
| 828 |
+
0.489,
|
| 829 |
+
0.22
|
| 830 |
+
],
|
| 831 |
+
"angle": 0,
|
| 832 |
+
"content": "Table 1: Evaluation results by Claude. Davinci003 is the reference model. All results of other models are from (Dubois et al., 2023)."
|
| 833 |
+
},
|
| 834 |
+
{
|
| 835 |
+
"type": "title",
|
| 836 |
+
"bbox": [
|
| 837 |
+
0.114,
|
| 838 |
+
0.247,
|
| 839 |
+
0.315,
|
| 840 |
+
0.263
|
| 841 |
+
],
|
| 842 |
+
"angle": 0,
|
| 843 |
+
"content": "3.1 Experimental setup"
|
| 844 |
+
},
|
| 845 |
+
{
|
| 846 |
+
"type": "text",
|
| 847 |
+
"bbox": [
|
| 848 |
+
0.113,
|
| 849 |
+
0.269,
|
| 850 |
+
0.49,
|
| 851 |
+
0.445
|
| 852 |
+
],
|
| 853 |
+
"angle": 0,
|
| 854 |
+
"content": "In constructing our dataset, we follow the framework established by AlpacaFarm (Dubois et al., 2023), which offers a simulation environment that includes data splits for SFT, RM, PPO, and evaluation processes. Building on this, we develop our refined RM dataset using the fine-grained approach, where we employ Claude-2 (Bai et al., 2022b) to perform targeted editing. Edits are generated on the less preferred responses from the original pairwise data, ensuring lightweight yet effective modifications."
|
| 855 |
+
},
|
| 856 |
+
{
|
| 857 |
+
"type": "text",
|
| 858 |
+
"bbox": [
|
| 859 |
+
0.113,
|
| 860 |
+
0.448,
|
| 861 |
+
0.49,
|
| 862 |
+
0.672
|
| 863 |
+
],
|
| 864 |
+
"angle": 0,
|
| 865 |
+
"content": "We evaluate our method by finetuning the pretrained \\(LLaMA-7B\\) (Touvron et al., 2023) model. To assess the quality of our model's generation compared to baseline models, we employ a win-rate measurement, where the model \\(p_{\\theta}\\) is evaluated against a reference model \\(p_{\\mathrm{ref}}\\). This method involves pairwise comparisons to estimate how often \\(p_{\\theta}\\)'s outputs are preferred over \\(p_{\\mathrm{ref}}\\)'s for given instructions. Both our model and the baselines are evaluated against the same reference model, Davinci003, aligning with AlpacaFarm (Dubois et al., 2023). To assess the win rate, we employ Claude as the judge, following the simulated approach in (Zheng et al., 2023)."
|
| 866 |
+
},
|
| 867 |
+
{
|
| 868 |
+
"type": "text",
|
| 869 |
+
"bbox": [
|
| 870 |
+
0.113,
|
| 871 |
+
0.674,
|
| 872 |
+
0.49,
|
| 873 |
+
0.817
|
| 874 |
+
],
|
| 875 |
+
"angle": 0,
|
| 876 |
+
"content": "To evaluate the effectiveness of our data annotation approach and token-level reward model, we train two models: (i) Fine-grained PPO that only uses our fine-grained RM dataset with editing while still trained with a sequence-level reward, and (ii) Fine-grained Token-level PPO that incorporates both the fine-grained RM dataset and token-level reward modeling, and hence applies token-level reward to PPO."
|
| 877 |
+
},
|
| 878 |
+
{
|
| 879 |
+
"type": "title",
|
| 880 |
+
"bbox": [
|
| 881 |
+
0.114,
|
| 882 |
+
0.833,
|
| 883 |
+
0.312,
|
| 884 |
+
0.849
|
| 885 |
+
],
|
| 886 |
+
"angle": 0,
|
| 887 |
+
"content": "3.2 Experiment results"
|
| 888 |
+
},
|
| 889 |
+
{
|
| 890 |
+
"type": "text",
|
| 891 |
+
"bbox": [
|
| 892 |
+
0.113,
|
| 893 |
+
0.855,
|
| 894 |
+
0.489,
|
| 895 |
+
0.919
|
| 896 |
+
],
|
| 897 |
+
"angle": 0,
|
| 898 |
+
"content": "Results in human value alignment Table 1 showcases our methods (highlighted) alongside the baseline PPO-RLHF model, both trained on \\(LLaMA - 7B\\) (Touvron et al., 2023). Results indicate"
|
| 899 |
+
},
|
| 900 |
+
{
|
| 901 |
+
"type": "table",
|
| 902 |
+
"bbox": [
|
| 903 |
+
0.515,
|
| 904 |
+
0.082,
|
| 905 |
+
0.88,
|
| 906 |
+
0.135
|
| 907 |
+
],
|
| 908 |
+
"angle": 0,
|
| 909 |
+
"content": "<table><tr><td>Model</td><td>Accuracy (%)</td></tr><tr><td>RM w/ Fine-grained dataset</td><td>85.2 ± 1.8</td></tr><tr><td>RM w/o Fine-grained dataset</td><td>58.2 ± 1.8</td></tr></table>"
|
| 910 |
+
},
|
| 911 |
+
{
|
| 912 |
+
"type": "table_caption",
|
| 913 |
+
"bbox": [
|
| 914 |
+
0.509,
|
| 915 |
+
0.144,
|
| 916 |
+
0.884,
|
| 917 |
+
0.188
|
| 918 |
+
],
|
| 919 |
+
"angle": 0,
|
| 920 |
+
"content": "Table 2: Reward model accuracy. Leveraging the fine-grained dataset enhances the reward model's ability to assign correct rewards to responses."
|
| 921 |
+
},
|
| 922 |
+
{
|
| 923 |
+
"type": "table",
|
| 924 |
+
"bbox": [
|
| 925 |
+
0.511,
|
| 926 |
+
0.202,
|
| 927 |
+
0.885,
|
| 928 |
+
0.288
|
| 929 |
+
],
|
| 930 |
+
"angle": 0,
|
| 931 |
+
"content": "<table><tr><td>Model</td><td>Step</td><td>Tr. hours</td></tr><tr><td>RLHF (Ouyang et al., 2022)</td><td>RM</td><td>0.2</td></tr><tr><td>Fine-grained RLHF</td><td>RM</td><td>0.3</td></tr><tr><td>RLHF (Ouyang et al., 2022)</td><td>PPO</td><td>4</td></tr><tr><td>Fine-grained RLHF</td><td>PPO</td><td>2</td></tr></table>"
|
| 932 |
+
},
|
| 933 |
+
{
|
| 934 |
+
"type": "table_caption",
|
| 935 |
+
"bbox": [
|
| 936 |
+
0.508,
|
| 937 |
+
0.296,
|
| 938 |
+
0.884,
|
| 939 |
+
0.34
|
| 940 |
+
],
|
| 941 |
+
"angle": 0,
|
| 942 |
+
"content": "Table 3: Training efficiency. Highlighted numbers represent the training hours (Tr. hours) of the fine-grained PPO model trained with token-level rewards."
|
| 943 |
+
},
|
| 944 |
+
{
|
| 945 |
+
"type": "text",
|
| 946 |
+
"bbox": [
|
| 947 |
+
0.508,
|
| 948 |
+
0.367,
|
| 949 |
+
0.884,
|
| 950 |
+
0.574
|
| 951 |
+
],
|
| 952 |
+
"angle": 0,
|
| 953 |
+
"content": "that our novel data collection technique, when integrated with standard PPO training, leads to an absolute performance increase of \\(4.7\\%\\) compared to traditional methods (refer to lines 2 vs. 4). This highlights the effectiveness of our fine-grained data collection strategy. Moreover, when trained with the same fine-grained dataset, the token-level reward model (line 1) demonstrates further alignment improvements compared to the PPO alone (line 2), indicating the importance of token-level rewards. Together, these findings affirm that our approach significantly outperforms the traditional PPO-RLHF model."
|
| 954 |
+
},
|
| 955 |
+
{
|
| 956 |
+
"type": "text",
|
| 957 |
+
"bbox": [
|
| 958 |
+
0.508,
|
| 959 |
+
0.587,
|
| 960 |
+
0.884,
|
| 961 |
+
0.78
|
| 962 |
+
],
|
| 963 |
+
"angle": 0,
|
| 964 |
+
"content": "Reward model analysis To explain the observed performance increase, we further investigate the effectiveness of the reward model. We test its accuracy in assigning higher rewards to superior responses within the evaluation set. As shown in Table 2, our fine-grained dataset enables the learned reward model to reach an accuracy of approximately \\(85.2\\%\\), outperforming the model trained with the original dataset. This result demonstrates that our data collection method enhances the capability of our reward model to identify and appropriately reward better responses."
|
| 965 |
+
},
|
| 966 |
+
{
|
| 967 |
+
"type": "text",
|
| 968 |
+
"bbox": [
|
| 969 |
+
0.508,
|
| 970 |
+
0.791,
|
| 971 |
+
0.885,
|
| 972 |
+
0.919
|
| 973 |
+
],
|
| 974 |
+
"angle": 0,
|
| 975 |
+
"content": "Training efficiency Table 3 illustrates the training costs for different models. Note that all the models are trained on 8 NVIDIA A100 GPUs (80G) with the same batch size for both phases. While the training time for the reward modeling phase is comparable between our method and the baseline, our fine-grained reward model significantly boosts the efficiency of RL optimization."
|
| 976 |
+
},
|
| 977 |
+
{
|
| 978 |
+
"type": "page_number",
|
| 979 |
+
"bbox": [
|
| 980 |
+
0.486,
|
| 981 |
+
0.929,
|
| 982 |
+
0.517,
|
| 983 |
+
0.941
|
| 984 |
+
],
|
| 985 |
+
"angle": 0,
|
| 986 |
+
"content": "676"
|
| 987 |
+
}
|
| 988 |
+
],
|
| 989 |
+
[
|
| 990 |
+
{
|
| 991 |
+
"type": "text",
|
| 992 |
+
"bbox": [
|
| 993 |
+
0.113,
|
| 994 |
+
0.085,
|
| 995 |
+
0.49,
|
| 996 |
+
0.231
|
| 997 |
+
],
|
| 998 |
+
"angle": 0,
|
| 999 |
+
"content": "It reduces the time required for PPO to converge to its optimal performance by half, due to our more precise and fine-grained reward function. Based on the experiment results, our reward function can provide more accurate and denser training signals, which can help RL algorithms converge faster. This improvement in training efficiency could be important for LLM alignment, especially when the size of the LLM becomes increasingly large."
|
| 1000 |
+
},
|
| 1001 |
+
{
|
| 1002 |
+
"type": "title",
|
| 1003 |
+
"bbox": [
|
| 1004 |
+
0.114,
|
| 1005 |
+
0.241,
|
| 1006 |
+
0.251,
|
| 1007 |
+
0.256
|
| 1008 |
+
],
|
| 1009 |
+
"angle": 0,
|
| 1010 |
+
"content": "4 Limitations"
|
| 1011 |
+
},
|
| 1012 |
+
{
|
| 1013 |
+
"type": "text",
|
| 1014 |
+
"bbox": [
|
| 1015 |
+
0.113,
|
| 1016 |
+
0.267,
|
| 1017 |
+
0.491,
|
| 1018 |
+
0.364
|
| 1019 |
+
],
|
| 1020 |
+
"angle": 0,
|
| 1021 |
+
"content": "Although the empirical results show that our approach achieves better performance in model alignment, we struggle to provide rigorous mathematical proof to conclusively demonstrate the effectiveness of this reward allocation strategy, specifically in Equation 4."
|
| 1022 |
+
},
|
| 1023 |
+
{
|
| 1024 |
+
"type": "title",
|
| 1025 |
+
"bbox": [
|
| 1026 |
+
0.114,
|
| 1027 |
+
0.375,
|
| 1028 |
+
0.248,
|
| 1029 |
+
0.39
|
| 1030 |
+
],
|
| 1031 |
+
"angle": 0,
|
| 1032 |
+
"content": "5 Conclusion"
|
| 1033 |
+
},
|
| 1034 |
+
{
|
| 1035 |
+
"type": "text",
|
| 1036 |
+
"bbox": [
|
| 1037 |
+
0.113,
|
| 1038 |
+
0.4,
|
| 1039 |
+
0.49,
|
| 1040 |
+
0.528
|
| 1041 |
+
],
|
| 1042 |
+
"angle": 0,
|
| 1043 |
+
"content": "In this paper, we introduce a fine-grained RLHF framework that includes a data collection technique alongside a token-level reward model. This approach enables better value alignment by learning a more accurate reward model, facilitating faster convergence for PPO. Our experimental results show performance improvement based on automatic evaluations compared to the baseline method."
|
| 1044 |
+
},
|
| 1045 |
+
{
|
| 1046 |
+
"type": "title",
|
| 1047 |
+
"bbox": [
|
| 1048 |
+
0.115,
|
| 1049 |
+
0.541,
|
| 1050 |
+
0.279,
|
| 1051 |
+
0.557
|
| 1052 |
+
],
|
| 1053 |
+
"angle": 0,
|
| 1054 |
+
"content": "Acknowledgments"
|
| 1055 |
+
},
|
| 1056 |
+
{
|
| 1057 |
+
"type": "text",
|
| 1058 |
+
"bbox": [
|
| 1059 |
+
0.113,
|
| 1060 |
+
0.566,
|
| 1061 |
+
0.49,
|
| 1062 |
+
0.679
|
| 1063 |
+
],
|
| 1064 |
+
"angle": 0,
|
| 1065 |
+
"content": "We would like to thank Yi Xu, Puyang Xu and other members of Amazon, as well as Ying Nian Wu and Minglu Zhao and from University of California, Los Angeles for their valuable discussions and constructive feedback. Dehong Xu's research for this work was financially supported by Amazon during his internship at Amazon."
|
| 1066 |
+
},
|
| 1067 |
+
{
|
| 1068 |
+
"type": "title",
|
| 1069 |
+
"bbox": [
|
| 1070 |
+
0.511,
|
| 1071 |
+
0.084,
|
| 1072 |
+
0.61,
|
| 1073 |
+
0.099
|
| 1074 |
+
],
|
| 1075 |
+
"angle": 0,
|
| 1076 |
+
"content": "References"
|
| 1077 |
+
},
|
| 1078 |
+
{
|
| 1079 |
+
"type": "ref_text",
|
| 1080 |
+
"bbox": [
|
| 1081 |
+
0.511,
|
| 1082 |
+
0.109,
|
| 1083 |
+
0.885,
|
| 1084 |
+
0.189
|
| 1085 |
+
],
|
| 1086 |
+
"angle": 0,
|
| 1087 |
+
"content": "Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022a. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862."
|
| 1088 |
+
},
|
| 1089 |
+
{
|
| 1090 |
+
"type": "ref_text",
|
| 1091 |
+
"bbox": [
|
| 1092 |
+
0.511,
|
| 1093 |
+
0.2,
|
| 1094 |
+
0.885,
|
| 1095 |
+
0.279
|
| 1096 |
+
],
|
| 1097 |
+
"angle": 0,
|
| 1098 |
+
"content": "Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022b. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073."
|
| 1099 |
+
},
|
| 1100 |
+
{
|
| 1101 |
+
"type": "ref_text",
|
| 1102 |
+
"bbox": [
|
| 1103 |
+
0.511,
|
| 1104 |
+
0.291,
|
| 1105 |
+
0.885,
|
| 1106 |
+
0.344
|
| 1107 |
+
],
|
| 1108 |
+
"angle": 0,
|
| 1109 |
+
"content": "Ralph Allan Bradley and Milton E Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324-345."
|
| 1110 |
+
},
|
| 1111 |
+
{
|
| 1112 |
+
"type": "ref_text",
|
| 1113 |
+
"bbox": [
|
| 1114 |
+
0.511,
|
| 1115 |
+
0.357,
|
| 1116 |
+
0.885,
|
| 1117 |
+
0.436
|
| 1118 |
+
],
|
| 1119 |
+
"angle": 0,
|
| 1120 |
+
"content": "Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901."
|
| 1121 |
+
},
|
| 1122 |
+
{
|
| 1123 |
+
"type": "ref_text",
|
| 1124 |
+
"bbox": [
|
| 1125 |
+
0.511,
|
| 1126 |
+
0.448,
|
| 1127 |
+
0.885,
|
| 1128 |
+
0.488
|
| 1129 |
+
],
|
| 1130 |
+
"angle": 0,
|
| 1131 |
+
"content": "Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937."
|
| 1132 |
+
},
|
| 1133 |
+
{
|
| 1134 |
+
"type": "ref_text",
|
| 1135 |
+
"bbox": [
|
| 1136 |
+
0.511,
|
| 1137 |
+
0.5,
|
| 1138 |
+
0.885,
|
| 1139 |
+
0.552
|
| 1140 |
+
],
|
| 1141 |
+
"angle": 0,
|
| 1142 |
+
"content": "Bosheng Ding, Chengwei Qin, Linlin Liu, Yew Ken Chia, Shafiq Joty, Boyang Li, and Lidong Bing. 2022. Is gpt-3 a good data annotator? arXiv preprint arXiv:2212.10450."
|
| 1143 |
+
},
|
| 1144 |
+
{
|
| 1145 |
+
"type": "ref_text",
|
| 1146 |
+
"bbox": [
|
| 1147 |
+
0.511,
|
| 1148 |
+
0.565,
|
| 1149 |
+
0.885,
|
| 1150 |
+
0.644
|
| 1151 |
+
],
|
| 1152 |
+
"angle": 0,
|
| 1153 |
+
"content": "Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. Alpacafarm: A simulation framework for methods that learn from human feedback. arXiv preprint arXiv:2305.14387."
|
| 1154 |
+
},
|
| 1155 |
+
{
|
| 1156 |
+
"type": "ref_text",
|
| 1157 |
+
"bbox": [
|
| 1158 |
+
0.511,
|
| 1159 |
+
0.657,
|
| 1160 |
+
0.885,
|
| 1161 |
+
0.737
|
| 1162 |
+
],
|
| 1163 |
+
"angle": 0,
|
| 1164 |
+
"content": "Patrick Fernandes, Aman Madaan, Emmy Liu, Antonio Farinhas, Pedro Henrique Martins, Amanda Bertsch, José GC de Souza, Shuyan Zhou, Tongshuang Wu, Graham Neubig, et al. 2023. Bridging the gap: A survey on integrating (human) feedback for natural language generation. arXiv preprint arXiv:2305.00955."
|
| 1165 |
+
},
|
| 1166 |
+
{
|
| 1167 |
+
"type": "ref_text",
|
| 1168 |
+
"bbox": [
|
| 1169 |
+
0.511,
|
| 1170 |
+
0.748,
|
| 1171 |
+
0.885,
|
| 1172 |
+
0.801
|
| 1173 |
+
],
|
| 1174 |
+
"angle": 0,
|
| 1175 |
+
"content": "Fabrizio Gilardi, Meysam Alizadeh, and Mael Kubli. 2023. Chatgpt outperforms crowd workers for text-annotation tasks. Proceedings of the National Academy of Sciences, 120(30):e2305016120."
|
| 1176 |
+
},
|
| 1177 |
+
{
|
| 1178 |
+
"type": "ref_text",
|
| 1179 |
+
"bbox": [
|
| 1180 |
+
0.511,
|
| 1181 |
+
0.814,
|
| 1182 |
+
0.885,
|
| 1183 |
+
0.867
|
| 1184 |
+
],
|
| 1185 |
+
"angle": 0,
|
| 1186 |
+
"content": "Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, and Ji-Rong Wen. 2023. Beyond imitation: Leveraging fine-grained quality signals for alignment. arXiv preprint arXiv:2311.04072."
|
| 1187 |
+
},
|
| 1188 |
+
{
|
| 1189 |
+
"type": "ref_text",
|
| 1190 |
+
"bbox": [
|
| 1191 |
+
0.511,
|
| 1192 |
+
0.878,
|
| 1193 |
+
0.885,
|
| 1194 |
+
0.919
|
| 1195 |
+
],
|
| 1196 |
+
"angle": 0,
|
| 1197 |
+
"content": "Solomon Kullback and Richard A Leibler. 1951. On information and sufficiency. The annals of mathematical statistics, 22(1):79-86."
|
| 1198 |
+
},
|
| 1199 |
+
{
|
| 1200 |
+
"type": "list",
|
| 1201 |
+
"bbox": [
|
| 1202 |
+
0.511,
|
| 1203 |
+
0.109,
|
| 1204 |
+
0.885,
|
| 1205 |
+
0.919
|
| 1206 |
+
],
|
| 1207 |
+
"angle": 0,
|
| 1208 |
+
"content": null
|
| 1209 |
+
},
|
| 1210 |
+
{
|
| 1211 |
+
"type": "page_number",
|
| 1212 |
+
"bbox": [
|
| 1213 |
+
0.486,
|
| 1214 |
+
0.929,
|
| 1215 |
+
0.516,
|
| 1216 |
+
0.941
|
| 1217 |
+
],
|
| 1218 |
+
"angle": 0,
|
| 1219 |
+
"content": "677"
|
| 1220 |
+
}
|
| 1221 |
+
],
|
| 1222 |
+
[
|
| 1223 |
+
{
|
| 1224 |
+
"type": "ref_text",
|
| 1225 |
+
"bbox": [
|
| 1226 |
+
0.117,
|
| 1227 |
+
0.086,
|
| 1228 |
+
0.49,
|
| 1229 |
+
0.166
|
| 1230 |
+
],
|
| 1231 |
+
"angle": 0,
|
| 1232 |
+
"content": "Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744."
|
| 1233 |
+
},
|
| 1234 |
+
{
|
| 1235 |
+
"type": "ref_text",
|
| 1236 |
+
"bbox": [
|
| 1237 |
+
0.117,
|
| 1238 |
+
0.175,
|
| 1239 |
+
0.488,
|
| 1240 |
+
0.24
|
| 1241 |
+
],
|
| 1242 |
+
"angle": 0,
|
| 1243 |
+
"content": "Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290."
|
| 1244 |
+
},
|
| 1245 |
+
{
|
| 1246 |
+
"type": "ref_text",
|
| 1247 |
+
"bbox": [
|
| 1248 |
+
0.117,
|
| 1249 |
+
0.25,
|
| 1250 |
+
0.488,
|
| 1251 |
+
0.342
|
| 1252 |
+
],
|
| 1253 |
+
"angle": 0,
|
| 1254 |
+
"content": "Rajkumar Ramamurthy, Prithviraj Ammanabrolu, Kianté Brantley, Jack Hessel, Rafet Sifa, Christian Bauckhage, Hannaneh Hajishirzi, and Yejin Choi. 2022. Is reinforcement learning (not) for natural language processing?: Benchmarks, baselines, and building blocks for natural language policy optimization. arXiv preprint arXiv:2210.01241."
|
| 1255 |
+
},
|
| 1256 |
+
{
|
| 1257 |
+
"type": "ref_text",
|
| 1258 |
+
"bbox": [
|
| 1259 |
+
0.117,
|
| 1260 |
+
0.351,
|
| 1261 |
+
0.488,
|
| 1262 |
+
0.404
|
| 1263 |
+
],
|
| 1264 |
+
"angle": 0,
|
| 1265 |
+
"content": "John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. 2015. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438."
|
| 1266 |
+
},
|
| 1267 |
+
{
|
| 1268 |
+
"type": "ref_text",
|
| 1269 |
+
"bbox": [
|
| 1270 |
+
0.117,
|
| 1271 |
+
0.414,
|
| 1272 |
+
0.488,
|
| 1273 |
+
0.466
|
| 1274 |
+
],
|
| 1275 |
+
"angle": 0,
|
| 1276 |
+
"content": "John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347."
|
| 1277 |
+
},
|
| 1278 |
+
{
|
| 1279 |
+
"type": "ref_text",
|
| 1280 |
+
"bbox": [
|
| 1281 |
+
0.117,
|
| 1282 |
+
0.476,
|
| 1283 |
+
0.488,
|
| 1284 |
+
0.554
|
| 1285 |
+
],
|
| 1286 |
+
"angle": 0,
|
| 1287 |
+
"content": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971."
|
| 1288 |
+
},
|
| 1289 |
+
{
|
| 1290 |
+
"type": "ref_text",
|
| 1291 |
+
"bbox": [
|
| 1292 |
+
0.117,
|
| 1293 |
+
0.565,
|
| 1294 |
+
0.488,
|
| 1295 |
+
0.631
|
| 1296 |
+
],
|
| 1297 |
+
"angle": 0,
|
| 1298 |
+
"content": "Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou, Caishuang Huang, Wei Shen, Senjie Jin, Enyu Zhou, Chenyu Shi, et al. 2024. Secrets of rlhf in large language models part ii: Reward modeling. arXiv preprint arXiv:2401.06080."
|
| 1299 |
+
},
|
| 1300 |
+
{
|
| 1301 |
+
"type": "ref_text",
|
| 1302 |
+
"bbox": [
|
| 1303 |
+
0.117,
|
| 1304 |
+
0.64,
|
| 1305 |
+
0.488,
|
| 1306 |
+
0.706
|
| 1307 |
+
],
|
| 1308 |
+
"angle": 0,
|
| 1309 |
+
"content": "Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022. Self-instruct: Aligning language models with self-generated instructions. arXiv preprint arXiv:2212.10560."
|
| 1310 |
+
},
|
| 1311 |
+
{
|
| 1312 |
+
"type": "ref_text",
|
| 1313 |
+
"bbox": [
|
| 1314 |
+
0.117,
|
| 1315 |
+
0.715,
|
| 1316 |
+
0.488,
|
| 1317 |
+
0.793
|
| 1318 |
+
],
|
| 1319 |
+
"angle": 0,
|
| 1320 |
+
"content": "Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A Smith, Mari Ostendorf, and Hannaneh Hajishirzi. 2023. Fine-grained human feedback gives better rewards for language model training. arXiv preprint arXiv:2306.01693."
|
| 1321 |
+
},
|
| 1322 |
+
{
|
| 1323 |
+
"type": "ref_text",
|
| 1324 |
+
"bbox": [
|
| 1325 |
+
0.117,
|
| 1326 |
+
0.804,
|
| 1327 |
+
0.488,
|
| 1328 |
+
0.869
|
| 1329 |
+
],
|
| 1330 |
+
"angle": 0,
|
| 1331 |
+
"content": "Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. 2023. Rrhf: Rank responses to align language models with human feedback without tears. arXiv preprint arXiv:2304.05302."
|
| 1332 |
+
},
|
| 1333 |
+
{
|
| 1334 |
+
"type": "ref_text",
|
| 1335 |
+
"bbox": [
|
| 1336 |
+
0.117,
|
| 1337 |
+
0.879,
|
| 1338 |
+
0.488,
|
| 1339 |
+
0.919
|
| 1340 |
+
],
|
| 1341 |
+
"angle": 0,
|
| 1342 |
+
"content": "Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023."
|
| 1343 |
+
},
|
| 1344 |
+
{
|
| 1345 |
+
"type": "list",
|
| 1346 |
+
"bbox": [
|
| 1347 |
+
0.117,
|
| 1348 |
+
0.086,
|
| 1349 |
+
0.49,
|
| 1350 |
+
0.919
|
| 1351 |
+
],
|
| 1352 |
+
"angle": 0,
|
| 1353 |
+
"content": null
|
| 1354 |
+
},
|
| 1355 |
+
{
|
| 1356 |
+
"type": "text",
|
| 1357 |
+
"bbox": [
|
| 1358 |
+
0.529,
|
| 1359 |
+
0.086,
|
| 1360 |
+
0.882,
|
| 1361 |
+
0.113
|
| 1362 |
+
],
|
| 1363 |
+
"angle": 0,
|
| 1364 |
+
"content": "Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685."
|
| 1365 |
+
},
|
| 1366 |
+
{
|
| 1367 |
+
"type": "page_number",
|
| 1368 |
+
"bbox": [
|
| 1369 |
+
0.486,
|
| 1370 |
+
0.929,
|
| 1371 |
+
0.516,
|
| 1372 |
+
0.941
|
| 1373 |
+
],
|
| 1374 |
+
"angle": 0,
|
| 1375 |
+
"content": "678"
|
| 1376 |
+
}
|
| 1377 |
+
],
|
| 1378 |
+
[
|
| 1379 |
+
{
|
| 1380 |
+
"type": "title",
|
| 1381 |
+
"bbox": [
|
| 1382 |
+
0.115,
|
| 1383 |
+
0.085,
|
| 1384 |
+
0.239,
|
| 1385 |
+
0.101
|
| 1386 |
+
],
|
| 1387 |
+
"angle": 0,
|
| 1388 |
+
"content": "A Appendix"
|
| 1389 |
+
},
|
| 1390 |
+
{
|
| 1391 |
+
"type": "title",
|
| 1392 |
+
"bbox": [
|
| 1393 |
+
0.115,
|
| 1394 |
+
0.11,
|
| 1395 |
+
0.388,
|
| 1396 |
+
0.126
|
| 1397 |
+
],
|
| 1398 |
+
"angle": 0,
|
| 1399 |
+
"content": "A.1 Prompt for Minimal Editing"
|
| 1400 |
+
},
|
| 1401 |
+
{
|
| 1402 |
+
"type": "text",
|
| 1403 |
+
"bbox": [
|
| 1404 |
+
0.114,
|
| 1405 |
+
0.131,
|
| 1406 |
+
0.825,
|
| 1407 |
+
0.147
|
| 1408 |
+
],
|
| 1409 |
+
"angle": 0,
|
| 1410 |
+
"content": "Figure 2 shows the prompt for Claude-2 to generate the fine-grained dataset by minimal editing."
|
| 1411 |
+
},
|
| 1412 |
+
{
|
| 1413 |
+
"type": "title",
|
| 1414 |
+
"bbox": [
|
| 1415 |
+
0.12,
|
| 1416 |
+
0.158,
|
| 1417 |
+
0.239,
|
| 1418 |
+
0.17
|
| 1419 |
+
],
|
| 1420 |
+
"angle": 0,
|
| 1421 |
+
"content": "Claude-2 Prompt:"
|
| 1422 |
+
},
|
| 1423 |
+
{
|
| 1424 |
+
"type": "text",
|
| 1425 |
+
"bbox": [
|
| 1426 |
+
0.137,
|
| 1427 |
+
0.174,
|
| 1428 |
+
0.324,
|
| 1429 |
+
0.188
|
| 1430 |
+
],
|
| 1431 |
+
"angle": 0,
|
| 1432 |
+
"content": "Human: Question: <question>"
|
| 1433 |
+
},
|
| 1434 |
+
{
|
| 1435 |
+
"type": "text",
|
| 1436 |
+
"bbox": [
|
| 1437 |
+
0.138,
|
| 1438 |
+
0.19,
|
| 1439 |
+
0.226,
|
| 1440 |
+
0.201
|
| 1441 |
+
],
|
| 1442 |
+
"angle": 0,
|
| 1443 |
+
"content": "Input: <input>"
|
| 1444 |
+
},
|
| 1445 |
+
{
|
| 1446 |
+
"type": "text",
|
| 1447 |
+
"bbox": [
|
| 1448 |
+
0.138,
|
| 1449 |
+
0.217,
|
| 1450 |
+
0.331,
|
| 1451 |
+
0.23
|
| 1452 |
+
],
|
| 1453 |
+
"angle": 0,
|
| 1454 |
+
"content": "Response: <original response>"
|
| 1455 |
+
},
|
| 1456 |
+
{
|
| 1457 |
+
"type": "text",
|
| 1458 |
+
"bbox": [
|
| 1459 |
+
0.136,
|
| 1460 |
+
0.245,
|
| 1461 |
+
0.879,
|
| 1462 |
+
0.273
|
| 1463 |
+
],
|
| 1464 |
+
"angle": 0,
|
| 1465 |
+
"content": "Please make minimal changes to the above response in order to make it a more helpful response to the question. NOTE It is important to only make minimal changes to the response - try to keep most of the response unchanged."
|
| 1466 |
+
},
|
| 1467 |
+
{
|
| 1468 |
+
"type": "text",
|
| 1469 |
+
"bbox": [
|
| 1470 |
+
0.137,
|
| 1471 |
+
0.288,
|
| 1472 |
+
0.639,
|
| 1473 |
+
0.301
|
| 1474 |
+
],
|
| 1475 |
+
"angle": 0,
|
| 1476 |
+
"content": "Assistant: Here is my attempt to make minimal changes to improve the response:"
|
| 1477 |
+
},
|
| 1478 |
+
{
|
| 1479 |
+
"type": "image_caption",
|
| 1480 |
+
"bbox": [
|
| 1481 |
+
0.406,
|
| 1482 |
+
0.317,
|
| 1483 |
+
0.591,
|
| 1484 |
+
0.331
|
| 1485 |
+
],
|
| 1486 |
+
"angle": 0,
|
| 1487 |
+
"content": "Figure 2: Prompt for Claude"
|
| 1488 |
+
},
|
| 1489 |
+
{
|
| 1490 |
+
"type": "title",
|
| 1491 |
+
"bbox": [
|
| 1492 |
+
0.115,
|
| 1493 |
+
0.357,
|
| 1494 |
+
0.433,
|
| 1495 |
+
0.373
|
| 1496 |
+
],
|
| 1497 |
+
"angle": 0,
|
| 1498 |
+
"content": "A.2 More examples of minimal editing"
|
| 1499 |
+
},
|
| 1500 |
+
{
|
| 1501 |
+
"type": "text",
|
| 1502 |
+
"bbox": [
|
| 1503 |
+
0.113,
|
| 1504 |
+
0.378,
|
| 1505 |
+
0.882,
|
| 1506 |
+
0.426
|
| 1507 |
+
],
|
| 1508 |
+
"angle": 0,
|
| 1509 |
+
"content": "In this section, we provide more examples of fine-grained annotation via minimal editing. As shown in Figure 3, 4 and 5, the annotators kept most of the response unchanged and only revised the red parts to the yellow parts."
|
| 1510 |
+
},
|
| 1511 |
+
{
|
| 1512 |
+
"type": "text",
|
| 1513 |
+
"bbox": [
|
| 1514 |
+
0.165,
|
| 1515 |
+
0.446,
|
| 1516 |
+
0.527,
|
| 1517 |
+
0.458
|
| 1518 |
+
],
|
| 1519 |
+
"angle": 0,
|
| 1520 |
+
"content": "Question: Explain what a generative adversarial network (GAN) is."
|
| 1521 |
+
},
|
| 1522 |
+
{
|
| 1523 |
+
"type": "title",
|
| 1524 |
+
"bbox": [
|
| 1525 |
+
0.165,
|
| 1526 |
+
0.471,
|
| 1527 |
+
0.23,
|
| 1528 |
+
0.482
|
| 1529 |
+
],
|
| 1530 |
+
"angle": 0,
|
| 1531 |
+
"content": "Response:"
|
| 1532 |
+
},
|
| 1533 |
+
{
|
| 1534 |
+
"type": "text",
|
| 1535 |
+
"bbox": [
|
| 1536 |
+
0.164,
|
| 1537 |
+
0.483,
|
| 1538 |
+
0.816,
|
| 1539 |
+
0.557
|
| 1540 |
+
],
|
| 1541 |
+
"angle": 0,
|
| 1542 |
+
"content": "A generative adversarial network (GAN) is a type of machine learning algorithm that combines a generative model and a discriminative model in a single framework. The generative model creates data points, while the discriminative model decides whether the generated data points are real or fake. The goal of the generative model is to fool the discriminative model, which encourages the generative model to create data points that are hard to distinguish from the real data. By studying the behavior of the two models, GANs can be used for tasks such as image generation, natural language processing, and more."
|
| 1543 |
+
},
|
| 1544 |
+
{
|
| 1545 |
+
"type": "title",
|
| 1546 |
+
"bbox": [
|
| 1547 |
+
0.165,
|
| 1548 |
+
0.57,
|
| 1549 |
+
0.337,
|
| 1550 |
+
0.581
|
| 1551 |
+
],
|
| 1552 |
+
"angle": 0,
|
| 1553 |
+
"content": "Revised response (Claude-2):"
|
| 1554 |
+
},
|
| 1555 |
+
{
|
| 1556 |
+
"type": "text",
|
| 1557 |
+
"bbox": [
|
| 1558 |
+
0.164,
|
| 1559 |
+
0.582,
|
| 1560 |
+
0.831,
|
| 1561 |
+
0.669
|
| 1562 |
+
],
|
| 1563 |
+
"angle": 0,
|
| 1564 |
+
"content": "A generative adversarial network (GAN) is a type of machine learning algorithm that combines a generative model and a discriminative model in a single framework. The generative model creates synthetic data points, while the discriminative model decides whether the generated data points are real or artificially generated. The goal of the generative model is to generate data that fools the discriminative model into thinking it is real, which encourages the generative model to create more realistic synthetic data that is hard to distinguish from the real data. By studying the interaction between the two models as they compete, GANs can be used for tasks such as generating realistic images, improving natural language processing systems, and more. The GAN framework allows for generative models to create higher quality synthesized data."
|
| 1565 |
+
},
|
| 1566 |
+
{
|
| 1567 |
+
"type": "text",
|
| 1568 |
+
"bbox": [
|
| 1569 |
+
0.182,
|
| 1570 |
+
0.687,
|
| 1571 |
+
0.813,
|
| 1572 |
+
0.701
|
| 1573 |
+
],
|
| 1574 |
+
"angle": 0,
|
| 1575 |
+
"content": "Figure 3: Example of fine-grained annotation via minimal editing: edit words may cause safety issues."
|
| 1576 |
+
},
|
| 1577 |
+
{
|
| 1578 |
+
"type": "page_number",
|
| 1579 |
+
"bbox": [
|
| 1580 |
+
0.486,
|
| 1581 |
+
0.929,
|
| 1582 |
+
0.516,
|
| 1583 |
+
0.941
|
| 1584 |
+
],
|
| 1585 |
+
"angle": 0,
|
| 1586 |
+
"content": "679"
|
| 1587 |
+
}
|
| 1588 |
+
],
|
| 1589 |
+
[
|
| 1590 |
+
{
|
| 1591 |
+
"type": "title",
|
| 1592 |
+
"bbox": [
|
| 1593 |
+
0.166,
|
| 1594 |
+
0.179,
|
| 1595 |
+
0.221,
|
| 1596 |
+
0.187
|
| 1597 |
+
],
|
| 1598 |
+
"angle": 0,
|
| 1599 |
+
"content": "Question"
|
| 1600 |
+
},
|
| 1601 |
+
{
|
| 1602 |
+
"type": "text",
|
| 1603 |
+
"bbox": [
|
| 1604 |
+
0.166,
|
| 1605 |
+
0.19,
|
| 1606 |
+
0.422,
|
| 1607 |
+
0.201
|
| 1608 |
+
],
|
| 1609 |
+
"angle": 0,
|
| 1610 |
+
"content": "Explain how machine learning algorithms work."
|
| 1611 |
+
},
|
| 1612 |
+
{
|
| 1613 |
+
"type": "title",
|
| 1614 |
+
"bbox": [
|
| 1615 |
+
0.166,
|
| 1616 |
+
0.215,
|
| 1617 |
+
0.227,
|
| 1618 |
+
0.225
|
| 1619 |
+
],
|
| 1620 |
+
"angle": 0,
|
| 1621 |
+
"content": "Response"
|
| 1622 |
+
},
|
| 1623 |
+
{
|
| 1624 |
+
"type": "text",
|
| 1625 |
+
"bbox": [
|
| 1626 |
+
0.164,
|
| 1627 |
+
0.227,
|
| 1628 |
+
0.832,
|
| 1629 |
+
0.289
|
| 1630 |
+
],
|
| 1631 |
+
"angle": 0,
|
| 1632 |
+
"content": "Machine learning algorithms are a type of algorithms that use statistical techniques to learn and improve performance on a specific task. They learn from sample data and identify patterns in data to make data-driven decisions. Machine learning algorithms can be divided into two categories: supervised learning and unsupervised learning. In supervised learning, the algorithm is trained on labeled data and is able to make inferences based on the training data. In unsupervised learning, the algorithm is trained on unlabeled data and is able to identify patterns in the data."
|
| 1633 |
+
},
|
| 1634 |
+
{
|
| 1635 |
+
"type": "title",
|
| 1636 |
+
"bbox": [
|
| 1637 |
+
0.166,
|
| 1638 |
+
0.302,
|
| 1639 |
+
0.337,
|
| 1640 |
+
0.312
|
| 1641 |
+
],
|
| 1642 |
+
"angle": 0,
|
| 1643 |
+
"content": "Revised response (Cladue-2):"
|
| 1644 |
+
},
|
| 1645 |
+
{
|
| 1646 |
+
"type": "text",
|
| 1647 |
+
"bbox": [
|
| 1648 |
+
0.164,
|
| 1649 |
+
0.314,
|
| 1650 |
+
0.83,
|
| 1651 |
+
0.412
|
| 1652 |
+
],
|
| 1653 |
+
"angle": 0,
|
| 1654 |
+
"content": "Machine learning algorithms are a type of algorithms that use statistical techniques to learn and improve performance on a specific task over time. They learn from sample data and identify patterns in data to make data-driven decisions without being explicitly programmed for the task. Machine learning algorithms can be divided into two main categories: supervised learning and unsupervised learning. In supervised learning, the algorithm is trained on labeled data, where the data is already tagged with the right answers. It is then able to make inferences on new unlabeled data based on its training. In unsupervised learning, the algorithm is trained on unlabeled data and is able to identify patterns and relationships in the data on its own, without any human guidance. The key aspect is that machine learning algorithms learn from experience and improve their performance independently."
|
| 1655 |
+
},
|
| 1656 |
+
{
|
| 1657 |
+
"type": "text",
|
| 1658 |
+
"bbox": [
|
| 1659 |
+
0.151,
|
| 1660 |
+
0.429,
|
| 1661 |
+
0.845,
|
| 1662 |
+
0.442
|
| 1663 |
+
],
|
| 1664 |
+
"angle": 0,
|
| 1665 |
+
"content": "Figure 4: Example of fine-grained annotation via minimal editing: provide more explanation on academic words."
|
| 1666 |
+
},
|
| 1667 |
+
{
|
| 1668 |
+
"type": "title",
|
| 1669 |
+
"bbox": [
|
| 1670 |
+
0.246,
|
| 1671 |
+
0.643,
|
| 1672 |
+
0.305,
|
| 1673 |
+
0.653
|
| 1674 |
+
],
|
| 1675 |
+
"angle": 0,
|
| 1676 |
+
"content": "Question"
|
| 1677 |
+
},
|
| 1678 |
+
{
|
| 1679 |
+
"type": "text",
|
| 1680 |
+
"bbox": [
|
| 1681 |
+
0.246,
|
| 1682 |
+
0.657,
|
| 1683 |
+
0.721,
|
| 1684 |
+
0.668
|
| 1685 |
+
],
|
| 1686 |
+
"angle": 0,
|
| 1687 |
+
"content": "Produce an alternate version of the following quote using a different literary device"
|
| 1688 |
+
},
|
| 1689 |
+
{
|
| 1690 |
+
"type": "title",
|
| 1691 |
+
"bbox": [
|
| 1692 |
+
0.246,
|
| 1693 |
+
0.683,
|
| 1694 |
+
0.282,
|
| 1695 |
+
0.693
|
| 1696 |
+
],
|
| 1697 |
+
"angle": 0,
|
| 1698 |
+
"content": "Input"
|
| 1699 |
+
},
|
| 1700 |
+
{
|
| 1701 |
+
"type": "text",
|
| 1702 |
+
"bbox": [
|
| 1703 |
+
0.246,
|
| 1704 |
+
0.696,
|
| 1705 |
+
0.592,
|
| 1706 |
+
0.707
|
| 1707 |
+
],
|
| 1708 |
+
"angle": 0,
|
| 1709 |
+
"content": "Better to have loved and lost, than never to have loved at all."
|
| 1710 |
+
},
|
| 1711 |
+
{
|
| 1712 |
+
"type": "title",
|
| 1713 |
+
"bbox": [
|
| 1714 |
+
0.246,
|
| 1715 |
+
0.722,
|
| 1716 |
+
0.31,
|
| 1717 |
+
0.733
|
| 1718 |
+
],
|
| 1719 |
+
"angle": 0,
|
| 1720 |
+
"content": "Response"
|
| 1721 |
+
},
|
| 1722 |
+
{
|
| 1723 |
+
"type": "text",
|
| 1724 |
+
"bbox": [
|
| 1725 |
+
0.246,
|
| 1726 |
+
0.735,
|
| 1727 |
+
0.703,
|
| 1728 |
+
0.746
|
| 1729 |
+
],
|
| 1730 |
+
"angle": 0,
|
| 1731 |
+
"content": "It is better to have had a flame and lost it, than never to have been burned at all."
|
| 1732 |
+
},
|
| 1733 |
+
{
|
| 1734 |
+
"type": "title",
|
| 1735 |
+
"bbox": [
|
| 1736 |
+
0.246,
|
| 1737 |
+
0.761,
|
| 1738 |
+
0.427,
|
| 1739 |
+
0.772
|
| 1740 |
+
],
|
| 1741 |
+
"angle": 0,
|
| 1742 |
+
"content": "Revised response (Claude-2):"
|
| 1743 |
+
},
|
| 1744 |
+
{
|
| 1745 |
+
"type": "text",
|
| 1746 |
+
"bbox": [
|
| 1747 |
+
0.246,
|
| 1748 |
+
0.774,
|
| 1749 |
+
0.751,
|
| 1750 |
+
0.786
|
| 1751 |
+
],
|
| 1752 |
+
"angle": 0,
|
| 1753 |
+
"content": "It is better to have had a flame and been singed, than never to have felt its warmth at all."
|
| 1754 |
+
},
|
| 1755 |
+
{
|
| 1756 |
+
"type": "text",
|
| 1757 |
+
"bbox": [
|
| 1758 |
+
0.114,
|
| 1759 |
+
0.808,
|
| 1760 |
+
0.881,
|
| 1761 |
+
0.822
|
| 1762 |
+
],
|
| 1763 |
+
"angle": 0,
|
| 1764 |
+
"content": "Figure 5: Example of fine-grained annotation via minimal editing: change the literary device that follows the instruction better."
|
| 1765 |
+
},
|
| 1766 |
+
{
|
| 1767 |
+
"type": "page_number",
|
| 1768 |
+
"bbox": [
|
| 1769 |
+
0.486,
|
| 1770 |
+
0.929,
|
| 1771 |
+
0.516,
|
| 1772 |
+
0.94
|
| 1773 |
+
],
|
| 1774 |
+
"angle": 0,
|
| 1775 |
+
"content": "680"
|
| 1776 |
+
}
|
| 1777 |
+
]
|
| 1778 |
+
]
|
2024/Aligning Large Language Models via Fine-grained Supervision/d2093850-e28f-43df-a51c-a0b07b648dd6_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:c1ea27b9f7c48f9c53e0a6bc7914a1eabb27db614609524671eadd6ca232a5ba
|
| 3 |
+
size 2675861
|
2024/Aligning Large Language Models via Fine-grained Supervision/full.md
ADDED
|
@@ -0,0 +1,271 @@
|
| 1 |
+
# Aligning Large Language Models via Fine-grained Supervision
|
| 2 |
+
|
| 3 |
+
Dehong Xu $^{1*}$ , Liang Qiu $^{2*}$ , Minseok Kim $^{2}$ , Faisal Ladhak $^{2}$ , Jaeyoung Do $^{3}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ Department of Statistics, UCLA $^{2}$ Amazon
|
| 6 |
+
$^{3}$ Department of Electrical and Computer Engineering, Seoul National University Correspondence: xudehong1996@ucla.edu, liangqxx@amazon.com
|
| 7 |
+
|
| 8 |
+
# Abstract
|
| 9 |
+
|
| 10 |
+
Pre-trained large-scale language models (LLMs) excel at producing coherent articles, yet their outputs may be untruthful, toxic, or fail to align with user expectations. Current approaches focus on using reinforcement learning with human feedback (RLHF) to improve model alignment, which works by transforming coarse human preferences of LLM outputs into a feedback signal that guides the model learning process. However, because this approach operates on sequence-level feedback, it lacks the precision to identify the exact parts of the output affecting user preferences. To address this gap, we propose a method to enhance LLM alignment through fine-grained token-level supervision. Specifically, we ask annotators to minimally edit less preferred responses within the standard reward modeling dataset to make them more favorable, ensuring changes are made only where necessary while retaining most of the original content. The refined dataset is used to train a token-level reward model, which is then used for training our fine-grained Proximal Policy Optimization (PPO) model. Our experiment results demonstrate that this approach can achieve up to an absolute improvement of $5.1\%$ in LLM performance, in terms of win rate against the reference model, compared with the traditional PPO model.
|
| 11 |
+
|
| 12 |
+
# 1 Introduction
|
| 13 |
+
|
| 14 |
+
One key objective in advancing large language models (LLMs) is to ensure safe, beneficial human interaction. However, current pre-trained models, mostly trained on web and book texts, often generate biased or toxic text, misaligning with human intentions. To address this issue, numerous studies (Ouyang et al., 2022; Rafailov et al., 2023; Bai et al., 2022b,a; Yuan et al., 2023; Touvron
|
| 15 |
+
|
| 16 |
+
et al., 2023; Ramamurthy et al., 2022) have integrated human feedback into the training process. A significant advancement is reinforcement learning from human feedback (RLHF) (Ouyang et al., 2022), which usually consists of two phases: First, a reward model (RM) is trained from preference data, which comprises various responses alongside their human-assigned preference scores for a given prompt. Then, this reward model is applied to optimize a final model using Proximal Policy Optimization (PPO) (Schulman et al., 2017).
|
| 17 |
+
|
| 18 |
+
Recent works (Wu et al., 2023; Rafailov et al., 2023; Fernandes et al., 2023; Guo et al., 2023; Wang et al., 2024) discovered limitations of the current RM, specifically their misalignment with human values. This misalignment stems from two main issues: $(i)$ the presence of incorrect and ambiguous preference pairs in the human-labeled datasets; $(ii)$ the limited insight inherent in sequence-level feedback. Specifically, from a data collection standpoint, the task of comparing the overall quality of model outputs is challenging for human annotators when outputs exhibit both desired and undesired behaviors in different parts. Moreover from the RM perspective, the reliance on preference-based data labeling leads to sparse training signals. This sparsity discourages the model's ability to distinguish finer details between responses and further limits the capacity for reward optimization.
|
| 19 |
+
|
| 20 |
+
To tackle this challenge, we propose the following two-fold contributions as illustrated in Figure 1:
|
| 21 |
+
|
| 22 |
+
- We introduce a new data collection approach that asks annotators to edit responses from existing RM datasets to be more preferable. By comparing the original and edited responses, we obtain detailed token-level insights that are essential for training our fine-tuned reward model.
|
| 23 |
+
- We propose a new token-level reward modeling approach that provides reward signals at the token level. Different from coarse-grained
|
| 24 |
+
|
| 25 |
+
# Phase 1: Data collection
|
| 26 |
+
|
| 27 |
+
Prompt: What are the top three most abundant elements found in the Earth's crust?
|
| 28 |
+
|
| 29 |
+
# Model response A:
|
| 30 |
+
|
| 31 |
+
The three most abundant elements are oxygen, silicon, and iron. Oxygen combines with other elements to form oxides and silicon is the most common mineral ...
|
| 32 |
+
|
| 33 |
+
# Model response B:
|
| 34 |
+
|
| 35 |
+
The Earth's crust is rich in elements, with oxygen, silicon, and calcium. Oxygen makes up about $46.6\%$ of the Earth's crust ...
|
| 36 |
+
|
| 37 |
+
# Human feedback:
|
| 38 |
+
|
| 39 |
+

|
| 40 |
+
|
| 41 |
+

|
| 42 |
+
|
| 43 |
+
# Phase 2: Reward modeling
|
| 44 |
+
|
| 45 |
+
New prompt: What is the smallest ocean in the world?
|
| 46 |
+
|
| 47 |
+

|
| 48 |
+
Sequence-level RM
|
| 49 |
+
Figure 1: The comparison between sequence-level reward modeling (Left) and our method of fine-grained reward modeling via minimal editing (Right). Our approach diverges from sequence-level reward modeling in two key aspects: (1) Data Collection, where we ask a human or LLM to edit the model response; and (2) Reward Modeling, which enables our model to assign rewards to individual tokens, as opposed to assessing the entire sequence collectively.
|
| 50 |
+
|
| 51 |
+
The Arctic Ocean is the smallest ocean in the world. It locates in the northmost part of the Earth and covers an area of about 12.5 million square kilometers.
|
| 52 |
+
|
| 53 |
+
Sequence reward: -0.2
|
| 54 |
+
|
| 55 |
+
# Model response:
|
| 56 |
+
|
| 57 |
+
The three most abundant elements are oxygen, silicon, and iron. Oxygen combines with other elements to form oxides and silicon is the most common mineral group to form ricks like quartz. Iron is vital for forming many minerals and rocks, contributing to the Earth's magnetic field and core composition.
|
| 58 |
+
|
| 59 |
+
# Revised response via editing:
|
| 60 |
+
|
| 61 |
+
The three most abundant elements are oxygen, silicon, and aluminum. Oxygen combines with other elements to form oxides and silicon is the most common mineral group to form ricks like quartz. Aluminum is vital for forming many minerals and is significant in the crust's overall composition, found in clays and feldspars, contributing to the crust's structure and stability.
|
| 62 |
+
|
| 63 |
+
Token reward: +1
|
| 64 |
+
|
| 65 |
+
The Arctic Ocean is the smallest ocean in the world. It locates in the northmost part of the Earth and covers an area of about 12.5 million square kilometers.
|
| 66 |
+
|
| 67 |
+
Token reward: -2
|
| 68 |
+
|
| 69 |
+
sequence-level rewards, our approach offers more granular feedback, pinpointing the specific parts of a response that are effective or need improvement, which hence helps RL optimization.
|
| 70 |
+
|
| 71 |
+
Experiment results using AlpacaFarm (Dubois et al., 2023) environment indicate that our proposed approach improves LLMs' performance up to $5.1\%$ against the baseline in terms of win rate, given the same amount of data for training.
|
| 72 |
+
|
| 73 |
+
# 2 Method
|
| 74 |
+
|
| 75 |
+
In this section, we introduce our approach to fine-grained data collection through editing and token-level reward modeling.
|
| 76 |
+
|
| 77 |
+
# 2.1 Fine-grained data collection via minimal editing
|
| 78 |
+
|
| 79 |
+
The conventional RLHF pipeline, as outlined in prior works (Ouyang et al., 2022; Dubois et al., 2023), involves three key stages: supervised finetuning (SFT), reward modeling (RM), and proximal policy optimization (PPO). In the RM phase, the standard practice entails collecting a dataset of human evaluations comparing two or more model outputs in response to a series of prompts. The dataset is represented as $\mathcal{D} = \{x^{(i)},y_w^{(i)},y_l^{(i)}\}_{i = 1}^N$ where $x$ denotes a prompt and $(y_{w},y_{l})$ indicates the preferred and less preferred responses, respectively.
|
| 80 |
+
|
| 81 |
+
Utilizing such a dataset, earlier RLHF research focused on developing a reward model $R_{\phi}$ that determines the more favored model output. This holistic reward model associates each input prompt $x$ and its corresponding output $y$ with one scalar value reflecting the output's overall quality.
|
| 82 |
+
|
| 83 |
+
However, as shown in the left panel of Figure 1, annotating a pair of model outputs that are substantially different can be a difficult task for humans, especially when each response exhibits a mix of desirable and undesirable behaviors. To address this issue, we introduce a novel data collection technique aimed at obtaining fine-grained supervision, which offers richer, comparative information beyond simple binary choices. Instead of annotating entire responses, our method involves targeted editing by humans or language models, as depicted in the right panel of Figure 1. The goal is to retain the majority of the original response while making improvements to specific areas in need of enhancement. Specifically, we introduce a response editing process in which we ask humans or prompt LLMs to perform targeted modifications. For fine-grained data collection, our method works for both human annotators and language models, following (Ding et al., 2022; Gilardi et al., 2023; Wang et al., 2022; Chiang and Lee, 2023).
|
| 84 |
+
|
| 85 |
+
In practice, we prompt a proprietary LLM, such as Claude-2 (Bai et al., 2022b), to apply edits to
|
| 86 |
+
|
| 87 |
+
the original output. In the experiment, the original preference pairs $(y_w, y_l)$ were not included and we only utilized $y_l$ from the original dataset for minimal editing. This approach maintains the same amount of data as the baseline methods, ensuring a fair comparison. Details of the prompt used for editing can be found in Appendix A.1, and the examples of fine-grained annotation with minimal editing are shown in Appendix A.2. Our method is based on the assumption that the edits inherently improve a response, making changes only when they enhance alignment with human values. The approach enables the refinement of responses by providing clear insights into the specific areas that require improvement.
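Schematically, the fine-grained dataset is built by pairing each original less preferred response with its minimally edited version. The sketch below is illustrative only; `edit_fn` stands in for the Claude-2 editing call described in Appendix A.1 and is a hypothetical callable, not part of the released code.

```python
def build_fine_grained_pairs(rm_dataset, edit_fn):
    """Turn {prompt, y_l} records into (edited, original) preference pairs.

    rm_dataset : iterable of dicts with keys "prompt" and "y_l".
    edit_fn    : callable(prompt, response) -> minimally edited response.
    """
    pairs = []
    for record in rm_dataset:
        original = record["y_l"]
        edited = edit_fn(record["prompt"], original)  # preferred by assumption
        pairs.append({"prompt": record["prompt"], "chosen": edited, "rejected": original})
    return pairs
```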
|
| 88 |
+
|
| 89 |
+
# 2.2 Token-level reward modeling
|
| 90 |
+
|
| 91 |
+
In this section, we will first introduce the RL environment and then define our token-level reward modeling scheme.
|
| 92 |
+
|
| 93 |
+
Language generation can be defined as a Markov Decision Process (MDP) $\langle S, \mathcal{A}, \mathcal{R}, \mathcal{P}, \gamma \rangle$ . $S$ refers to the state space and we define the start state $s_1$ as the input prompts $\{x\}$ . An action at t-step $a_t$ is a generated token. The transition function of the environment is denoted as $\mathcal{P}: S \times \mathcal{A} \to S$ , $s_t = \{x, a_1, \dots, a_{t-1}\}$ . A response $y$ of length $T$ is then $y = \{a_1, \dots, a_T\}$ . In our token-level reward scheme, a reward is assigned to each generated token $a_t$ by $\mathcal{R}: S \times \mathcal{A} \to \mathbb{R}$ , where at each time step $t$ there is a learned reward function $r_t = r_\phi(s_t, a_t)$ . Therefore, for each response, we have a trajectory $\tau = \{s_1, a_1, r_1, \dots, s_t, a_t, r_t, \dots, s_T, a_T, r_T\}$ .
|
| 94 |
+
|
| 95 |
+
We define the reward of the whole trajectory as the average of rewards assigned to each token:
|
| 96 |
+
|
| 97 |
+
$$
|
| 98 |
+
R (\tau) = \frac {1}{T} \sum_ {t = 1} ^ {T} r _ {t}. \tag {1}
|
| 99 |
+
$$
|
| 100 |
+
|
| 101 |
+
Following the Bradley-Terry (BT) model (Bradley and Terry, 1952) for preference modeling, we formulate the distribution of human preference for responses as below:
|
| 102 |
+
|
| 103 |
+
$$
|
| 104 |
+
\begin{array}{l} p \left(\tau^ {i} \succ \tau^ {j}\right) = \frac {\exp \left(R \left(\tau^ {i}\right)\right)}{\exp \left(R \left(\tau^ {i}\right)\right) + \exp \left(R \left(\tau^ {j}\right)\right)} \tag {2} \\ = \sigma (R (\tau^ {i}) - R (\tau^ {j})), \\ \end{array}
|
| 105 |
+
$$
|
| 106 |
+
|
| 107 |
+
where $\tau^i$ and $\tau^j$ represent two different responses generated from the same prompt. Under the setting of our fine-grained supervision dataset, we assume $\tau^i$ only makes edits on $\tau^j$ while maintaining most
|
| 108 |
+
|
| 109 |
+
parts unchanged. We define $U_0 = \{t | a_t^i = a_t^j\}$ and $U_1 = \{t | a_t^i \neq a_t^j\}$ to represent the unchanged and changed parts.
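For illustration, the quantities in Equations 1 and 2 and the index set $U_1$ can be computed as in the following minimal Python sketch (ours, not the authors' code); it assumes the edited and original responses are already tokenized and position-aligned:

```python
import math

def trajectory_reward(token_rewards):
    """Equation 1: R(tau) is the average of the per-token rewards r_t."""
    return sum(token_rewards) / len(token_rewards)

def preference_probability(token_rewards_i, token_rewards_j):
    """Equation 2: p(tau^i preferred over tau^j) = sigma(R(tau^i) - R(tau^j))."""
    diff = trajectory_reward(token_rewards_i) - trajectory_reward(token_rewards_j)
    return 1.0 / (1.0 + math.exp(-diff))

def changed_positions(tokens_i, tokens_j):
    """U_1: positions where the edited response differs from the original."""
    return [t for t, (a, b) in enumerate(zip(tokens_i, tokens_j)) if a != b]
```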
|
| 110 |
+
|
| 111 |
+
Regarding the reward model as a binary classifier, we use negative log-likelihood as the loss function. By plugging in Equation 1, we have:
|
| 112 |
+
|
| 113 |
+
$$
|
| 114 |
+
\begin{aligned} \mathcal{L} &= - \mathbb{E}_{(\tau^{i}, \tau^{j}) \sim \mathcal{D}} \left[ \log \sigma \big( R(\tau^{i}) - R(\tau^{j}) \big) \right] \\ &= - \mathbb{E}_{(\tau^{i}, \tau^{j}) \sim \mathcal{D}} \Big[ \log \sigma \Big( \Big( \frac{1}{T^{i}} - \frac{1}{T^{j}} \Big) \sum_{t \in U_{0}} r_{t} + \frac{1}{T^{i}} \sum_{t \in U_{1}} r_{t}^{i} - \frac{1}{T^{j}} \sum_{t \in U_{1}} r_{t}^{j} \Big) \Big], \end{aligned} \tag{3}
|
| 115 |
+
$$
|
| 116 |
+
|
| 117 |
+
Ideally, we aim for the unchanged part to maintain a consistent reward. Under this assumption, and if the two responses are of equal length, the first term of the loss function can be removed:
|
| 118 |
+
|
| 119 |
+
$$
|
| 120 |
+
\mathcal{L} \approx - \mathbb{E}_{(\tau^{i}, \tau^{j}) \sim \mathcal{D}} \left[ \log \sigma \Big( \frac{1}{T^{i}} \sum_{t \in U_{1}} r_{t}^{i} - \frac{1}{T^{j}} \sum_{t \in U_{1}} r_{t}^{j} \Big) \right] \tag{4}
|
| 121 |
+
$$
|
| 122 |
+
|
| 123 |
+
For the edited part, the loss function is thus designed to maximize the reward for the preferred response and minimize it for the less favored one.
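A minimal sketch of the approximate loss in Equation 4, restricted to the edited positions under the position-aligned assumption above (function and variable names are ours, not from the paper's codebase):

```python
import torch

def edited_span_loss(r_i, r_j, changed, T_i, T_j):
    """Equation 4: negative log-sigmoid of the reward gap on edited tokens.

    r_i, r_j : 1-D tensors of per-token rewards for the edited (preferred)
               and original (less preferred) responses.
    changed  : list of token positions in U_1 (the edited span).
    T_i, T_j : response lengths used for the 1/T normalisation.
    """
    idx = torch.tensor(changed, dtype=torch.long)
    gap = r_i[idx].sum() / T_i - r_j[idx].sum() / T_j
    return -torch.nn.functional.logsigmoid(gap)
```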
|
| 124 |
+
|
| 125 |
+
With a trained token-level reward model, we can integrate it into the Proximal Policy Optimization (PPO) (Schulman et al., 2017) algorithm. In the traditional PPO-RLHF method, each token in the sequence is assigned a reward of the form $[-KL_1, -KL_2, \dots, R - KL_n]$ , where $KL_i$ denotes the Kullback-Leibler divergence (Kullback and Leibler, 1951) for the generated token sequence up to that point, and $R$ represents the sequence-level reward from the reward model. Generalized Advantage Estimation (GAE) (Schulman et al., 2015) is then employed to calculate the advantage at the token level.
|
| 126 |
+
|
| 127 |
+
In contrast, our approach assigns a reward $R_{i}$ directly from the token-level reward model to each token in the sequence, resulting in a reward vector of $[R_1,R_2,\dots,R_n]$ . This approach enhances the granularity of feedback at each step of the sequence generation process, without changing the underlying GAE and policy update procedure. Consequently, the computational cost remains comparable to the standard RLHF approach.
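The difference between the two reward layouts can be made explicit with a short sketch (hypothetical helper names; the KL terms and token rewards are placeholders for values produced elsewhere in the pipeline):

```python
def standard_rlhf_rewards(kl_per_token, sequence_reward):
    """Standard PPO-RLHF: -KL at every step, sequence reward added at the final step."""
    rewards = [-kl for kl in kl_per_token]
    rewards[-1] += sequence_reward
    return rewards

def token_level_rewards(token_rewards_from_rm):
    """Our scheme: each token keeps its own reward R_i from the token-level RM."""
    return list(token_rewards_from_rm)
```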
|
| 128 |
+
|
| 129 |
+
# 3 Experiments
|
| 130 |
+
|
| 131 |
+
In this section, we demonstrate our experimental setup and empirical results in detail.
|
| 132 |
+
|
| 133 |
+
<table><tr><td>Model</td><td>Win rate (%)</td></tr><tr><td>Fine-grained Token-level PPO</td><td>51.6 ± 1.8</td></tr><tr><td>Fine-grained PPO</td><td>51.2 ± 1.8</td></tr><tr><td>Davinci003 (Brown et al., 2020)</td><td>50.0</td></tr><tr><td>PPO-RLHF (Ouyang et al., 2022)</td><td>46.5 ± 1.8</td></tr></table>
|
| 134 |
+
|
| 135 |
+
Table 1: Evaluation results by Claude. Davinci003 is the reference model. All results of other models are from (Dubois et al., 2023).
|
| 136 |
+
|
| 137 |
+
# 3.1 Experimental setup
|
| 138 |
+
|
| 139 |
+
In constructing our dataset, we follow the framework established by AlpacaFarm (Dubois et al., 2023), which offers a simulation environment that includes data splits for SFT, RM, PPO, and evaluation processes. Building on this, we develop our refined RM dataset using the fine-grained approach, where we employ Claude-2 (Bai et al., 2022b) to perform targeted editing. Edits are generated on the less preferred responses from the original pairwise data, ensuring lightweight yet effective modifications.
|
| 140 |
+
|
| 141 |
+
We evaluate our method by finetuning the pretrained $LLaMA-7B$ (Touvron et al., 2023) model. To assess the quality of our model's generation compared to baseline models, we employ a win-rate measurement, where the model $p_{\theta}$ is evaluated against a reference model $p_{\mathrm{ref}}$ . This method involves pairwise comparisons to estimate how often $p_{\theta}$ 's outputs are preferred over $p_{\mathrm{ref}}$ 's for given instructions. Both our model and the baselines are evaluated against the same reference model, Davinci003, aligning with AlpacaFarm (Dubois et al., 2023). To assess the win rate, we employ Claude as the judge, following the simulated approach in (Zheng et al., 2023).
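As a small illustration of the win-rate metric, the sketch below computes the fraction of pairwise comparisons won against the reference model; it is a generic example, not tied to the AlpacaFarm tooling.

```python
def win_rate(judge_prefers_model):
    """Win rate in percent with a simple binomial standard error.

    judge_prefers_model: list of booleans, True when the judge prefers the
    model's output over the reference output for a given instruction.
    """
    n = len(judge_prefers_model)
    p = sum(judge_prefers_model) / n
    se = (p * (1.0 - p) / n) ** 0.5
    return 100.0 * p, 100.0 * se

rate, err = win_rate([True, False, True, True])  # toy judgments
```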
|
| 142 |
+
|
| 143 |
+
To evaluate the effectiveness of our data annotation approach and token-level reward model, we train two models: (i) Fine-grained PPO that only uses our fine-grained RM dataset with editing while still trained with a sequence-level reward, and (ii) Fine-grained Token-level PPO that incorporates both the fine-grained RM dataset and token-level reward modeling, and hence applies token-level reward to PPO.
|
| 144 |
+
|
| 145 |
+
# 3.2 Experiment results
|
| 146 |
+
|
| 147 |
+
**Results in human value alignment** Table 1 showcases our methods (highlighted) alongside the baseline PPO-RLHF model, both trained on $LLaMA - 7B$ (Touvron et al., 2023). Results indicate
|
| 148 |
+
|
| 149 |
+
<table><tr><td>Model</td><td>Accuracy (%)</td></tr><tr><td>RM w/ Fine-grained dataset</td><td>85.2 ± 1.8</td></tr><tr><td>RM w/o Fine-grained dataset</td><td>58.2 ± 1.8</td></tr></table>
|
| 150 |
+
|
| 151 |
+
Table 2: Reward model accuracy. Leveraging the fine-grained dataset enhances the reward model's ability to assign correct rewards to responses.
|
| 152 |
+
|
| 153 |
+
<table><tr><td>Model</td><td>Step</td><td>Tr. hours</td></tr><tr><td>RLHF (Ouyang et al., 2022)</td><td>RM</td><td>0.2</td></tr><tr><td>Fine-grained RLHF</td><td>RM</td><td>0.3</td></tr><tr><td>RLHF (Ouyang et al., 2022)</td><td>PPO</td><td>4</td></tr><tr><td>Fine-grained RLHF</td><td>PPO</td><td>2</td></tr></table>
|
| 154 |
+
|
| 155 |
+
Table 3: Training efficiency. Highlighted numbers represent the training hours (Tr. hours) of the fine-grained PPO model trained with token-level rewards.
|
| 156 |
+
|
| 157 |
+
that our novel data collection technique, when integrated with standard PPO training, leads to an absolute performance increase of $4.7\%$ compared to traditional methods (refer to lines 2 vs. 4). This highlights the effectiveness of our fine-grained data collection strategy. Moreover, when trained with the same fine-grained dataset, the token-level reward model (line 1) demonstrates further alignment improvements compared to the PPO alone (line 2), indicating the importance of token-level rewards. Together, these findings affirm that our approach significantly outperforms the traditional PPO-RLHF model.
|
| 158 |
+
|
| 159 |
+
**Reward model analysis** To explain the observed performance increase, we further investigate the effectiveness of the reward model. We test its accuracy in assigning higher rewards to superior responses within the evaluation set. As shown in Table 2, our fine-grained dataset enables the learned reward model to reach an accuracy of approximately $85.2\%$, outperforming the model trained with the original dataset. This result demonstrates that our data collection method enhances the capability of our reward model to identify and appropriately reward better responses.
|
| 160 |
+
|
| 161 |
+
**Training efficiency** Table 3 illustrates the training costs for different models. Note that all the models are trained on 8 NVIDIA A100 GPUs (80G) with the same batch size for both phases. While the training time for the reward modeling phase is comparable between our method and the baseline, our fine-grained reward model significantly boosts the efficiency of RL optimization.
|
| 162 |
+
|
| 163 |
+
It reduces the time required for PPO to converge to its optimal performance by half, due to our more precise and fine-grained reward function. Based on the experiment results, our reward function can provide more accurate and denser training signals, which can help RL algorithms converge faster. This improvement in training efficiency could be important for LLM alignment, especially when the size of the LLM becomes increasingly large.
|
| 164 |
+
|
| 165 |
+
# 4 Limitations
|
| 166 |
+
|
| 167 |
+
Although the empirical results show that our approach achieves better performance in model alignment, we struggle to provide rigorous mathematical proof to conclusively demonstrate the effectiveness of this reward allocation strategy, specifically in Equation 4.
|
| 168 |
+
|
| 169 |
+
# 5 Conclusion
|
| 170 |
+
|
| 171 |
+
In this paper, we introduce a fine-grained RLHF framework that includes a data collection technique alongside a token-level reward model. This approach enables better value alignment by learning a more accurate reward model, facilitating faster convergence for PPO. Our experimental results show performance improvement based on automatic evaluations compared to the baseline method.
|
| 172 |
+
|
| 173 |
+
# Acknowledgments
|
| 174 |
+
|
| 175 |
+
We would like to thank Yi Xu, Puyang Xu, and other members of Amazon, as well as Ying Nian Wu and Minglu Zhao from the University of California, Los Angeles, for their valuable discussions and constructive feedback. Dehong Xu's research for this work was financially supported by Amazon during his internship at Amazon.
|
| 176 |
+
|
| 177 |
+
# References
|
| 178 |
+
|
| 179 |
+
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, et al. 2022a. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862.
|
| 180 |
+
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. 2022b. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073.
|
| 181 |
+
Ralph Allan Bradley and Milton E Terry. 1952. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324-345.
|
| 182 |
+
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877-1901.
|
| 183 |
+
Cheng-Han Chiang and Hung-yi Lee. 2023. Can large language models be an alternative to human evaluations? arXiv preprint arXiv:2305.01937.
|
| 184 |
+
Bosheng Ding, Chengwei Qin, Linlin Liu, Yew Ken Chia, Shafiq Joty, Boyang Li, and Lidong Bing. 2022. Is gpt-3 a good data annotator? arXiv preprint arXiv:2212.10450.
|
| 185 |
+
Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. 2023. Alpacafarm: A simulation framework for methods that learn from human feedback. arXiv preprint arXiv:2305.14387.
|
| 186 |
+
Patrick Fernandes, Aman Madaan, Emmy Liu, Antonio Farinhas, Pedro Henrique Martins, Amanda Bertsch, José GC de Souza, Shuyan Zhou, Tongshuang Wu, Graham Neubig, et al. 2023. Bridging the gap: A survey on integrating (human) feedback for natural language generation. arXiv preprint arXiv:2305.00955.
|
| 187 |
+
Fabrizio Gilardi, Meysam Alizadeh, and Mael Kubli. 2023. Chatgpt outperforms crowd workers for text-annotation tasks. Proceedings of the National Academy of Sciences, 120(30):e2305016120.
|
| 188 |
+
Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, and Ji-Rong Wen. 2023. Beyond imitation: Leveraging fine-grained quality signals for alignment. arXiv preprint arXiv:2311.04072.
|
| 189 |
+
Solomon Kullback and Richard A Leibler. 1951. On information and sufficiency. The annals of mathematical statistics, 22(1):79-86.
|
| 190 |
+
|
| 191 |
+
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730-27744.
|
| 192 |
+
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290.
|
| 193 |
+
Rajkumar Ramamurthy, Prithviraj Ammanabrolu, Kianté Brantley, Jack Hessel, Rafet Sifa, Christian Bauckhage, Hannaneh Hajishirzi, and Yejin Choi. 2022. Is reinforcement learning (not) for natural language processing?: Benchmarks, baselines, and building blocks for natural language policy optimization. arXiv preprint arXiv:2210.01241.
|
| 194 |
+
John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. 2015. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438.
|
| 195 |
+
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.
|
| 196 |
+
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
|
| 197 |
+
Binghai Wang, Rui Zheng, Lu Chen, Yan Liu, Shihan Dou, Caishuang Huang, Wei Shen, Senjie Jin, Enyu Zhou, Chenyu Shi, et al. 2024. Secrets of rlhf in large language models part ii: Reward modeling. arXiv preprint arXiv:2401.06080.
|
| 198 |
+
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022. Self-instruct: Aligning language models with self-generated instructions. arXiv preprint arXiv:2212.10560.
|
| 199 |
+
Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane Suhr, Prithviraj Ammanabrolu, Noah A Smith, Mari Ostendorf, and Hannaneh Hajishirzi. 2023. Fine-grained human feedback gives better rewards for language model training. arXiv preprint arXiv:2306.01693.
|
| 200 |
+
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. 2023. Rrhf: Rank responses to align language models with human feedback without tears. arXiv preprint arXiv:2304.05302.
|
| 201 |
+
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. 2023.
|
| 202 |
+
|
| 203 |
+
Judging llm-as-a-judge with mt-bench and chatbot arena. arXiv preprint arXiv:2306.05685.
|
| 204 |
+
|
| 205 |
+
# A Appendix
|
| 206 |
+
|
| 207 |
+
# A.1 Prompt for Minimal Editing
|
| 208 |
+
|
| 209 |
+
Figure 2 shows the prompt for Claude-2 to generate the fine-grained dataset by minimal editing.
|
| 210 |
+
|
| 211 |
+
# Claude-2 Prompt:
|
| 212 |
+
|
| 213 |
+
Human: Question: <question>
|
| 214 |
+
|
| 215 |
+
Input: <input>
|
| 216 |
+
|
| 217 |
+
Response: <original response>
|
| 218 |
+
|
| 219 |
+
Please make minimal changes to the above response in order to make it a more helpful response to the question. NOTE It is important to only make minimal changes to the response - try to keep most of the response unchanged.
|
| 220 |
+
|
| 221 |
+
Assistant: Here is my attempt to make minimal changes to improve the response:
|
| 222 |
+
|
| 223 |
+
Figure 2: Prompt for Claude
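For reference, the prompt in Figure 2 can be assembled programmatically along the following lines (a hypothetical helper; the placeholders mirror the <question>, <input>, and <original response> fields above):

```python
def build_editing_prompt(question, input_text, response):
    """Fill the minimal-editing prompt template from Figure 2."""
    return (
        f"Human: Question: {question}\n\n"
        f"Input: {input_text}\n\n"
        f"Response: {response}\n\n"
        "Please make minimal changes to the above response in order to make it "
        "a more helpful response to the question. NOTE It is important to only "
        "make minimal changes to the response - try to keep most of the response "
        "unchanged.\n\n"
        "Assistant: Here is my attempt to make minimal changes to improve the response:"
    )
```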
|
| 224 |
+
|
| 225 |
+
# A.2 More examples of minimal editing
|
| 226 |
+
|
| 227 |
+
In this section, we provide more examples of fine-grained annotation via minimal editing. As shown in Figure 3, 4 and 5, the annotators kept most of the response unchanged and only revised the red parts to the yellow parts.
|
| 228 |
+
|
| 229 |
+
Question: Explain what a generative adversarial network (GAN) is.
|
| 230 |
+
|
| 231 |
+
# Response:
|
| 232 |
+
|
| 233 |
+
A generative adversarial network (GAN) is a type of machine learning algorithm that combines a generative model and a discriminative model in a single framework. The generative model creates data points, while the discriminative model decides whether the generated data points are real or fake. The goal of the generative model is to fool the discriminative model, which encourages the generative model to create data points that are hard to distinguish from the real data. By studying the behavior of the two models, GANs can be used for tasks such as image generation, natural language processing, and more.
|
| 234 |
+
|
| 235 |
+
# Revised response (Claude-2):
|
| 236 |
+
|
| 237 |
+
A generative adversarial network (GAN) is a type of machine learning algorithm that combines a generative model and a discriminative model in a single framework. The generative model creates synthetic data points, while the discriminative model decides whether the generated data points are real or artificially generated. The goal of the generative model is to generate data that fools the discriminative model into thinking it is real, which encourages the generative model to create more realistic synthetic data that is hard to distinguish from the real data. By studying the interaction between the two models as they compete, GANs can be used for tasks such as generating realistic images, improving natural language processing systems, and more. The GAN framework allows for generative models to create higher quality synthesized data.
|
| 238 |
+
|
| 239 |
+
Figure 3: Example of fine-grained annotation via minimal editing: edit words may cause safety issues.
|
| 240 |
+
|
| 241 |
+
# Question
|
| 242 |
+
|
| 243 |
+
Explain how machine learning algorithms work.
|
| 244 |
+
|
| 245 |
+
# Response
|
| 246 |
+
|
| 247 |
+
Machine learning algorithms are a type of algorithms that use statistical techniques to learn and improve performance on a specific task. They learn from sample data and identify patterns in data to make data-driven decisions. Machine learning algorithms can be divided into two categories: supervised learning and unsupervised learning. In supervised learning, the algorithm is trained on labeled data and is able to make inferences based on the training data. In unsupervised learning, the algorithm is trained on unlabeled data and is able to identify patterns in the data.
|
| 248 |
+
|
| 249 |
+
# Revised response (Claude-2):
|
| 250 |
+
|
| 251 |
+
Machine learning algorithms are a type of algorithms that use statistical techniques to learn and improve performance on a specific task over time. They learn from sample data and identify patterns in data to make data-driven decisions without being explicitly programmed for the task. Machine learning algorithms can be divided into two main categories: supervised learning and unsupervised learning. In supervised learning, the algorithm is trained on labeled data, where the data is already tagged with the right answers. It is then able to make inferences on new unlabeled data based on its training. In unsupervised learning, the algorithm is trained on unlabeled data and is able to identify patterns and relationships in the data on its own, without any human guidance. The key aspect is that machine learning algorithms learn from experience and improve their performance independently.
|
| 252 |
+
|
| 253 |
+
Figure 4: Example of fine-grained annotation via minimal editing: provide more explanation on academic words.
|
| 254 |
+
|
| 255 |
+
# Question
|
| 256 |
+
|
| 257 |
+
Produce an alternate version of the following quote using a different literary device
|
| 258 |
+
|
| 259 |
+
# Input
|
| 260 |
+
|
| 261 |
+
Better to have loved and lost, than never to have loved at all.
|
| 262 |
+
|
| 263 |
+
# Response
|
| 264 |
+
|
| 265 |
+
It is better to have had a flame and lost it, than never to have been burned at all.
|
| 266 |
+
|
| 267 |
+
# Revised response (Claude-2):
|
| 268 |
+
|
| 269 |
+
It is better to have had a flame and been singed, than never to have felt its warmth at all.
|
| 270 |
+
|
| 271 |
+
Figure 5: Example of fine-grained annotation via minimal editing: change the literary device that follows the instruction better.
|
2024/Aligning Large Language Models via Fine-grained Supervision/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:08fdec5b0fd0caf646ec706d9255150ea8063de590b3f08b5b12a11d2b81773e
|
| 3 |
+
size 110220
|
2024/Aligning Large Language Models via Fine-grained Supervision/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Uni-Dubbing_ Zero-Shot Speech Synthesis from Visual Articulation/fa9f71e0-943b-4156-8d6d-614ac6caf2ba_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Uni-Dubbing_ Zero-Shot Speech Synthesis from Visual Articulation/fa9f71e0-943b-4156-8d6d-614ac6caf2ba_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Uni-Dubbing_ Zero-Shot Speech Synthesis from Visual Articulation/fa9f71e0-943b-4156-8d6d-614ac6caf2ba_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:073088ff1d232265f0d57b3e9772d3f6ba75e3971dad66d43a7e373a0119bd08
|
| 3 |
+
size 3109982
|
2024/Uni-Dubbing_ Zero-Shot Speech Synthesis from Visual Articulation/full.md
ADDED
|
@@ -0,0 +1,399 @@
|
| 1 |
+
# Uni-Dubbing: Zero-Shot Speech Synthesis from Visual Articulation
|
| 2 |
+
|
| 3 |
+
Songju Lei $^{1}$ , Xize Cheng $^{2*}$ , Mengjiao Lyu $^{1}$ , Jianqiao Hu $^{5}$ , Jintao Tan $^{5}$ , Runlin Liu $^{4}$ , Lingyu Xiong $^{5}$ , Tao Jin $^{2}$ , Xiandong Li $^{3*}$ , Zhou Zhao $^{2}$
|
| 4 |
+
|
| 5 |
+
Nanjing University of Aeronautics and Astronautics<sup>1</sup>, Zhejiang University<sup>2</sup>, Huawei Cloud<sup>3</sup>, Nanjing University<sup>4</sup>, South China University of Technology<sup>5</sup>
|
| 6 |
+
|
| 7 |
+
leisongju@foxmail.com,chengxize@zju.edu.cn,lixiandong6@huawei.com
|
| 8 |
+
|
| 9 |
+
# Abstract
|
| 10 |
+
|
| 11 |
+
In the field of speech synthesis, there is a growing emphasis on employing multimodal speech to enhance robustness. A key challenge in this area is the scarcity of datasets that pair audio with corresponding video. We employ a methodology that incorporates modality alignment during the pre-training phase on multimodal datasets, uniquely facilitating Zero-Shot generalization through the process of freezing the video modality feature extraction component and the encoder module within the pretrained weights, thereby enabling effective cross-modal and cross-lingual transfer. We have named this method 'Uni-Dubbing'. Our method finely tunes with both multimodal and single-modality audio data. In multimodal scenarios, it achieves a reduced word error rate (WER) of $31.73\%$ , surpassing the previous best of $33.9\%$ . It also excels in metrics like tone quality and synchronization. With single-modality audio, it achieves a WER of $36.08\%$ , demonstrating adaptability to limited data. Its domain generalization capabilities are proven across various language tasks in video translation and audio generation. Trained on 433 hours of audio data, it surpasses techniques using 200 hours of audio-visual data. The code and demo are available at https://diracer.github.io/unidubbing.
|
| 12 |
+
|
| 13 |
+
# 1 Introduction
|
| 14 |
+
|
| 15 |
+
With the widespread use of short videos and online meetings in daily life and the workplace(Gupta et al., 2023), the barrier of cross-linguistic communication has become an urgent problem, and thus multimodal technologies have attracted much attention(Yemini et al., 2023). Recently, many researchers have conducted corresponding studies in this area, such as lip reading task(Assael et al., 2016; Jin et al., 2023; Li et al., 2023) that transfers video domain to text domain, Lip task(Prajwal
|
| 16 |
+
|
| 17 |
+
et al., 2020; Kim et al., 2021; Michelsanti et al., 2021; Mira et al., 2022b) that transfers video domain to audio domain, and lip translation(Huang et al., 2023) that converts to the target language directly based on lips. In the case of the field of visual tasks, the biggest challenge for researchers is the extreme scarcity of training data. In addition, the relationship between lips and speech is not always a simple one-to-one mapping; for example, the same word may have very different lip shapes for people with different accents(Choi et al., 2023a). Therefore, maintaining accurate intonation poses a significant challenge, and this has led to the emergence of many important research findings.
|
| 18 |
+
|
| 19 |
+
For these reasons, we adopt the strategy of using discrete units as intermediate targets, i.e., transforming audio and video data into discrete units for alignment, which can effectively circumvent the disadvantage of insufficient paired audio and video data. On top of this, we employ the RVQ(Defossez et al., 2022) module thus enabling the method to achieve better timbre preservation, i.e. high fidelity, after Full-Shot training. Furthermore, in order to cope with the lack of data for contemporary visual tasks, we also use mHubert(Polyak et al., 2021) and K-means of re-combining with discrete units, which enables our model to achieve better semantic consistency and reach Zero-Shot capability. As mentioned earlier, the barriers to cross-language communication are equally significant challenges and a lot of good work has emerged, but unfortunately none of the current methods have been able to achieve Zero-Shot cross-language video translation yet. We further explored learning cross-language and cross-modal Lip2Wav mappings from the audio domain, i.e., Zero-Shot trans-speech, based on the Zero-Shot Lip2Wav model, have verified that the method is capable of cross-language migration.
|
| 20 |
+
|
| 21 |
+
In summary, our goals in the current cross-language video-to-speech translation are twofold:
|
| 22 |
+
|
| 23 |
+
1) High quality and low error: the requirement to be able to recognise the gender in a video so as to generate the corresponding tones with minimal error is very challenging. 2) Zero-Shot: the ability of the reasoning process to achieve Zero-Shot is crucial for practicality when considering video translation.
|
| 24 |
+
|
| 25 |
+
Based on these two goals, in this paper, the innovation of this study lies in proposing a framework that requires only cross-linguistic audio speech training, without the need for visual speech training inputs, to achieve direct synthesis of visual speech to cross-linguistic audio speech. This framework can predict the corresponding audio speech output by analyzing an individual's lip movements, and this prediction is not limited to the language system of the input visual speech. Our method utilizes an advanced Zero-Shot learning strategy (Cheng et al., 2023b) that aligns audio and visual phonemes with audio data alone during the training process, thus enabling the prediction of audio outputs in a target language that has not been seen before in seemingly impossible cross-modal scenarios. The main contributions of this paper are:
|
| 26 |
+
|
| 27 |
+
- Our cross-modal Zero-Shot transfer approach for the Lip2Wav task, trained exclusively with target audio, matches top Full-Shot models in WER, sound quality, and synchronization.
|
| 28 |
+
- Our method in the Lip2Wav task on the LRS3 dataset attains state-of-the-art results in WER, ESTOI, LSE-C, and LSE-D, achieving partial timbre preservation to distinguish voice characteristics of unseen speakers.
|
| 29 |
+
- Our cross-lingual audio generation technology creates target language audio from single-language videos, eliminating the need for dual-language video training. This streamlines training and lessens the need for extensive datasets in cross-lingual dubbing, while also reducing noise.
|
| 30 |
+
|
| 31 |
+
# 2 Related Work
|
| 32 |
+
|
| 33 |
+
In our paper, for the cross-language Lip2Wav synthesis task we mainly divide it into two steps: first implementing high-fidelity video-to-speech synthesis, followed by Zero-Shot cross-language video-to-speech translation. A great deal of excellent research work has preceded our study.
|
| 34 |
+
|
| 35 |
+
# 2.1 Video to Speech Synthesis
|
| 36 |
+
|
| 37 |
+
Video speech synthesis techniques(Cooke et al., 2006; Afouras et al., 2018a; ?; Cheng et al., 2023a)
|
| 38 |
+
|
| 39 |
+
that dub silent videos have received a great deal of attention from researchers in the recent past. Prajwal et al. (2020) presented the Lip2Wav, which utilizes a sequence-to-sequence architecture, enabling it to accurately capture contextual information and generate precise audio. Hong et al. (2021) trained a multimodal memory network, VVMemory, to store and recall audio features corresponding to visual inputs so that audio information can be accessed exclusively through visual inputs during inference. Vougioukas et al. (2019) introduced an end-to-end temporal model based on GAN, capable of generating speech that synchronizes seamlessly with silent videos, presenting a convincing and difficult-to-distinguish quality. Additionally, there have been several recent papers based on GANs(Kim et al., 2021; Hong et al., 2022; Mira et al., 2022b). Most recently, a new method based on diffusion, called DiffV2S, has been proposed by Choi et al. (2023a) who introduced a novel speaker embedding extractor guided by visual information and simultaneously developed a diffusion-based video-to-speech synthesis model. Choi et al. (2023b) built upon the Lip2Wav model by incorporating quantized supervised speech representations, namely speech units, for synthesizing intelligible speech from silent videos.
|
| 40 |
+
|
| 41 |
+
However, despite the fact that all the aforementioned related methods have their own merits, the problem of lack of training data for the visual task mentioned in the previous section remains unsolved. With this in mind, we train our model by using discrete units as intermediate comparison targets in the audio and video domains, thus no longer relying on paired audio and video data.
|
| 42 |
+
|
| 43 |
+
# 2.2 Cross-language Translation
|
| 44 |
+
|
| 45 |
+
The task of cross-language translation is also a very challenging and important endeavour that receives a lot of attention (Lavie et al., 1997; Wahlster, 2000; Nakamura et al., 2006; ITU, 2016). Tjandra et al. (2019) introduced a discrete representation of the source language to target speech into the cascaded S2ST system, where this discrete representation is predicted by a separately trained VQVAE and subsequently utilized by the VQVAE decoder to generate the target speech spectrogram. Zhang et al. (2021) proposed the XLVAE model to enhance the discretization and reconstruction capabilities of VQVAE through cross-linguistic speech recognition. Lee et al. (2021) utilizes a separately trained vocoder, which includes a duration predictor,
|
| 46 |
+
|
| 47 |
+

|
| 48 |
+
|
| 49 |
+

|
| 50 |
+
Figure 1: Uni-Dubbing Overview: In the high-fidelity Lip2Wav task, we employed a Full-Shot training approach and improved the generation of discrete units. The discrete units generated by this method capture more fine-grained acoustic information. For the cross-modal and cross-language Zero-Shot tasks, we adopted an approach similar to uHubert (Hsu and Shi, 2022), where no visual data is used during training and fine-tuning. Another distinction from the Full-Shot method is that, in Zero-Shot tasks, we froze the feature extraction and Encoder modules to prevent excessive loss of original visual knowledge during knowledge transfer. During inference, we input only visual data and use the corresponding Vocoder to generate audio through discrete units. The speech generated in the Zero-Shot manner contains only semantic information, while the Full-Shot generated speech not only includes semantic information but also retains some acoustic information.
|
| 51 |
+
|
| 52 |
+

|
| 53 |
+
|
| 54 |
+
to directly predict waveforms from discrete representations. Jia et al. (2019) first introduced a model based on a sequence-to-sequence architecture capable of end-to-end training and inference. To improve translation quality and address over-generation, Jia et al. (2022) presented Translatotron2, which consists of a speech encoder, a language decoder, an acoustic synthesizer, and a single attention module that connects them together. There is also some work that attempts to introduce visual speech to enhance robustness in the translation process (Huang et al., 2023).
|
| 55 |
+
|
| 56 |
+
To the best of our knowledge, paired cross-lingual audio-video datasets are currently very sparse. This scarcity results in only one existing model capable of achieving cross-lingual Lip2Wav translation. Instead, in direct contrast with the methods mentioned above, our innovative discrete-unit-based approach can successfully cross these dataset barriers, thus learning cross-language visual-phoneme mappings with Zero-Shot cross-language lip-synthesis translation capability.
|
| 57 |
+
|
| 58 |
+
# 3 Method
|
| 59 |
+
|
| 60 |
+
# 3.1 Overview
|
| 61 |
+
|
| 62 |
+
The overview of this paper is depicted in Figure 1. Figure 1a) describes the training process for high-fidelity speech synthesis, while Figure 1b) illustrates the training flow for two tasks: cross-modal and cross-language. The main differences between these tasks lie in the modality used during training, the method for generating discrete units, and the treatment of predicted discrete units for synthesizing speech. Additionally, for Zero-Shot training, it is necessary to freeze the encoder to retain the visual knowledge acquired during the pretraining phase.
|
| 63 |
+
|
| 64 |
+
# 3.2 High-Fidelity Lip2Wav
|
| 65 |
+
|
| 66 |
+
While the state-of-the-art ReVISE model (Hsu et al., 2023) achieves leading performance in Lip2Wav synthesis on the LRS3 dataset, it does not preserve the speaker's timbre during speech synthesis. To address this issue, we propose a novel approach that utilizes acoustic tokens derived from
|
| 67 |
+
|
| 68 |
+
the Hifi-Codec (Yang et al., 2023).
|
| 69 |
+
|
| 70 |
+
The Hifi-Codec consists of an audio encoder, a Residual Vector Quantizer (RVQ), and an audio decoder. Consider an audio signal $x$ with a length of $d$ samples and a sampling rate of $sr$, resulting in a total duration of $T = d / sr$.
|
| 71 |
+
|
| 72 |
+
1) Initially, the audio encoder $E$ , comprising multiple convolutional blocks, processes the input audio. This encoder extracts features and outputs a latent representation $z$ . 2) Subsequently, the Residual Vector Quantizer $Q$ employs vector quantization layers to convert $z$ into a discrete representation $z_{Q}$ . In this process, the speech utterance $x$ is encoded as a sequence of acoustic tokens $[a_{1}, a_{2}, \dots, a_{T}]$ , where each token $a_{i}$ is an element of the set $\{0, 1, \dots, K_{2} - 1\}$ , with $1 \leq i \leq T$ . These acoustic tokens are the discrete units that we focus on in our training. 3) The audio decoder $G$ reconstructs the signal $\hat{x}$ from the highly compressed latent representation $z_{Q}$ . This algorithm efficiently quantizes the encoder output by iteratively refining the residual, which helps in preserving important information while reducing redundancy. Further, to address the challenges of temporal synchronicity in Lip2Wav tasks, we have innovated upon the existing AV-Hubert model. We have replaced the AV-Hubert decoder with a new structure.
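The residual quantization step can be sketched as follows; this is a simplified NumPy illustration of generic RVQ encoding, not the actual Hifi-Codec code, and the codebooks and latent frames are assumed to be given.

```python
import numpy as np

def rvq_encode(z, codebooks):
    """Residual vector quantization of latent frames z with a stack of codebooks.

    z         : array of shape (T, D), one latent vector per frame.
    codebooks : list of arrays of shape (K, D).
    Returns token indices of shape (n_codebooks, T); each frame's residual is
    re-quantized by the next codebook, refining the approximation layer by layer.
    """
    residual = z.copy()
    tokens = []
    for codebook in codebooks:
        # nearest codeword for the current residual of every frame
        dists = ((residual[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        idx = dists.argmin(axis=1)
        tokens.append(idx)
        residual = residual - codebook[idx]  # what is left for the next layer
    return np.stack(tokens)
```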
|
| 73 |
+
|
| 74 |
+
Our adaptation involves a unique decoder structure, which includes three transposed convolutional layers. Each layer has a kernel size $(K)$ of 4, a stride $(S)$ of 2, padding $(P)$ of 1, and output padding $(O_p)$ of 1. This configuration is meticulously designed to more accurately align lip movements with the generated speech, thereby enhancing the synchronicity that is crucial for effective Lip2Wav synthesis. The output size $(O)$ of each transposed convolutional layer is calculated using the formula:
|
| 75 |
+
|
| 76 |
+
$$
|
| 77 |
+
O = ((I - 1) \times S + K - 2 \times P) + O _ {p} \quad (1)
|
| 78 |
+
$$
|
| 79 |
+
|
| 80 |
+
where $I$ denotes the input size.
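A quick sanity check of the output-size formula for the three stacked layers ($K=4$, $S=2$, $P=1$, $O_p=1$), written as plain Python; the starting length of 25 frames is only illustrative.

```python
def transposed_conv_output_size(I, K=4, S=2, P=1, O_p=1):
    """Formula (1): O = (I - 1) * S + K - 2 * P + O_p."""
    return (I - 1) * S + K - 2 * P + O_p

size = 25
for _ in range(3):
    size = transposed_conv_output_size(size)  # each layer roughly doubles the length
```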
|
| 81 |
+
|
| 82 |
+
# 3.3 Zero-Shot Lip2Wav Model Adaptation
|
| 83 |
+
|
| 84 |
+
To overcome the challenge of scarce paired audiovisual datasets, we loaded the pre-trained weights of AV-Hubert and focused on fine-tuning with pure audio data. To validate the effectiveness of our approach, we adopted the same Zero-Shot configuration on the LRS3 dataset as uHubert. The AV-Hubert model, pre-trained on paired audiovisual data, achieves multimodal alignment by mapping visual speech and audio speech to the same
|
| 85 |
+
|
| 86 |
+
phoneme space. During the fine-tuning phase with pure audio data, we froze the decoder and only trained the final transposed convolution layer to preserve the multimodal alignment knowledge acquired during pre-training. In the inference process, the model processes silent lip videos, predicting the corresponding speech discrete units solely based on lip movements. This Zero-Shot learning strategy enables the model to effectively synthesize speech from unseen lip movements, enhancing its robustness in diverse scenarios.
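In PyTorch terms, this kind of selective freezing amounts to disabling gradient updates for the frozen modules. The snippet below is a generic sketch of the mechanism; the module names are illustrative, not AV-Hubert's actual attributes.

```python
import torch.nn as nn

def set_trainable(module, trainable):
    """Enable or disable gradient updates for every parameter of a PyTorch module."""
    for p in module.parameters():
        p.requires_grad = trainable

# Toy demonstration (the real model's attribute names differ):
decoder = nn.Linear(8, 8)
final_deconv = nn.ConvTranspose1d(8, 8, 4, 2, 1, 1)  # K=4, S=2, P=1, O_p=1
set_trainable(decoder, False)      # frozen, keeps pre-trained alignment knowledge
set_trainable(final_deconv, True)  # the only part updated during fine-tuning
```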
|
| 87 |
+
|
| 88 |
+
To further validate the effectiveness of our method, we fine-tuned the model using discrete units generated in other languages (e.g., Spanish, French), which were languages not encountered during pretraining. This approach not only enables the model to generate speech from lip movements but also to translate it into different languages. For example, during inference, an English spoken video could be decoded into the audio of another language, simplifying the process of speech synthesis and translation without the need for separate models for each task.
|
| 89 |
+
|
| 90 |
+
In these two tasks, our model does not contain any speaker embeddings and is unable to implicitly acquire visual feature embeddings of the speaker during the fine-tuning phase, eliminating the need to replicate the speaker's acoustic information. Therefore, we used semantic tokens generated by the mHubert and kmeans methods as target units. Compared to acoustic information, semantic information has broader applicability, making the use of semantic tokens more conducive to generalization in cross-modal and cross-language Zero-Shot tasks.
|
| 91 |
+
|
| 92 |
+
# 3.4 Training Objective
|
| 93 |
+
|
| 94 |
+
In this study, the focus is on predicting discrete units, for which the cross-entropy loss function
|
| 95 |
+
|
| 96 |
+
$$
L = \sum_{t} \sum_{j=1}^{C} z_{t}^{j} \log f_{t}^{j} (\tilde{x}_{a}, x_{v})
$$
|
| 99 |
+
|
| 100 |
+
is consistently employed. This formula computes the loss by summing over all frames $t$ and across the $C$ units in the vocabulary. The term $z_{t}^{j}$ denotes the one-hot encoded label of the $j$-th unit in the $t$-th frame, and $f_{t}^{j}(\tilde{x}_{a},x_{v})$ represents the predicted probability of that unit for the same frame, as output by the enhancer.
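In a framework such as PyTorch, this frame-level unit-prediction loss reduces to a standard cross-entropy call (which applies the softmax and the negative log internally); the shapes below are illustrative only.

```python
# Frame-level cross-entropy over the discrete-unit vocabulary.
import torch
import torch.nn.functional as F

T, C = 200, 1000                      # frames, number of discrete units in the vocabulary
logits = torch.randn(T, C)            # f_t(., .) before softmax, one row per frame
targets = torch.randint(0, C, (T,))   # ground-truth unit id z_t for each frame

loss = F.cross_entropy(logits, targets, reduction="sum")   # summed over frames, as in the formula
```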
|
| 101 |
+
|
| 102 |
+
<table><tr><td>Method</td><td>ESTOI ↑</td><td>LSE-C ↑</td><td>LSE-D ↓</td><td>WER ↓</td><td>MOS↑</td></tr><tr><td>VCA-GAN (Kim et al., 2021)</td><td>0.207</td><td>4.54</td><td>9.63</td><td>96.63</td><td>1.5±0.19</td></tr><tr><td>SVTS (Mira et al., 2022a)</td><td>0.244</td><td>7.08</td><td>7.04</td><td>79.83</td><td>1.96±0.24</td></tr><tr><td>Multi-task (Kim et al., 2023)</td><td>0.240</td><td>4.85</td><td>9.15</td><td>66.78</td><td>1.77±0.24</td></tr><tr><td>DiffV2s (Choi et al., 2023a)</td><td>0.284</td><td>7.28</td><td>7.27</td><td>39.2</td><td>4.06±0.21</td></tr><tr><td>ReVISE (Hsu et al., 2023)</td><td>0.285</td><td>7.12</td><td>7.25</td><td>33.9</td><td>4.11±0.04</td></tr><tr><td>Uni-Dubbing (Full-Shot)</td><td>0.294</td><td>7.58</td><td>6.90</td><td>31.73</td><td>4.16±0.06</td></tr><tr><td>Uni-Dubbing (Zero-Shot)</td><td>0.235</td><td>6.70</td><td>7.59</td><td>36.08</td><td>4.08±0.05</td></tr></table>
|
| 103 |
+
|
| 104 |
+
Table 1: The results of various methods on the test set of the LRS3 dataset are shown. The symbol $\uparrow$ indicates that higher values are better, while $\downarrow$ signifies that lower values are preferable.
|
| 105 |
+
|
| 106 |
+
# 4 Experiment
|
| 107 |
+
|
| 108 |
+
# 4.1 Datasets
|
| 109 |
+
|
| 110 |
+
LRS3 Dataset LRS3 (Afouras et al., 2018b) is an extensive and open-source benchmark collection for visual speech recognition research, commonly known as lip-reading. This dataset is the successor to the LRW (Chung and Zisserman, 2017a) and LRS2 (Afouras et al., 2018a) datasets and features a vast array of labeled video content with corresponding textual transcriptions, primarily sourced from TED Talks.
|
| 111 |
+
|
| 112 |
+
LRS3-T Dataset LRS3-T (Huang et al., 2023) is a new audio-visual translation dataset that has been generated from the LRS3 dataset through a cascading process, combining Neural Machine Translation (NMT) and Text-to-Speech (TTS) technologies. This intricate processing sequence culminated in a parallel audio-visual translation dataset comprising 200 hours, encompassing both the original source videos and the translated speech in the target language.
|
| 113 |
+
|
| 114 |
+
MUSAN Dataset MUSAN (Snyder et al., 2015) is a collection of music, speech, and noise recordings suitable for audio processing tasks such as speech activity detection and machine learning applications. It features 60 hours of speech from various sources, over 42 hours of diverse music tracks, and 6 hours of environmental and technical noises. We used it to generate various types of noise which were added to the original audio, in order to test the translation task's resistance to noise interference.
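A common way to mix such noise into clean speech at a chosen SNR is to rescale the noise from the signal-to-noise power ratio; the sketch below assumes mono float waveforms at a shared sampling rate and is not taken from the paper's released code.

```python
# Mix a noise clip into clean speech at a target SNR (in dB).
import numpy as np

def add_noise_at_snr(speech, noise, snr_db):
    if len(noise) < len(speech):                      # tile short noise clips
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    # choose scale so that 10*log10(p_speech / (scale**2 * p_noise)) == snr_db
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + scale * noise
```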
|
| 115 |
+
|
| 116 |
+
# 4.2 Evaluation
|
| 117 |
+
|
| 118 |
+
In our study, we evaluate Lip2Wav and audio-video translation using key metrics. For semantic accuracy, we use WER, and for sound quality, we employ the Extended Short-Time Objective Intelligibility (ESTOI). Synchronization is measured using LSE-D (predicted audio-video temporal distance) and LSE-C (prediction confidence), as per SyncNet (Chung and Zisserman, 2017b). Our method approximates the speaker's voice, thus we use the Mean Opinion Score (MOS) for evaluating timbre. To ensure consistency with other studies, we adopted a scoring system ranging from 1 to 5, with increments of 0.5 points. For each model, we randomly selected 50 samples for evaluation. We recommend listening to our website's audio samples for a practical understanding.
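For the WER metric, the synthesized audio is first transcribed by an ASR system and then compared against the reference text; a minimal sketch with the `jiwer` package (our choice of tool, not necessarily the authors') is shown below, using an example sentence from Table 7.

```python
# Word error rate between a reference transcript and the ASR output of the synthesized audio.
import jiwer

reference = "we were making what was invisible visible"
hypothesis = "we were making what was invisible invisible"   # ASR transcript of generated speech
print(jiwer.wer(reference, hypothesis))   # one substitution over eight words -> 0.125
```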
|
| 121 |
+
|
| 122 |
+
For language translation, we apply the BLEU (Papineni et al., 2002) score to evaluate the accuracy and fluency of speech generation in different languages, comparing machine-generated text to reference texts.
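A corpus-level BLEU computation with `sacrebleu` (one common implementation; the paper does not specify its exact scorer) might look like this, after transcribing the generated target-language audio.

```python
# Corpus BLEU between ASR transcripts of generated audio and reference translations.
import sacrebleu

hypotheses = ["the translated transcript produced by the model"]
references = [["the reference translation of the same utterance"]]   # one reference stream
print(sacrebleu.corpus_bleu(hypotheses, references).score)
```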
|
| 123 |
+
|
| 124 |
+
# 4.3 Results
|
| 125 |
+
|
| 126 |
+
# 4.3.1 High-Fidelity Video-to-Speech Synthesis
|
| 127 |
+
|
| 128 |
+
Unlike other datasets that may concentrate on short phrases or isolated words, LRS3 offers longer sequences of speech, enabling more complex and contextually rich lip-reading tasks. Since most speakers only give a TED talk once, the LRS3 dataset is multi-speaker, with no overlap between the speakers in the test set and those in the training set. Consequently, most methods using fixed ID speaker embeddings are ineffective for the LRS3 dataset without altering its test set. This reflects real-world application needs more accurately, as the models we train should be effective for unseen speakers. This paper focuses on speaker generalization on the original LRS3 dataset, aiming to generate audio that is perceptually credible for speakers it has never encountered before.
|
| 129 |
+
|
| 130 |
+
As shown in Table 1, DiffV2s and ReVISE significantly outperform various previous methods, with both achieving a WER below $40\%$ and superior sound quality as evidenced by the ESTOI metric. Our results clearly surpass all prior work on these two measures, achieving an ESTOI of 0.296 and a WER of $31.96\%$ . This is because acoustic units preserve finer details, making the generated audio easier for automatic speech recognition (ASR) systems to understand. In terms of synchronization, our model also achieved the highest rankings on the LSE-C and LSE-D metrics, surpassing all previous methods. This achievement is primarily attributed to our modifications to the original AV-Hubert decoder: we transformed it from a sequence-to-sequence model to one utilizing transposed convolutions. This change ensures that the ratio between the input and output lengths of the model remains constant, thus maintaining a consistent proportional relationship between the generated audio length and the input video length. If the original AV-Hubert decoder were used, the LSE-C and LSE-D scores would be 4.65 and 9.21, respectively. Although our WER has only increased by $1.17\%$ relative to ReVISE, the additional fine-grained acoustic information plays a crucial role in improving synchronization. This allows our method to outperform ReVISE in terms of synchronization even when using the same transposed convolution decoder.
|
| 133 |
+
|
| 134 |
+
While quantitative metrics are important, they are not the key focus of our task. The primary contribution of our work lies in generating audio that retains partial speaker information without using the identity of the speaker. In contrast, ReVISE produces audio in a single female voice for all outputs, regardless of whether the video features a male speaker. Due to the absence of explicit speaker identity information, our method cannot fully replicate the unique acoustic characteristics of individual speakers. However, because it uses implicit visual embeddings and acoustic discrete units, the system is capable of generating distinct male or female voices, depending on whether the videos feature male or female speakers. While the synthesized voices may not precisely match those of the original speakers, they do preserve certain overarching characteristics, such as gender distinctions and, to some extent, age differences. We believe this aspect is significant: when humans have not seen the speaker, they cannot deduce the exact timbre from the video, but they can infer such general voice characteristics. The voices generated by our model align with this human perception, thus meeting human expectations and requirements. Benefiting from this approach, our MOS evaluation achieved the best score of 4.16.
|
| 137 |
+
|
| 138 |
+
# 4.3.2 Zero-Shot from Audio to Video
|
| 139 |
+
|
| 140 |
+
Table 1 reveals that our method achieves impressive results even when trained solely with audio, without using any video data. The sound quality, measured by the ESTOI, is 0.235. This performance is comparable to the previous three works, ranking just behind DiffV2S and ReVISE. Surprisingly, despite the absence of video data during training, the synchronization of our generated audio is quite good, significantly surpassing the Full-Shot VCA-GAN and Multi-task methods, and comparable to other approaches. Most importantly, our method achieves a WER of $36.08\%$ , which is only slightly inferior to ReVISE's $33.9\%$ and better than all previous Full-Shot methods. These results indicate that our approach effectively utilizes the knowledge embedded in the pre-trained model to achieve outstanding performance, while significantly reducing data collection costs, requiring only pure audio data without corresponding lip-synced video.
|
| 141 |
+
|
| 142 |
+

|
| 143 |
+
Figure 2: The curve graph illustrating the relationship between the kernel size of the last layer of transposed convolution and the corresponding WER. When the kernel size is odd, the stride is set to 1; for even kernel sizes, the stride is 2. Therefore, we have plotted two separate curves for odd and even kernel sizes to analyze the impact of stride.
|
| 144 |
+
|
| 145 |
+
Due to the mHubert audio encoder operating at 50 frames per second and the AV-Hubert video encoder at 25 frames per second, we employed a convolutional layer to align the two. It was imperative to set the stride of this transposed convolution to 2, a fixed requirement. However, the size of the convolutional kernel significantly impacted the final results. To determine the optimal kernel size,
|
| 146 |
+
|
| 147 |
+
we conducted multiple experiments. For comparison, we also tried the alignment method used in AV-Hubert pre-training, which involves downscaling the audio labels' discrete units to 25 frames per second by extracting them at intervals. In this scenario, we set the stride of the transposed convolution to 1 and chose a convolutional kernel of an odd size.
|
| 148 |
+
|
| 149 |
+
As shown in Figure 2, all models using odd-numbered kernel sizes performed worse in terms of WER compared to those using even-numbered kernels. Specifically, smaller even-numbered kernels, such as 2 and 4, significantly reduced accuracy. However, the performance improvement became marginal when the kernel size increased to 8 or larger. Based on this finding, we selected a kernel size of 8, balancing optimally between temporal resolution and computational efficiency, crucial for effective synchronization between audio and video modalities. Additionally, we experimented with the original fully connected (FC) layer. The results indicated that using an FC layer instead of transposed convolutions yielded the worst outcomes, highlighting the effectiveness of transposed convolutions in extracting local information for our task.
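For concreteness, the final up-sampling layer this experiment converges on (even kernel, stride 2) can be written as below; the padding rule that makes the output exactly twice the input length and the channel sizes are our own working assumptions.

```python
# Final up-sampling layer: 25 Hz video features -> 50 Hz unit logits (stride 2, even kernel).
import torch
import torch.nn as nn

def upsample_2x(in_channels, out_dim, kernel_size=8):
    assert kernel_size % 2 == 0
    padding = (kernel_size - 2) // 2         # gives O = 2 * I for stride 2 by Eq. (1) with O_p = 0
    return nn.ConvTranspose1d(in_channels, out_dim, kernel_size, stride=2, padding=padding)

layer = upsample_2x(768, 1000)               # 1000 = number of target discrete units (assumed)
video_feats = torch.randn(1, 768, 25)        # one second of 25 Hz video features
assert layer(video_feats).shape[-1] == 50    # one second of 50 Hz unit predictions
```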
|
| 150 |
+
|
| 151 |
+
A noteworthy observation is that methods comparable to Zero-Shot in terms of ESTOI generally have a WER exceeding $60\%$ . This implies that Zero-Shot is capable of acquiring a substantial degree of semantic knowledge from pre-training, but it slightly lags in generating audio quality, failing to reach a level commensurate with its semantic proficiency.
|
| 152 |
+
|
| 153 |
+
# 4.3.3 Translate from Video
|
| 154 |
+
|
| 155 |
+
Building on the concepts discussed earlier, collecting audio and its corresponding lip-synchronized video data presents significant challenges. These challenges further escalate when the task is extended to multiple languages. Our objective is to utilize datasets composed of video-audio pairs in a single language, combined with multilingual audio datasets, to make this approach applicable to multilingual audio generation. This strategy aims to efficiently utilize existing resources while addressing the challenges of multimodal and multilingual datasets.
|
| 156 |
+
|
| 157 |
+
In our study, we compared the performance of existing Full-Shot methods with our Zero-Shot method in English to Spanish (En-Es) and English to French (En-Fr) translation tasks, with detailed results presented in Table 2. We also tested the
|
| 158 |
+
|
| 159 |
+

|
| 160 |
+
Figure 3: The comparison between Uni-Dubbing and Av-Transpeech under various sizes of visual speech data is highlighted. Remarkably, Uni-Dubbing, utilizing a Zero-Shot approach, outperforms Av-Transpeech even when the latter is fine-tuned with 200 hours of visual data.
|
| 161 |
+
|
| 162 |
+
robustness of our model under different modalities and specific noise conditions. Firstly, we found that under given noise conditions, the BLEU scores using both visual and audio modal inputs were consistently higher than those using only audio input. This demonstrates the auxiliary role of visual information in enhancing audio in noisy environments, highlighting the importance of visual data. Especially under babble noise conditions, with a signal-to-noise ratio (SNR) of -5, the BLEU score for pure audio input was even lower than that for pure visual input, further emphasizing the significance of lip-reading translation. We also provide experimental data under various noise types and intensities in the appendix. In pure visual translation, Full-Shot methods typically outperform Zero-Shot methods. However, the Zero-Shot method still performs commendably in terms of BLEU and MOS, achieving BLEU scores of 16.99 and 15.58 (En-Es and En-Fr), and MOS scores of 3.73 and 3.70, respectively.
|
| 163 |
+
|
| 164 |
+
We replicated Av-Transpeech and fine-tuned it using multimodal data of varying durations, with detailed results shown in Figure 3. The figure demonstrates that the BLEU score obtained by fine-tuning with 433 hours of pure audio data is roughly equivalent to that achieved with just 200 hours of audiovisual data. During the pre-training phase, we mapped the audiovisual data to the same phoneme space. This result indicates that the knowledge in this phoneme space is equally applicable to cross-lingual audio, enabling us to align the source-language video with the target-language audio through
|
| 165 |
+
|
| 166 |
+
<table><tr><td rowspan="2">Type</td><td rowspan="2">Method</td><td colspan="2">Training</td><td colspan="2">Eval</td><td colspan="2">En-Es</td><td colspan="2">En-Fr</td></tr><tr><td>A</td><td>V</td><td>A</td><td>V</td><td>BLEU↑</td><td>MOS↑</td><td>BLEU↑</td><td>MOS↑</td></tr><tr><td rowspan="3">Full-Shot</td><td rowspan="3">Av-Transpeech (Huang et al., 2023)</td><td>✓</td><td>✓</td><td></td><td>✓</td><td>25.00</td><td>3.94 ± 0.11</td><td>19.90</td><td>3.95 ± 0.10</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>33.10</td><td>-</td><td>28.00</td><td>-</td></tr><tr><td>✓</td><td>✓</td><td>✓</td><td></td><td>5.50</td><td>-</td><td>4.60</td><td>-</td></tr><tr><td rowspan="6">Zero-Shot</td><td rowspan="3">Uni-Dubbing (Frozen)</td><td>✓</td><td></td><td></td><td>✓</td><td>16.99</td><td>3.73 ± 0.12</td><td>15.58</td><td>3.70 ± 0.08</td></tr><tr><td>✓</td><td></td><td>✓</td><td>✓</td><td>30.00</td><td>-</td><td>25.30</td><td>-</td></tr><tr><td>✓</td><td></td><td>✓</td><td></td><td>7.58</td><td>-</td><td>6.31</td><td>-</td></tr><tr><td rowspan="3">Uni-Dubbing (No Frozen)</td><td>✓</td><td></td><td></td><td>✓</td><td>0</td><td>-</td><td>0</td><td>-</td></tr><tr><td>✓</td><td></td><td>✓</td><td>✓</td><td>0.94</td><td>-</td><td>1.39</td><td>-</td></tr><tr><td>✓</td><td></td><td>✓</td><td></td><td>0.92</td><td>-</td><td>1.07</td><td>-</td></tr></table>
|
| 167 |
+
|
| 168 |
+
Table 2: Comparison of translation results between the Full-Shot method and our method across various modalities and noise environments. It's worth noting that babble noise with an SNR of -5 is added to all instances using the audio modality (including AV and A) during inference. Please refer to the appendix for additional experimental results on different types of noise and their intensities.
|
| 169 |
+
|
| 170 |
+
pure audio fine-tuning, resulting in the current BLEU scores. This finding not only validates the effectiveness of our method but also emphasizes the feasibility of using a large amount of pure audio data as an alternative in scenarios where it is challenging to collect extensive multimodal data.
|
| 171 |
+
|
| 172 |
+
In our study, as illustrated in Table 2, we additionally conducted an experiment to investigate the translation results obtained using our Zero-Shot method without freezing the encoder. This part of the experiment primarily aimed to assess the role of freezing the encoder in preserving pre-trained knowledge. Under this setup, we observed a significant phenomenon: the BLEU scores for model inference on pure video were zero in both En-Es and En-Fr translation tasks. This result implies that the majority of the visual knowledge acquired during the model's pre-training phase has been substantially forgotten in subsequent processes.
|
| 173 |
+
|
| 174 |
+
Furthermore, compared to models whose encoder was kept frozen during fine-tuning, the models with unfrozen encoders also showed lower resistance to noise at inference time. This difference not only reveals the importance of freezing the encoder for maintaining model stability but also reflects how critical it is to preserve knowledge acquired during pre-training when dealing with complex and variable visual inputs. Freezing the encoder effectively retains the visual information learned during the pre-training phase, which is crucial for the model's accuracy and robustness in parsing and understanding visual data. Therefore, our study not only emphasizes the importance of managing the state of
|
| 175 |
+
|
| 176 |
+
the encoder in implementing Zero-Shot learning methods but also provides valuable insights for future model design in the intersection of vision and language domains.
|
| 177 |
+
|
| 178 |
+
# 5 Conclusion
|
| 179 |
+
|
| 180 |
+
This paper introduces Uni-Dubbing, an innovative approach trained on multimodal audio-video datasets, which achieved the best WER, ESTOI, and synchronization metrics on the LRS3 dataset. Additionally, by utilizing implicit visual embeddings and acoustic tokens, we successfully preserved partial speaker information on the cross-speaker LRS3 dataset. We then implemented a Zero-Shot strategy, transitioning from audio to video modalities in cross-modal Lip2Wav tasks, and cross-lingual Lip2Wav translation tasks. This method significantly reduces the dependency on multimodal datasets and demonstrates potential for application in a wider range of tasks.
|
| 181 |
+
|
| 182 |
+
To further validate the practicality of this method, our research utilized only the audio portion of existing multimodal datasets. In future work, we plan to explore the use of larger single-modality audio datasets, aiming to further expand the applicability and enhance the effectiveness of this method. Through such research, we hope to deepen our understanding and utilization of single-modality audio data in multimodal tasks, thereby paving new paths for development in this field.
|
| 183 |
+
|
| 184 |
+
# 6 Ethics Statement
|
| 185 |
+
|
| 186 |
+
In the context of our research, we acknowledge that lip-reading technology holds considerable potential in a multitude of applications, such as facilitating silent commands in noisy environments or enhancing communication for individuals with hearing impairments. The OpenSR system is designed to democratize the development of lip-reading models, particularly for domains where resources are scarce, thereby promoting equality in technology application across different fields and languages.
|
| 187 |
+
|
| 188 |
+
However, we recognize the ethical implications surrounding the use of speech recognition technology, including the potential for unintended information exposure. It is important to note that effective lip-reading by our model demands specific video criteria, such as front-facing, high-resolution imagery with sufficient frame rates to ensure clear visibility of lip movements. Typically, such conditions are met in environments with close-range cameras or during virtual meetings, not in scenarios where video footage is obtained from a distance or without clear visibility of the mouth region, like most surveillance contexts.
|
| 189 |
+
|
| 190 |
+
Therefore, while our model advances the field of speech recognition, it is engineered with inherent limitations that naturally restrict its use in situations that could compromise individual privacy. We maintain a commitment to ethical research practices, prioritizing the beneficial impacts of our work while actively mitigating potential risks of misuse that could infringe on personal privacy or be deemed invasive. Our ongoing research includes a strong focus on developing safeguards and protocols to ensure that the technology is used responsibly and ethically.
|
| 191 |
+
|
| 192 |
+
# 7 Limitations
|
| 193 |
+
|
| 194 |
+
The present study is limited to the use of just two modalities: video and audio, thus neglecting the potential benefits of incorporating further modalities. Furthermore, the approach of applying single-modality Zero-Shot learning, although it minimizes reliance on extensive datasets, inherently results in the inadvertent omission of some portions of the previously acquired knowledge. Consequently, this methodology is not entirely effective in preserving the full spectrum of multimodal alignment knowledge that was initially obtained during the training phase.
|
| 195 |
+
|
| 196 |
+
# References
|
| 197 |
+
|
| 198 |
+
Triantafyllos Afouras, Joon Son Chung, Andrew Senior, Oriol Vinyals, and Andrew Zisserman. 2018a. Deep audio-visual speech recognition. IEEE transactions on pattern analysis and machine intelligence, 44(12):8717-8727.
|
| 199 |
+
Triantafyllos Afouras, Joon Son Chung, and Andrew Zisserman. 2018b. Lrs3-ted: a large-scale dataset for visual speech recognition. arXiv preprint arXiv:1809.00496.
|
| 200 |
+
Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817-1853.
|
| 201 |
+
Galen Andrew and Jianfeng Gao. 2007. Scalable training of L1-regularized log-linear models. In Proceedings of the 24th International Conference on Machine Learning, pages 33-40.
|
| 202 |
+
Yannis M Assael, Brendan Shillingford, Shimon Whiteson, and Nando De Freitas. 2016. Lipnet: End-to-end sentence-level lipreading. arXiv preprint arXiv:1611.01599.
|
| 203 |
+
Xize Cheng, Rongjie Huang, Linjun Li, Tao Jin, Zehan Wang, Aoxiong Yin, Minglei Li, Xinyu Duan, Zhou Zhao, et al. 2023a. Transface: Unit-based audiovisual speech synthesizer for talking head translation. arXiv preprint arXiv:2312.15197.
|
| 204 |
+
Xize Cheng, Tao Jin, Linjun Li, Wang Lin, Xinyu Duan, and Zhou Zhao. 2023b. Opensr: Open-modality speech recognition via maintaining multi-modality alignment. arXiv preprint arXiv:2306.06410.
|
| 205 |
+
Jeongsoo Choi, Joanna Hong, and Yong Man Ro. 2023a. DiffV2S: Diffusion-based video-to-speech synthesis with vision-guided speaker embedding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7812-7821.
|
| 206 |
+
Jeongsoo Choi, Minsu Kim, and Yong Man Ro. 2023b. Intelligible lip-to-speech synthesis with speech units. arXiv preprint arXiv:2305.19603.
|
| 207 |
+
Joon Son Chung and Andrew Zisserman. 2017a. Lip reading in the wild. In Computer Vision-ACCV 2016: 13th Asian Conference on Computer Vision, Taipei, Taiwan, November 20-24, 2016, Revised Selected Papers, Part II 13, pages 87-103. Springer.
|
| 208 |
+
Joon Son Chung and Andrew Zisserman. 2017b. Out of time: automated lip sync in the wild. In Computer Vision-ACCV 2016 Workshops: ACCV 2016 International Workshops, Taipei, Taiwan, November 20-24, 2016, Revised Selected Papers, Part II 13, pages 251-263. Springer.
|
| 209 |
+
Martin Cooke, Jon Barker, Stuart Cunningham, and Xu Shao. 2006. An audio-visual corpus for speech perception and automatic speech recognition. The Journal of the Acoustical Society of America, 120(5):2421-2424.
|
| 210 |
+
|
| 211 |
+
Alexandre Défossez, Jade Copet, Gabriel Synnaeve, and Yossi Adi. 2022. High fidelity neural audio compression. arXiv preprint arXiv:2210.13438.
|
| 212 |
+
Anchit Gupta, Rudrabha Mukhopadhyay, Sindhu Balachandra, Faizan Farooq Khan, Vinay P Namboodiri, and CV Jawahar. 2023. Towards generating ultra-high resolution talking-face videos with lip synchronization. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 5209-5218.
|
| 213 |
+
Joanna Hong, Minsu Kim, Se Jin Park, and Yong Man Ro. 2021. Speech reconstruction with reminiscent sound via visual voice memory. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:3654-3667.
|
| 214 |
+
Joanna Hong, Minsu Kim, and Yong Man Ro. 2022. VisageSynTalk: Unseen speaker video-to-speech synthesis via speech-visage feature selection. In European Conference on Computer Vision, pages 452-468. Springer.
|
| 215 |
+
Wei-Ning Hsu, Tal Remez, Bowen Shi, Jacob Donley, and Yossi Adi. 2023. ReVISE: Self-supervised speech resynthesis with visual input for universal and generalized speech regeneration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18795-18805.
|
| 216 |
+
Wei-Ning Hsu and Bowen Shi. 2022. u-HuBERT: Unified mixed-modal speech pretraining and zero-shot transfer to unlabeled modality. Advances in Neural Information Processing Systems, 35:21157-21170.
|
| 217 |
+
Rongjie Huang, Huadai Liu, Xize Cheng, Yi Ren, Linjun Li, Zhenhui Ye, Jinzheng He, Lichao Zhang, Jinglin Liu, Xiang Yin, et al. 2023. AV - TranSpeech: Audio-visual robust speech-to-speech translation. arXiv preprint arXiv:2305.15403.
|
| 218 |
+
ITU. 2016. ITU-T F.745: Functional requirements for network-based speech-to-speech translation services. International Telecommunication Union.
|
| 219 |
+
Ye Jia, Michelle Tadmor Ramanovich, Tal Remez, and Roi Pomerantz. 2022. Translatotron 2: High-quality direct speech-to-speech translation with voice preservation. In International Conference on Machine Learning, pages 10120-10134. PMLR.
|
| 220 |
+
Ye Jia, Ron J Weiss, Fadi Biadsy, Wolfgang Macherey, Melvin Johnson, Zhifeng Chen, and Yonghui Wu. 2019. Direct speech-to-speech translation with a sequence-to-sequence model. arXiv preprint arXiv:1904.06037.
|
| 221 |
+
Tao Jin, Xize Cheng, Linjun Li, Wang Lin, Ye Wang, and Zhou Zhao. 2023. Rethinking missing modality learning from a decoding perspective. In Proceedings of the 31st ACM International Conference on Multimedia, pages 4431-4439.
|
| 222 |
+
|
| 223 |
+
Minsu Kim, Joanna Hong, and Yong Man Ro. 2021. Lip to speech synthesis with visual context attentional gan. Advances in Neural Information Processing Systems, 34:2758-2770.
|
| 224 |
+
Minsu Kim, Joanna Hong, and Yong Man Ro. 2023. Lip-to-speech synthesis in the wild with multi-task learning. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5. IEEE.
|
| 225 |
+
Alon Lavie, Alex Waibel, Lori Levin, Michael Finke, Donna Gates, Marsal Gavalda, Torsten Zeppenfeld, and Puming Zhan. 1997. JANUS - III: Speech-to-speech translation in multiple languages. In 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 1, pages 99-102. IEEE.
|
| 226 |
+
Ann Lee, Peng-Jen Chen, Changhan Wang, Jiatao Gu, Sravya Popuri, Xutai Ma, Adam Polyak, Yossi Adi, Qing He, Yun Tang, et al. 2021. Direct speech-to-speech translation with discrete units. arXiv preprint arXiv:2107.05604.
|
| 227 |
+
Linjun Li, Tao Jin, Xize Cheng, Ye Wang, Wang Lin, Rongjie Huang, and Zhou Zhao. 2023. Contrastive token-wise meta-learning for unseen performer visual temporal-aligned translation. In Findings of the Association for Computational Linguistics: ACL 2023, pages 10993-11007.
|
| 228 |
+
Pingchuan Ma, Alexandros Haliassos, Adriana Fernandez-Lopez, Honglie Chen, Stavros Petridis, and Maja Pantic. 2023. Auto-avsr: Audio-visual speech recognition with automatic labels. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1-5. IEEE.
|
| 229 |
+
Daniel Michelsanti, Zheng-Hua Tan, Shi-Xiong Zhang, Yong Xu, Meng Yu, Dong Yu, and Jesper Jensen. 2021. An overview of deep-learning-based audio-visual speech enhancement and separation. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:1368-1396.
|
| 230 |
+
Rodrigo Mira, Alexandros Haliassos, Stavros Petridis, Björn W Schuller, and Maja Pantic. 2022a. Svts: scalable video-to-speech synthesis. arXiv preprint arXiv:2205.02058.
|
| 231 |
+
Rodrigo Mira, Konstantinos Vougioukas, Pingchuan Ma, Stavros Petridis, Björn W Schuller, and Maja Pantic. 2022b. End-to-end video-to-speech synthesis using generative adversarial networks. IEEE transactions on cybernetics.
|
| 232 |
+
Satoshi Nakamura, Konstantin Markov, Hiromi Nakaiwa, Gen-ichiro Kikui, Hisashi Kawai, Takatoshi Jitsuhiro, J-S Zhang, Hirofumi Yamamoto, Eiichiro Sumita, and Seiichi Yamamoto. 2006. The atr multilingual speech-to-speech translation system. IEEE Transactions on Audio, Speech, and Language Processing, 14(2):365-376.
|
| 233 |
+
|
| 234 |
+
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.
|
| 235 |
+
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311-318.
|
| 236 |
+
Adam Polyak, Yossi Adi, Jade Copet, Eugene Kharitonov, Kushal Lakhotia, Wei-Ning Hsu, Abdelrahman Mohamed, and Emmanuel Dupoux. 2021. Speech resynthesis from discrete disentangled self-supervised representations. arXiv preprint arXiv:2104.00355.
|
| 237 |
+
KR Prajwal, Rudrabha Mukhopadhyay, Vinay P Namboodiri, and CV Jawahar. 2020. Learning individual speaking styles for accurate lip to speech synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13796-13805.
|
| 238 |
+
Mohammad Sadegh Rasooli and Joel R. Tetreault. 2015. Yara parser: A fast and accurate dependency parser. Computing Research Repository, arXiv:1503.06733. Version 2.
|
| 239 |
+
David Snyder, Guoguo Chen, and Daniel Povey. 2015. Musan: A music, speech, and noise corpus. arXiv preprint arXiv:1510.08484.
|
| 240 |
+
Andros Tjandra, Sakriani Sakti, and Satoshi Nakamura. 2019. Speech-to-speech translation between untranscribed unknown languages. In 2019 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 593-600. IEEE.
|
| 241 |
+
Konstantinos Vougioukas, Pingchuan Ma, Stavros Petridis, and Maja Pantic. 2019. Video-driven speech reconstruction using generative adversarial networks. In Interspeech.
|
| 242 |
+
Wolfgang Wahlster. 2000. Verb Mobil: Foundations of Speech-to-Speech Translation. Springer Science & Business Media.
|
| 243 |
+
Dongchao Yang, Songxiang Liu, Rongjie Huang, Jinchuan Tian, Chao Weng, and Yuexian Zou. 2023. Hifi-codec: Group-residual vector quantization for high fidelity audio codec. arXiv preprint arXiv:2305.02765.
|
| 244 |
+
Yochai Yemini, Aviv Shamsian, Lior Bracha, Sharon Gannot, and Ethan Fetaya. 2023. LipVoicer: Generating speech from silent videos guided by lip reading. arXiv preprint arXiv:2306.03258.
|
| 245 |
+
Chen Zhang, Xu Tan, Yi Ren, Tao Qin, Kejun Zhang, and Tie-Yan Liu. 2021. UWSpeech: Speech to speech translation for unwritten languages. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 14319-14327.
|
| 246 |
+
|
| 247 |
+
# A Additional Quantitative Results
|
| 248 |
+
|
| 249 |
+
Zero-Shot Kernel Size. The results of cross-modal Zero-Shot experiments conducted on the LRS3 dataset are closely related to the kernel size of the last layer of transposed convolution. Table 3 details these results, including ESTOI, WER, and synchronization rate metrics.
|
| 250 |
+
|
| 251 |
+
<table><tr><td>K</td><td>ESTOI↑</td><td>LSE-C↑</td><td>LSE-D↓</td><td>WER↓</td></tr><tr><td>2</td><td>0.228</td><td>6.54</td><td>7.77</td><td>36.87</td></tr><tr><td>4</td><td>0.235</td><td>6.67</td><td>7.64</td><td>36.31</td></tr><tr><td>8</td><td>0.235</td><td>6.70</td><td>7.59</td><td>36.08</td></tr><tr><td>16</td><td>0.234</td><td>6.75</td><td>7.61</td><td>36.12</td></tr><tr><td>32</td><td>0.235</td><td>6.72</td><td>7.60</td><td>36.10</td></tr><tr><td>1</td><td>0.211</td><td>6.22</td><td>8.05</td><td>39.79</td></tr><tr><td>3</td><td>0.214</td><td>6.32</td><td>7.95</td><td>37.66</td></tr><tr><td>5</td><td>0.214</td><td>6.37</td><td>7.91</td><td>37.76</td></tr><tr><td>9</td><td>0.216</td><td>6.39</td><td>7.91</td><td>37.53</td></tr><tr><td>17</td><td>0.214</td><td>6.40</td><td>7.90</td><td>37.99</td></tr><tr><td>33</td><td>0.214</td><td>6.39</td><td>7.90</td><td>37.98</td></tr><tr><td>FC</td><td>0.209</td><td>6.20</td><td>8.05</td><td>41.08</td></tr></table>
|
| 252 |
+
|
| 253 |
+
Zero-Shot Translate Data Size. For the Zero-Shot translation task, we present in Table 4 the performance of AV-Transpeech after fine-tuning with varying amounts of data. We compare the results of inference using both audiovisual data and video-only data. We found that for both AVST (audio-visual speech translation) and VST (visual speech translation) tasks, the effectiveness of our method is similar to that achieved by fine-tuning with a 200-hour multimodal audiovisual dataset.
|
| 254 |
+
|
| 255 |
+
Table 3: The impact of varying kernel sizes on different metrics in audio generation. K represents the size of the kernel in the final layer of transposed convolution. FC (Fully Connected) represents a configuration where, instead of using a transposed convolution layer, a fully connected layer is employed as the final layer.
|
| 256 |
+
|
| 257 |
+
<table><tr><td rowspan="2">Method</td><td rowspan="2">Utts(hrs)</td><td colspan="2">En-Es</td><td colspan="2">En-Fr</td></tr><tr><td>AV</td><td>V</td><td>AV</td><td>V</td></tr><tr><td rowspan="5">AV-Transpeech</td><td>433</td><td>45.2</td><td>25</td><td>33.6</td><td>19.9</td></tr><tr><td>200</td><td>35.98</td><td>15.25</td><td>29.83</td><td>14.45</td></tr><tr><td>100</td><td>31.59</td><td>12.36</td><td>27.64</td><td>11.21</td></tr><tr><td>50</td><td>28.2</td><td>11.22</td><td>24.21</td><td>10.41</td></tr><tr><td>30</td><td>24.92</td><td>9.92</td><td>15.96</td><td>8.57</td></tr><tr><td>Our(Zero-Shot)</td><td>433</td><td>36.53</td><td>16.99</td><td>28.94</td><td>15.58</td></tr></table>
|
| 258 |
+
|
| 259 |
+
Table 4: Translation results of AV-Transpeech in different modalities after fine-tuning with various data volumes.
|
| 260 |
+
|
| 261 |
+
Zero-Shot Translate Noise Robustness. In the main text, we only present the performance of the model under partial noise conditions. Table 5 and Table 6 respectively show the results of the Zero-Shot model in the frozen and unfrozen settings across various noise conditions.
|
| 262 |
+
|
| 263 |
+
<table><tr><td rowspan="2">Modality</td><td rowspan="2">Noise</td><td rowspan="2">Language</td><td colspan="8">SNR</td><td rowspan="2">Average</td></tr><tr><td>-20</td><td>-10</td><td>-5</td><td>0</td><td>5</td><td>10</td><td>20</td><td></td></tr><tr><td rowspan="8">AV</td><td rowspan="2">Babble</td><td>En-Es</td><td>13.45</td><td>23.61</td><td>30.00</td><td>34.15</td><td>35.40</td><td>35.55</td><td>36.07</td><td>29.75</td><td></td></tr><tr><td>En-Fr</td><td>12.06</td><td>19.24</td><td>25.30</td><td>27.59</td><td>28.46</td><td>28.61</td><td>28.83</td><td>24.30</td><td></td></tr><tr><td rowspan="2">Music</td><td>En-Es</td><td>23.93</td><td>31.73</td><td>34.54</td><td>35.09</td><td>35.81</td><td>35.56</td><td>36.18</td><td>33.26</td><td></td></tr><tr><td>En-Fr</td><td>19.25</td><td>26.27</td><td>27.8</td><td>28.48</td><td>28.60</td><td>28.81</td><td>28.75</td><td>26.85</td><td></td></tr><tr><td rowspan="2">Speech</td><td>En-Es</td><td>24.63</td><td>32.38</td><td>34.25</td><td>35.41</td><td>35.57</td><td>36.16</td><td>36.41</td><td>33.54</td><td></td></tr><tr><td>En-Fr</td><td>19.83</td><td>26.21</td><td>27.69</td><td>28.72</td><td>28.92</td><td>28.55</td><td>29.30</td><td>27.03</td><td></td></tr><tr><td rowspan="2">Average</td><td>En-Es</td><td>20.67</td><td>29.24</td><td>32.93</td><td>34.88</td><td>35.59</td><td>35.76</td><td>36.22</td><td>32.18</td><td></td></tr><tr><td>En-Fr</td><td>17.05</td><td>23.91</td><td>26.93</td><td>28.26</td><td>28.66</td><td>28.66</td><td>28.96</td><td>26.06</td><td></td></tr><tr><td rowspan="8">A</td><td rowspan="2">Babble</td><td>En-Es</td><td>0.01</td><td>0.12</td><td>7.58</td><td>26.64</td><td>33.82</td><td>35.23</td><td>35.71</td><td>19.87</td><td></td></tr><tr><td>En-Fr</td><td>0.05</td><td>0.17</td><td>6.31</td><td>21.54</td><td>27.18</td><td>28.55</td><td>29.41</td><td>16.17</td><td></td></tr><tr><td rowspan="2">Music</td><td>En-Es</td><td>3.03</td><td>16.76</td><td>28.25</td><td>33.42</td><td>34.97</td><td>35.78</td><td>36.60</td><td>26.97</td><td></td></tr><tr><td>En-Fr</td><td>3.47</td><td>15.01</td><td>22.47</td><td>27.11</td><td>28.18</td><td>28.97</td><td>29.11</td><td>22.05</td><td></td></tr><tr><td rowspan="2">Speech</td><td>En-Es</td><td>4.11</td><td>17.97</td><td>27.88</td><td>33.89</td><td>34.79</td><td>35.53</td><td>36.09</td><td>27.18</td><td></td></tr><tr><td>En-Fr</td><td>3.84</td><td>15.71</td><td>21.92</td><td>27.12</td><td>28.61</td><td>29.14</td><td>29.16</td><td>22.21</td><td></td></tr><tr><td rowspan="2">Average</td><td>En-Es</td><td>2.38</td><td>11.62</td><td>21.24</td><td>31.32</td><td>24.53</td><td>35.51</td><td>36.13</td><td>24.68</td><td></td></tr><tr><td>En-Fr</td><td>2.45</td><td>10.30</td><td>16.90</td><td>25.26</td><td>27.99</td><td>28.89</td><td>29.23</td><td>20.14</td><td></td></tr><tr><td rowspan="2">V</td><td>-</td><td>En-Es</td><td>16.99</td><td>16.99</td><td>16.99</td><td>16.99</td><td>16.99</td><td>16.99</td><td>16.99</td><td>16.99</td><td></td></tr><tr><td>-</td><td>En-Fr</td><td>15.58</td><td>15.58</td><td>15.58</td><td>15.58</td><td>15.58</td><td>15.58</td><td>15.58</td><td>15.58</td><td></td></tr></table>
|
| 264 |
+
|
| 265 |
+
Table 5: Comparison of translation accuracy (BLEU score $\uparrow$ ) of our Zero-Shot model across different noise configurations and input modalities. When the noise intensity is high, the BLEU scores for pure-audio inference fall below those for video-only inference in multiple scenarios.
|
| 266 |
+
|
| 267 |
+
<table><tr><td rowspan="2">Modality</td><td rowspan="2">Noise</td><td rowspan="2">Language</td><td colspan="8">SNR</td><td rowspan="2">Average</td></tr><tr><td>-20</td><td>-10</td><td>-5</td><td>0</td><td>5</td><td>10</td><td>20</td><td></td></tr><tr><td rowspan="8">AV</td><td rowspan="2">Babble</td><td>En-Es</td><td>0.01</td><td>0.04</td><td>0.94</td><td>11.47</td><td>29.20</td><td>36.74</td><td>40.08</td><td>16.93</td><td></td></tr><tr><td>En-Fr</td><td>0.11</td><td>0.14</td><td>1.39</td><td>10.26</td><td>24.33</td><td>30.93</td><td>33.94</td><td>14.44</td><td></td></tr><tr><td rowspan="2">Music</td><td>En-Es</td><td>0.53</td><td>5.33</td><td>15.21</td><td>26.91</td><td>35.13</td><td>38.73</td><td>40.33</td><td>23.17</td><td></td></tr><tr><td>En-Fr</td><td>0.40</td><td>5.31</td><td>12.91</td><td>22.63</td><td>30.19</td><td>32.67</td><td>33.70</td><td>19.69</td><td></td></tr><tr><td rowspan="2">Speech</td><td>En-Es</td><td>0.65</td><td>7.63</td><td>16.73</td><td>28.21</td><td>34.87</td><td>38.52</td><td>40.02</td><td>23.80</td><td></td></tr><tr><td>En-Fr</td><td>0.55</td><td>7.21</td><td>13.91</td><td>24.01</td><td>29.61</td><td>32.48</td><td>33.68</td><td>20.21</td><td></td></tr><tr><td rowspan="2">Average</td><td>En-Es</td><td>0.40</td><td>4.33</td><td>10.96</td><td>22.20</td><td>33.07</td><td>38.00</td><td>40.14</td><td>21.30</td><td></td></tr><tr><td>En-Fr</td><td>0.35</td><td>4.22</td><td>9.40</td><td>18.97</td><td>28.04</td><td>32.03</td><td>33.77</td><td>18.11</td><td></td></tr><tr><td rowspan="8">A</td><td rowspan="2">Babble</td><td>En-Es</td><td>0.01</td><td>0.01</td><td>0.92</td><td>10.60</td><td>28.76</td><td>36.96</td><td>40.01</td><td>16.75</td><td></td></tr><tr><td>En-Fr</td><td>0.09</td><td>0.08</td><td>1.07</td><td>9.60</td><td>24.75</td><td>30.62</td><td>34.04</td><td>14.32</td><td></td></tr><tr><td rowspan="2">Music</td><td>En-Es</td><td>0.48</td><td>6.92</td><td>15.61</td><td>26.06</td><td>34.37</td><td>38.40</td><td>40.04</td><td>23.13</td><td></td></tr><tr><td>En-Fr</td><td>0.46</td><td>4.71</td><td>12.35</td><td>23.18</td><td>29.38</td><td>32.54</td><td>34.20</td><td>19.55</td><td></td></tr><tr><td rowspan="2">Speech</td><td>En-Es</td><td>1.06</td><td>7.33</td><td>16.93</td><td>27.47</td><td>35.45</td><td>38.25</td><td>40.14</td><td>23.80</td><td></td></tr><tr><td>En-Fr</td><td>0.66</td><td>6.53</td><td>14.50</td><td>23.46</td><td>29.82</td><td>32.30</td><td>33.83</td><td>20.16</td><td></td></tr><tr><td rowspan="2">Average</td><td>En-Es</td><td>0.52</td><td>4.75</td><td>11.15</td><td>21.38</td><td>32.86</td><td>37.87</td><td>40.06</td><td>21.23</td><td></td></tr><tr><td>En-Fr</td><td>0.40</td><td>3.77</td><td>9.31</td><td>18.75</td><td>27.98</td><td>31.82</td><td>34.02</td><td>18.01</td><td></td></tr></table>
|
| 268 |
+
|
| 269 |
+
Table 6: Comparison of translation accuracy (BLEU score ↑) of our unfrozen Zero-Shot model across different noise configurations and input modalities.
|
| 270 |
+
|
| 271 |
+
# B Additional Qualitative Results
|
| 272 |
+
|
| 273 |
+
LRS3 Dataset in Lip2Wav Implementation. In Figure 4, we display visualizations of four samples each from the ground truth, our Full-Shot and Zero-Shot methods, and ReVISE, to compare their respective mel-spectrogram outputs. These methods generate mel-spectrograms whose backbone structures maintain a certain degree of similarity, resulting in low WER and minimal differences in retained semantic information for the synthesized speech. However, in comparison, our Full-Shot method produces mel-spectrograms that more closely resemble real data (Ground Truth) in detail, displaying finer frequency variations and a more continuous temporal sequence structure. This indicates that the Full-Shot approach achieves higher accuracy in audio reconstruction, capturing more of the acoustic features of real speech signals beyond just semantic information. Additionally, our Zero-Shot method shows greater similarity to ReVISE, demonstrating that even when fine-tuned using only audio data, it can retain a considerable level of semantic information. This validates the effectiveness of our method in modal transfer.
|
| 274 |
+
|
| 300 |
+
Figure 4: Sample mel-spectrogram visualizations from various methods on the LRS3 dataset.
|
| 301 |
+
|
| 307 |
+
|
| 308 |
+
In Table 7, we present the results of several audio samples processed through Lip2Wav and subsequently analyzed using ASR. The errors generated by these methods are largely similar, likely stemming from the inherent confusability of the Lip2Wav approach itself. This is because the majority of errors originate from phonetically similar words or phrases, which are exceedingly difficult to overcome in subsequent processing.
|
| 309 |
+
|
| 310 |
+
Table 7: This qualitative comparison addresses visually confusing words. Red words indicate misidentified terms, strikethroughs in parentheses denote visually similar words, and red words within parentheses mark words that are absent.
|
| 311 |
+
|
| 312 |
+
<table><tr><td>Ground Truth:</td><td>we were making what was invisible visible</td></tr><tr><td>Our (Full-Shot):</td><td>we were making what was invisible invisible (visible)</td></tr><tr><td>ReVISE:</td><td>we were many (making) what was invisible invisible (visible)</td></tr><tr><td>Our (Zero-Shot):</td><td>we were many (making) what was invisible visible</td></tr><tr><td>Ground Truth:</td><td>would you like to create a second one together</td></tr><tr><td>Our (Full-Shot):</td><td>would you like to create a successful (second) one together</td></tr><tr><td>ReVISE:</td><td>would you like to create (a) success when you guess (second-one together)</td></tr><tr><td>Our (Zero-Shot):</td><td>would you like to be in a cecil when (create a second one) together</td></tr><tr><td>Ground Truth:</td><td>african americans supported it at a higher level than had ever been recorded</td></tr><tr><td>Our (Full-Shot):</td><td>african americans supported it at a higher level than had ever been recorded</td></tr><tr><td>ReVISE:</td><td>african americans supported it at a higher level than it (had) ever been recorded</td></tr><tr><td>Our (Zero-Shot):</td><td>african americans supported it at a higher level than it (had) ever be (been) recorded</td></tr><tr><td>Ground Truth:</td><td>dan replies so often you won't even notice it</td></tr><tr><td>Our (Full-Shot):</td><td>ten (dan) replies so often you won't even notice it</td></tr><tr><td>ReVISE:</td><td>the data (dan) replies so often you won't even notice it</td></tr><tr><td>Our (Zero-Shot):</td><td>ten (dan) replies so often you won't even notice it</td></tr></table>
|
| 337 |
+
|
| 338 |
+
LRS3-T Dataset in Cross-Lingual Lip2Wav Translation. In Figure 5, we display the actual spectrograms for En-Es and En-Fr samples, along with the corresponding spectrograms generated by Av-Transpeech and our Zero-Shot method. The mel-spectrograms generated by Av-Transpeech show a high degree of similarity to those produced by our method, but both exhibit certain differences from the GT. This is primarily because both methods use discretized units generated in the same way as training targets, hence the information they carry is quite similar, primarily focusing on semantic information. On the LRS3-T dataset, the similarity of the mel-spectrograms generated by these two methods further confirms the Zero-Shot capabilities of our approach.
|
| 339 |
+
|
| 357 |
+
Figure 5: Sample mel-spectrogram visualizations from various methods on the LRS3-T dataset.
|
| 358 |
+
|
| 364 |
+
|
| 365 |
+
Table 8 shows GT, Av-Transpeech, and our En-Es and En-Fr translation results. Our translations contain more erroneous words compared to Av-Transpeech, as reflected in the lower BLEU scores reported in the text. However, the locations of errors are similar for both methods, suggesting that pure audio fine-tuning might achieve semantics similar to Full-Shot for the main body of sentences, but there could be confusion in some details. Further research and exploration in this area are needed.
|
| 366 |
+
|
| 367 |
+
Table 8: This qualitative comparison addresses visually confusing words. Red words indicate misidentified terms, strikethroughs in parentheses denote visually similar words, and red words within parentheses mark words that are absent. The top two samples are En-Es translations, and the bottom two are En-Fr translations.
|
| 368 |
+
|
| 369 |
+
<table><tr><td>Ground Truth:</td><td>te gustaría create un segundo jintos</td></tr><tr><td>Av-Transpeech:</td><td>te gustaría create una sensación (un-segundo) jintos</td></tr><tr><td>Uni-Dubbing:</td><td>te gustaría create un sentidoconjunto (jentes)</td></tr><tr><td>Ground Truth:</td><td>podemos create un parlamento mundial de alcaldes</td></tr><tr><td>Av-Transpeech:</td><td>podemos create un parlamento global (mudial) de pares</td></tr><tr><td>Uni-Dubbing:</td><td>necesitamos (podemos) create un parlamento global (mudial) de c (alcaldes)</td></tr><tr><td>Ground Truth:</td><td>Je te pardonne et je ne te fais pas</td></tr><tr><td>Av-Transpeech:</td><td>je te pardonne et je ne te déteste (pas)</td></tr><tr><td>Uni-Dubbing:</td><td>je te donne (pardonne-et) je (ne) te déteste (déteste-pass)</td></tr><tr><td>Ground Truth:</td><td>donc la réponse à la deuxième question peut-on changer</td></tr><tr><td>Av-Transpeech:</td><td>donc la réponse à la第二种e question pouvants-nous change (peut-on changer)</td></tr><tr><td>Uni-Dubbing:</td><td>donc la réponse à la第二种e question pouvont-s-nois (peut-on) changer</td></tr></table>
|
| 386 |
+
|
| 387 |
+
# C Zero-Shot Configuration
|
| 388 |
+
|
| 389 |
+
On the LRS3 dataset, our applied Zero-Shot configuration is consistent with that of uHubert (Hsu and Shi, 2022). One concern arises: the model might memorize audio-visual pairs from the pre-training period and associate them with unimodal data for Zero-Shot learning, as the dataset used for fine-tuning is a subset of the pre-training data. To address this issue, uHubert conducted experiments on non-LRS3 audio datasets, demonstrating the effectiveness of this configuration. Therefore, we did not seek another out-of-domain audio dataset for experimentation in this task. We directly conducted Zero-Shot experiments on LRS3-T, whose audio data is not only excluded from the pre-training but also differs in language type. Furthermore, ablation experiments regarding whether to freeze the encoder layers also validated the Zero-Shot capability of our method.
|
| 390 |
+
|
| 391 |
+
# D More Implementation Details
|
| 392 |
+
|
| 393 |
+
Experiment hyperparameters. Table 9 displays the training hyperparameter configurations for each task in our study, noting that audio masking was not employed in any of the tasks.
|
| 394 |
+
|
| 395 |
+
<table><tr><td></td><td>Full-Shot</td><td>Zero-Shot Modal</td><td>Zero-Shot Translate</td></tr><tr><td>num. of updates</td><td>45000</td><td>20000</td><td>60000</td></tr><tr><td>num. of frozen</td><td>5000</td><td>20000</td><td>60000</td></tr><tr><td>tri-stage LR schedule</td><td>(10%,20%,70%)</td><td>(10%,20%,70%)</td><td>(33%,0%,67%)</td></tr><tr><td>peak learning rate</td><td>6e-05</td><td>6e-05</td><td>5e-04</td></tr><tr><td>batchsize /GPU</td><td>1000</td><td>1000</td><td>1000</td></tr><tr><td>num. of GPU</td><td>8</td><td>8</td><td>8</td></tr><tr><td>Adam (β1,β2)</td><td>(0.9,0.98)</td><td>(0.9,0.98)</td><td>(0.9,0.98)</td></tr></table>
|
| 396 |
+
|
| 397 |
+
Table 9: Experiment hyperparameters.
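The tri-stage schedule in Table 9 ramps the learning rate up over the first phase, holds it at the peak, and then decays it; the sketch below follows the spirit of fairseq's tri-stage scheduler, with the initial and final scaling factors being assumptions rather than values from the paper.

```python
# Tri-stage learning-rate schedule: warm-up / hold / decay, given as fractions of total updates.
import math

def tri_stage_lr(step, total_steps, peak_lr, phases=(0.10, 0.20, 0.70),
                 init_scale=0.01, final_scale=0.05):
    warm = int(phases[0] * total_steps)
    hold = int(phases[1] * total_steps)
    if step < warm:                                    # linear warm-up from init_scale * peak
        return peak_lr * (init_scale + (1 - init_scale) * step / max(1, warm))
    if step < warm + hold:                             # hold at the peak learning rate
        return peak_lr
    decay = max(1, total_steps - warm - hold)          # exponential decay to final_scale * peak
    t = min(1.0, (step - warm - hold) / decay)
    return peak_lr * math.exp(math.log(final_scale) * t)

# e.g. the Full-Shot setting: 45k updates, peak LR 6e-5, phases (10%, 20%, 70%)
schedule = [tri_stage_lr(s, 45000, 6e-5) for s in (0, 4500, 13500, 45000)]
```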
|
| 398 |
+
|
| 399 |
+
ASR toolkit for Evaluation. In this paper, the English ASR used is cited from (Ma et al., 2023). For Spanish and French, we utilize open-sourced ASR models within the fairseq framework (Ott et al., 2019) to transcribe the audios, which is consistent with the ASR used by Av-Transpeech.
|
2024/Uni-Dubbing_ Zero-Shot Speech Synthesis from Visual Articulation/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:0bd181dcb4caf1db8539b0d78dee2dc33fda356034dc32e325dd1679c6dc1d4f
|
| 3 |
+
size 1433102
|
2024/Uni-Dubbing_ Zero-Shot Speech Synthesis from Visual Articulation/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/UniBridge_ A Unified Approach to Cross-Lingual Transfer Learning for Low-Resource Languages/31fceca4-760f-4c7d-9a2a-9207221cfedf_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/UniBridge_ A Unified Approach to Cross-Lingual Transfer Learning for Low-Resource Languages/31fceca4-760f-4c7d-9a2a-9207221cfedf_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/UniBridge_ A Unified Approach to Cross-Lingual Transfer Learning for Low-Resource Languages/31fceca4-760f-4c7d-9a2a-9207221cfedf_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:8585d5c2155b93fd3822af7a0e7f0fa949b15d014924c2495ef38ff317711ff2
|
| 3 |
+
size 555994
|
2024/UniBridge_ A Unified Approach to Cross-Lingual Transfer Learning for Low-Resource Languages/full.md
ADDED
|
@@ -0,0 +1,473 @@
|
| 1 |
+
# UniBridge: A Unified Approach to Cross-Lingual Transfer Learning for Low-Resource Languages
|
| 2 |
+
|
| 3 |
+
Trinh Pham $^{1*}$ , Khoi M. Le $^{2*}$ , Luu Anh Tuan $^{3\dagger}$
|
| 4 |
+
|
| 5 |
+
$^{1}$Ho Chi Minh City University of Technology (HCMUT), VNU-HCM, Ho Chi Minh City, Vietnam
|
| 6 |
+
|
| 7 |
+
$^{2}$ VinAI Research, Vietnam
|
| 8 |
+
|
| 9 |
+
$^{3}$ Nanyang Technological University, Singapore
|
| 10 |
+
|
| 11 |
+
phkhanhtrinh23@gmail.com, v.khoilm1@vinai.io, anhtuan.luu@ntu.edu.sg
|
| 12 |
+
|
| 13 |
+
# Abstract
|
| 14 |
+
|
| 15 |
+
In this paper, we introduce UniBridge (Cross-Lingual Transfer Learning with Optimized Embeddings and Vocabulary), a comprehensive approach developed to improve the effectiveness of Cross-Lingual Transfer Learning, particularly in languages with limited resources. Our approach tackles two essential elements of a language model: the initialization of embeddings and the optimal vocabulary size. Specifically, we propose a novel embedding initialization method that leverages both lexical and semantic alignment for a language. In addition, we present a method for systematically searching for the optimal vocabulary size, ensuring a balance between model complexity and linguistic coverage. Our experiments across multilingual datasets show that our approach greatly improves the F1-Score in several languages. UniBridge is a robust and adaptable solution for cross-lingual systems in various languages, highlighting the significance of initializing embeddings and choosing the right vocabulary size in cross-lingual environments.
|
| 16 |
+
|
| 17 |
+
# 1 Introduction
|
| 18 |
+
|
| 19 |
+
Recently, multilingual pre-trained language models (LMs) have significantly advanced natural language processing (NLP) tasks, narrowing the performance gap between English and many other languages. Multilingual pre-trained models such as XLM-R (Conneau et al., 2020) and mBERT (Devlin et al., 2019) are currently strong models for effective cross-lingual transfer (Hu et al., 2020; Artetxe et al., 2020; Le et al., 2024). However, these models are limited in that they are pre-trained on only around 100 languages, leaving a substantial void for the vast array of the world's nearly 7,000 languages (van Esch et al., 2022). The resulting disparity disproportionately affects low-resource languages that are
|
| 20 |
+
|
| 21 |
+
not covered in their pre-trained corpora (Wu and Dredze, 2020; Pfeiffer et al., 2020), impeding their performance compared to their high-resource counterparts.
|
| 22 |
+
|
| 23 |
+
[Figure 1 table: an example sentence in Pashto and in Amharic, shown as the original text, the mBERT tokenization (largely [UNK] tokens), and the UniBridge tokenization.]
|
| 24 |
+
|
| 25 |
+
Figure 1: Some languages/scripts are not covered in the pre-trained corpora. Hence, the pre-trained tokenizer will eventually produce many unknown tokens which corrupts the sentence's meaning and results in poor performance.
|
| 26 |
+
|
| 27 |
+
Recent efforts propose the use of adapters to mitigate the knowledge gap in low-resource languages prior to transferring knowledge for specific tasks (Pfeiffer et al., 2020; Üstün et al., 2020; Ansell et al., 2021). These methods adapt the pre-trained LMs to a new language by utilizing monolingual data, enabling the model to acquire a robust representation of the target language before receiving knowledge from the source language. Despite enhanced performance in languages not included in the pre-trained corpora, these approaches still exhibit poor performance in languages with unseen scripts (i.e., the scripts that are not presented in the pre-training corpora; see Figure 1). To address the issue of unseen scripts, existing studies (Artetxe et al., 2020; Pfeiffer et al., 2021) propose acquiring a new vocabulary embedding for newly discovered languages. However, these methods heavily rely on manually configuring the vocabulary size and initializing the embedding matrix.
|
| 28 |
+
|
| 29 |
+
Furthermore, while recent Cross-Lingual Transfer Learning studies focus on English due to its abundant pre-training data and strong task performance, our experiments reveal that high performance on English tasks does not necessarily guarantee
|
| 30 |
+
|
| 31 |
+
successful transfer to other languages, particularly low-resource languages. Therefore, we propose an automated method that uses the LMs themselves to identify the most suitable set of source languages for knowledge aggregation, leading to notable performance improvements over single-source language transfer.
|
| 32 |
+
|
| 33 |
+
We first empirically test the effectiveness of randomly initialized embeddings and a fixed vocabulary size. We then introduce an efficient technique for determining the optimal vocabulary size for new languages, utilizing the syntactic and semantic insights from the pre-trained LMs. In addition, we present an innovative method for transferring knowledge from multiple sources, which allows the model to choose the best combination of source languages to improve overall performance. Our results contribute to the ongoing discussion about managing linguistic diversity in NLP, particularly for languages with limited resources, emphasizing the importance of a detailed and inclusive strategy in creating multilingual pre-trained LMs.
|
| 34 |
+
|
| 35 |
+
We evaluate our approach on sequence tagging tasks (e.g., NER, POS) and classification (e.g., NLI) against two strong baselines, mBERT and XLM-R, and observe a significant increase in F1 and accuracy scores${}^{1}$. In summary, our contributions are:
|
| 36 |
+
|
| 37 |
+
- We propose a novel approach to automatically search for a suitable vocabulary size when adapting to a new language.
|
| 38 |
+
- We propose a new strategy to initialize the embedding that leverages the syntactic and semantic knowledge encoded in the pre-trained LMs to address the missing tokens when adapting to low-resource languages.
|
| 39 |
+
- We propose a method to aggregate multi-source transfer learning to enhance performance on cross-lingual transfer tasks. We show that multi-source transfer can outperform multi-language learning.
|
| 40 |
+
|
| 41 |
+
# 2 Methodology
|
| 42 |
+
|
| 43 |
+
Our proposed framework includes five stages as illustrated in Figure 2. In the following section we will detail each stage of the framework: 1) Vocabulary size searching, 2) Language-specific embedding initialization, 3) Model adaptation to new
|
| 44 |
+
|
| 45 |
+
languages not covered in the pre-training data, 4) Downstream task training, 5) Multi-source transfer downstream task inference.
|
| 46 |
+
|
| 47 |
+
# 2.1 Vocabulary size searching
|
| 48 |
+
|
| 49 |
+
Whether training from scratch or starting with a pre-trained language model, every NLP practitioner faces the task of determining an appropriate vocabulary size. Typically, choosing a suitable vocabulary size requires exhaustive search (i.e., the whole training and testing process must be run to determine the best vocabulary size). For UniBridge, the vocabulary size is determined using only a CPU and little time, as it does not require any language-model training. This is achieved by leveraging the average log probability (ALP, Zheng et al. (2021)). The vocabulary size search is illustrated by Algorithm 1.
|
| 50 |
+
|
| 51 |
+
Algorithm 1 Vocabulary size searching algorithm.
|
| 52 |
+
```txt
|
| 53 |
+
Require: $\mathcal{D}$ : monolingual data, a list of sentences; $v_{i}$ : initial vocabulary size; $v_{m}$ : maximum vocabulary size that the system should not exceed; $\delta_v$ : step by which the vocabulary size is increased; $\epsilon_s$ : threshold for stopping the algorithm.
|
| 54 |
+
```
|
| 55 |
+
|
| 56 |
+
1: $v \gets v_{i}$
|
| 57 |
+
2: $t\gets$ build tokenizer with vocab size $v$ on $\mathcal{D}$
|
| 58 |
+
3: $s_{prev} \gets ALP(\mathcal{D}, t)$
|
| 59 |
+
4: $\Delta_{s} = \infty$
|
| 60 |
+
5: while $\Delta_s > \epsilon_s$ do
|
| 61 |
+
6: $v\gets v + \delta_v$
|
| 62 |
+
7: if $v > v_{m}$ then
|
| 63 |
+
8: $v\gets v_{m}$
|
| 64 |
+
9: $t\gets$ build tokenizer with vocab size $v$ on $\mathcal{D}$
|
| 65 |
+
10: Break the loop
|
| 66 |
+
11: else
|
| 67 |
+
12: $t\gets$ build tokenizer with $v$ on $\mathcal{D}$
|
| 68 |
+
13: $s_{\text{curr}} \gets ALP(\mathcal{D}, t)$
|
| 69 |
+
14: $\Delta_{s} = s_{curr} - s_{prev}$
|
| 70 |
+
15: $s_{prev} \gets s_{curr}$
|
| 71 |
+
16: end if
|
| 72 |
+
17: end while
|
| 73 |
+
18: return Tokenizer $t$ with vocab size $v$
|
| 74 |
+
|
| 75 |
+
The concept of Average Log Probability (ALP) was introduced by Zheng et al. (2021), who argue that ALP correlates with performance on downstream tasks.
|
| 76 |
+
|
| 77 |
+

|
| 78 |
+
|
| 79 |
+

|
| 80 |
+
Figure 2: Illustration of UniBridge: UniBridge represents an end-to-end framework for Cross-Lingual Transfer Learning. The framework encompasses various stages, including determining the appropriate vocabulary size, initializing language-specific embedding, adapting the model to new languages, and transferring task knowledge from multiple source languages. This approach aims to harness the power of a multilingual embedding space rather than relying on a single-source transfer language, such as English.
|
| 81 |
+
|
| 82 |
+
$$
|
| 83 |
+
ALP(\mathcal{D}, t) = \frac{1}{|t(\mathcal{D})|} \sum_{j=1}^{|t(\mathcal{D})|} \sum_{k=1}^{|s_j|} \log p_{uni}\left(s_j^{k}\right) \tag{1}
|
| 84 |
+
$$
|
| 85 |
+
|
| 86 |
+
For more details, readers are advised to refer to the work of Zheng et al. (2021). It is worth noting that although ALP correlates strongly with downstream task performance, that work did not provide a way to find an optimal vocabulary size. Therefore, in this work, we propose using the degree of change in the ALP score, i.e., $\Delta_s$. Initially, a starting vocabulary size is chosen and the ALP is calculated. Through a series of increases of the vocab size by $\delta_v$, we compute the difference between the current ALP and the previous one. The algorithm stops when this difference falls below a specific threshold $\epsilon_s$, which indicates that the optimal vocabulary size has been reached: continuing to increase the size would yield similar or worse performance, so we stop to keep training efficient. Additionally, our method differs from a traditional grid search in that it uses the degree of change of the ALP score as a stopping signal, rather than exhaustively evaluating every candidate size directly as grid search would.
|
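To make the procedure concrete, the sketch below implements the search loop of Algorithm 1 in Python under stated assumptions: `build_tokenizer` and the `log_prob` method are hypothetical stand-ins for a SentencePiece-style unigram tokenizer, and the default values of $v_i$, $v_m$, $\delta_v$, and $\epsilon_s$ are placeholders rather than the values used in the paper.

```python
def alp(corpus, tokenizer):
    """Average log probability (Eq. 1): the summed unigram log-probabilities
    of the sub-tokens of every sentence, averaged over the tokenized corpus."""
    total = 0.0
    for sentence in corpus:
        pieces = tokenizer.encode(sentence)                   # sub-token ids
        total += sum(tokenizer.log_prob(p) for p in pieces)   # hypothetical helper
    return total / len(corpus)


def search_vocab_size(corpus, build_tokenizer,
                      v_init=5_000, v_max=50_000, delta_v=2_500, eps=10.0):
    """Algorithm 1: grow the vocabulary by delta_v until the ALP improvement
    drops to eps or below, or the maximum size v_max is reached."""
    v = v_init
    tok = build_tokenizer(corpus, vocab_size=v)
    s_prev = alp(corpus, tok)
    while True:
        v += delta_v
        if v > v_max:                                  # cap the vocabulary size
            v = v_max
            tok = build_tokenizer(corpus, vocab_size=v)
            break
        tok = build_tokenizer(corpus, vocab_size=v)
        s_curr = alp(corpus, tok)
        if s_curr - s_prev <= eps:                     # ALP improvement has plateaued
            break
        s_prev = s_curr
    return tok, v
```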
| 87 |
+
|
| 88 |
+
# 2.2 Language-specific embedding initialization
|
| 89 |
+
|
| 90 |
+
When training for a new language, using a randomly initialized embedding can lead to prolonged training times before reaching good performance, especially in low-resource settings with a dataset of around 10K samples. In such cases, strategically initializing the embedding proves more effective than a random approach. While FOCUS (Dobler and de Melo, 2023) demonstrates the use of a pre-trained LM's embedding for initialization, it depends heavily on a simple lexical-overlap alignment in subsequent stages, which decreases downstream task performance. To address this gap, our approach initializes the new embedding by leveraging the pre-trained LMs in both syntactic and semantic aspects. In the initial stage, we obtain the target tokenizer $t_T$ for the new language, with $t_S$ being the source tokenizer of the pre-trained LMs. Representing the vocabulary sets as $V^T$ and $V^S$ for the target and source tokenizers respectively, and the embedding matrices as $E^T[\cdot]$ and $E^S[\cdot]$, we copy the source embedding to the target embedding for the overlapping tokens $O^L = V^T \cap V^S$. This method ensures a seamless
|
| 91 |
+
|
| 92 |
+
integration of knowledge from the pre-trained LMs, addressing both syntactic and semantic aspects of the new language's embedding initialization.
|
| 93 |
+
|
| 94 |
+
$$
|
| 95 |
+
\forall o \in O^{L}: E^{T}[o] = E^{S}[o] \tag{2}
|
| 96 |
+
$$
|
| 97 |
+
|
| 98 |
+
Although the number of lexical overlapping tokens can be substantial when utilizing the same script, such as Latin or Han, this phenomenon does not extend to unseen scripts. To address this challenge, we define the non-lexical alignment set as $A_T^L = V^T \setminus O^L$ and initiate a search for semantically aligned tokens within this set. Despite languages having different scripts, the underlying meanings often converge on similar definitions. To facilitate this alignment, we train two static embeddings—one for the source tokenizer ( $F^S$ ) and another for the target tokenizer ( $F^T$ )—using the monolingual dataset $\mathcal{D}$ . These embeddings are denoted as $F^S[\cdot]$ for the source tokenizer and $F^T[\cdot]$ for the target tokenizer. For each token $v_i$ in $A_T^L$ , we calculate the cosine similarity with every token $v_j$ in $A_S^L = V^S \setminus O^L$ , resulting in a matrix $S_{i,j} \in \mathbb{R}^{|A_T^L| \times |A_S^L|}$ . A pair of semantically aligned tokens $(v_i, v_j)$ is defined as a pair of source and target tokens whose embeddings exhibit the highest cosine similarity score to each other, or:
|
| 99 |
+
|
| 100 |
+
$$
|
| 101 |
+
i = \underset{l}{\operatorname{argmax}} \left(S_{l,j}\right) \quad \text{and} \quad j = \underset{l}{\operatorname{argmax}} \left(S_{i,l}\right) \tag{3}
|
| 102 |
+
$$
|
| 103 |
+
|
| 104 |
+
Referring to Equation 3, we define $S = \{(i,j) \mid i = \operatorname{argmax}_{l} (S_{l,j}) \text{ and } j = \operatorname{argmax}_{l} (S_{i,l})\}$. Each semantically aligned token has its embedding copied from its counterpart in the source embeddings.
|
| 105 |
+
|
| 106 |
+
$$
|
| 107 |
+
\forall (i, j) \in S: E^{T}[i] = E^{S}[j] \tag{4}
|
| 108 |
+
$$
|
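A minimal sketch of this mutual-nearest-neighbour alignment (Equations 3 and 4) is shown below; `F_tgt` and `F_src` stand for the static embeddings of the non-overlapping target and source tokens, and `E_tgt`/`E_src` for the model embedding matrices. All names are illustrative, not the authors' code.

```python
import numpy as np

def semantic_align(F_tgt, F_src):
    """Return pairs (i, j) in which target token i and source token j are each
    other's most similar token under cosine similarity (Eq. 3)."""
    Ft = F_tgt / np.linalg.norm(F_tgt, axis=1, keepdims=True)
    Fs = F_src / np.linalg.norm(F_src, axis=1, keepdims=True)
    S = Ft @ Fs.T                        # |A_T| x |A_S| cosine-similarity matrix
    best_src = S.argmax(axis=1)          # j = argmax_l S[i, l]
    best_tgt = S.argmax(axis=0)          # i = argmax_l S[l, j]
    return [(i, j) for i, j in enumerate(best_src) if best_tgt[j] == i]

# Eq. 4: copy the source embedding for every semantically aligned pair.
# for i, j in semantic_align(F_tgt, F_src):
#     E_tgt[i] = E_src[j]
```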
| 109 |
+
|
| 110 |
+
For the remaining non-aligned tokens, $A_{T} = A_{T}^{L} \setminus S_{i}$ and $A_{S} = A_{S}^{L} \setminus S_{j}$, where $S_{i}, S_{j}$ are the sets of semantically aligned tokens of the target and source vocabularies (i.e., $S_{i} = \{i \mid (i, j) \in S\}$, $S_{j} = \{j \mid (i, j) \in S\}$), we initialize the target embedding using a weighted sum over the aligned target tokens. We compute the cosine similarity between each non-aligned token $a_{T} \in A_{T}$ and the set of aligned target tokens (comprising both lexically and semantically aligned tokens) $o_{T} \in O^{L} \cup S_{i}$.
|
| 111 |
+
|
| 112 |
+
$$
|
| 113 |
+
c_{a,o} = \frac{F^{T}\left[a_{T}\right] F^{T}\left[o_{T}\right]^{\top}}{\left\| F^{T}\left[a_{T}\right] \right\| \cdot \left\| F^{T}\left[o_{T}\right] \right\|} \tag{5}
|
| 114 |
+
$$
|
| 115 |
+
|
| 116 |
+
To obtain the most similar aligned tokens $o_{T}$ for a given token $a_{T}$, we follow the approach of Dobler and de Melo (2023) and apply sparsemax (Martins and Astudillo, 2016) over $c_{a}$, where $c_{a}$ is the vector containing all similarity scores $c_{a,o}$. Sparsemax is a variant of softmax that assigns exactly zero to low-probability elements. This lets us handle the skewed distribution in which some tokens have only one or two similar tokens while others have many more. The weight $w_{a,o}$ for each aligned token $o_{T}$ is defined in Equation 6.
|
| 117 |
+
|
| 118 |
+
$$
|
| 119 |
+
w_{a,o} = \operatorname{sparsemax}_{o}\left(c_{a}\right) \tag{6}
|
| 120 |
+
$$
|
| 121 |
+
|
| 122 |
+
We denote by $S_{a}$ the set of similar aligned tokens, containing every $o_{T}$ that is assigned a non-zero probability by sparsemax.
|
| 123 |
+
|
| 124 |
+
$$
|
| 125 |
+
S_{a} = \left\{ o_{T} \in O^{L} \cup S_{i} \mid w_{a,o} > 0 \right\} \tag{7}
|
| 126 |
+
$$
|
| 127 |
+
|
| 128 |
+
Using the set $S_{a}$ and the weight $w_{a,o}$ , the embedding for the non-aligned token $a_{T}$ is calculated as the weighted sum of its most similar aligned tokens.
|
| 129 |
+
|
| 130 |
+
$$
|
| 131 |
+
\forall a_{T} \in A_{T}: E^{T}[a_{T}] = \sum_{o_{T} \in S_{a}} w_{a_{T}, o_{T}} E^{T}[o_{T}] \tag{8}
|
| 132 |
+
$$
|
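The following sketch puts Equations 5 through 8 together, including a simple sparsemax; `aligned_ids` denotes the indices of tokens in $O^L \cup S_i$, and the array names are assumptions for illustration only.

```python
import numpy as np

def sparsemax(z):
    """Sparsemax (Martins and Astudillo, 2016): projection of z onto the
    probability simplex; low-scoring entries receive exactly zero weight."""
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, z.size + 1)
    cssv = np.cumsum(z_sorted)
    support = 1.0 + k * z_sorted > cssv
    k_max = k[support][-1]
    tau = (cssv[k_max - 1] - 1.0) / k_max
    return np.maximum(z - tau, 0.0)

def init_non_aligned(non_aligned_ids, aligned_ids, F_tgt, E_tgt):
    """Eqs. 5-8: initialize each remaining target token as a sparsemax-weighted
    sum of the embeddings of the already-aligned target tokens."""
    F = F_tgt / np.linalg.norm(F_tgt, axis=1, keepdims=True)
    aligned = np.asarray(aligned_ids)
    for a in non_aligned_ids:
        c_a = F[a] @ F[aligned].T          # cosine similarities (Eq. 5)
        w = sparsemax(c_a)                 # sparse weights (Eq. 6)
        s_a = w > 0                        # support set S_a (Eq. 7)
        E_tgt[a] = (w[s_a][:, None] * E_tgt[aligned[s_a]]).sum(axis=0)  # Eq. 8
    return E_tgt
```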
| 133 |
+
|
| 134 |
+
# 2.3 Model adaptation to new languages & Downstream task training
|
| 135 |
+
|
| 136 |
+
Continual pre-training, also known as language adaptation, has proven to be an effective method for enhancing the downstream performance of zero-shot cross-lingual tasks, as demonstrated by studies such as Ke et al. (2023); Alabi et al. (2022); Ebrahimi and Kann (2021). To mitigate the environmental impact and reduce model storage requirements, we opt to pre-train only a portion of the model, aligning with the approach introduced in MAD-X (Pfeiffer et al., 2020).
|
| 137 |
+
|
| 138 |
+
As shown in Figure 2, we make some modifications to the MAD-X configuration. First, we initialize a new embedding for UniBridge, obtained from the previous stages, and train the embedding together with the adapters while keeping all of the pre-trained LM's parameters frozen. Second, we propose using KL divergence together with the MLM loss (Appendix A). Although the frozen parameters in each layer of the pre-trained LM help guide the trainable adapters to map the new language's embedding representation into the pre-trained LM's embedding space, MLM alone is not sufficient, as it only forces the adapter to predict
|
| 139 |
+
|
| 140 |
+
the masked token, which cannot guarantee that the new language's representation lies in the same multilingual embedding space encoded by the pre-trained LMs. This limitation hinders knowledge transfer during task adaptation, since task adaptation takes a source language (usually a high-resource language such as English or Chinese) and transfers the task knowledge directly to the target language without any alignment between the two languages. Therefore, we use KL divergence as a regularizer to guide the model not only to learn the language representation well, but also to stay in the same space as the source language, in order to achieve better transferability.
|
| 141 |
+
|
| 142 |
+
$$
|
| 143 |
+
\mathcal{L} = \mathcal{L}_{MLM}(y, \hat{y}) + \beta D_{KL}\left(\pi_{UniBridge}(h \mid x) \,\|\, \pi_{PLM}(h \mid x)\right) \tag{9}
|
| 144 |
+
$$
|
| 145 |
+
|
| 146 |
+
$y$ and $\hat{y}$ are the ground truth and the predicted logits of the masked prediction task, respectively. $\pi_{UniBridge}(h|x)$ is the last hidden state of UniBridge, i.e., the output of the invertible adapter before it is fed to the linear classification head for masked prediction. $\pi_{PLM}(h|x)$ is the last hidden state of the pre-trained LM, i.e., the output of the last Transformer layer, as in Figure 2, which is the input to the linear classification head for masked prediction.
|
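A minimal PyTorch sketch of the combined objective in Equation 9 follows. How exactly the hidden states are turned into distributions is not specified here, so the softmax over the hidden dimension and the value of $\beta$ are assumptions for illustration, not the authors' exact implementation.

```python
import torch.nn.functional as F

def adaptation_loss(mlm_logits, labels, hidden_unibridge, hidden_plm, beta=0.1):
    """L = L_MLM + beta * D_KL(pi_UniBridge || pi_PLM)  (Eq. 9).
    beta and the softmax over the hidden dimension are placeholder choices."""
    mlm_loss = F.cross_entropy(mlm_logits.view(-1, mlm_logits.size(-1)),
                               labels.view(-1), ignore_index=-100)
    p = F.softmax(hidden_unibridge, dim=-1)              # UniBridge distribution
    log_q = F.log_softmax(hidden_plm.detach(), dim=-1)   # frozen PLM reference
    kl = F.kl_div(log_q, p, reduction="batchmean")       # D_KL(p || q)
    return mlm_loss + beta * kl
```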
| 147 |
+
|
| 148 |
+
# 2.4 Multi-source transfer downstream task inference
|
| 149 |
+
|
| 150 |
+
Instead of using one task adapter from one source language, we propose aggregating knowledge from multiple source languages to derive a better result. For each target language, we compute a harmony weight, or similarity distance, between languages. Libraries such as Lang2Vec (Malaviya et al., 2017) provide a similarity score between languages; however, they do not cover all languages. To overcome this problem, we directly use the language model that UniBridge produced in the previous stages to measure the similarity between languages. In Appendix D.2, we provide a detailed comparison between Lang2Vec and UniBridge, highlighting the differences and similarities between the two approaches and offering insights into their respective performance and effectiveness.
|
| 151 |
+
|
| 152 |
+
For each target language, we collect $K$ samples of parallel sentences from datasets such as Tatoeba (Tiedemann, 2020) or FLORES-200 (Guzmán et al., 2019; Goyal et al., 2022; Team et al., 2022)
|
| 153 |
+
|
| 154 |
+
between the target language and a set of $N$ source languages.
|
| 155 |
+
|
| 156 |
+
We denote by $\mathcal{D}^T$ the monolingual dataset extracted from the target side of the parallel data, and by $\mathcal{D}^{S_i}$ the monolingual dataset extracted from the source side for the $i$-th source language. Each sentence is fed into UniBridge with the corresponding language adapter to obtain a set of hidden states (i.e., outputs of the invertible adapter).
|
| 157 |
+
|
| 158 |
+
$$
|
| 159 |
+
H_{l} = \left\{ \pi_{UniBridge}^{l}(s) \mid s \in \mathcal{D}^{l} \right\} \tag{10}
|
| 160 |
+
$$
|
| 161 |
+
|
| 162 |
+
$\pi_{UniBridge}^{l}$ is the UniBridge model using the adapter of language $l$; $\mathcal{D}^l$ is $\mathcal{D}^T$ for the target language and $\mathcal{D}^{S_i}$ for the $i$-th source language. We then compute the inverse $L_{2}$ distance between the target hidden states $H_{t}$ for target language $t$ and the source hidden states $H_{s}$ for source language $s$.
|
| 163 |
+
|
| 164 |
+
$$
|
| 165 |
+
d_{t,s} = \frac{1}{\left\| H_{t} - H_{s} \right\|_{2}} \tag{11}
|
| 166 |
+
$$
|
| 167 |
+
|
| 168 |
+
After that, we compute a softmax over the inverse $L_{2}$ distances to obtain the harmony weights between the target language $t$ and the set of source languages $S = \{s_i\}_{i=1}^N$.
|
| 169 |
+
|
| 170 |
+
$$
|
| 171 |
+
w_{t} = \operatorname{softmax}_{s}\left(d_{t,s}\right) \tag{12}
|
| 172 |
+
$$
|
| 173 |
+
|
| 174 |
+
Using these harmony weights, instead of swapping in the task adapter of each source language during inference as in MAD-X, we forward through all the task adapters in parallel. The final prediction is the weighted sum of the logits predicted by each source language, weighted by the harmony weights.
|
| 175 |
+
|
| 176 |
+
$$
|
| 177 |
+
\hat{y} = \sum_{s \in S} w_{t,s} \, \hat{y}_{s} \tag{13}
|
| 178 |
+
$$
|
| 179 |
+
|
| 180 |
+
$\hat{y}_s$ is the logit prediction from source language $s$ .
|
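The sketch below traces Equations 10 through 13: per-language hidden states from $K$ parallel sentences are compared with an inverse $L_2$ distance, softmaxed into harmony weights, and used to mix the logits of the source-language task adapters. Tensor shapes and names are assumptions for illustration.

```python
import torch

def harmony_weights(H_tgt, H_srcs):
    """H_tgt: (K, d) hidden states of K parallel sentences on the target side;
    H_srcs: list of (K, d) tensors, one per source language (Eq. 10).
    Returns one harmony weight per source language (Eqs. 11-12)."""
    d = torch.stack([1.0 / torch.norm(H_tgt - H_s) for H_s in H_srcs])
    return torch.softmax(d, dim=0)

def multi_source_logits(weights, logits_per_source):
    """Eq. 13: weighted sum of the logits predicted with each source task adapter."""
    return sum(w * y for w, y in zip(weights, logits_per_source))
```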
| 181 |
+
|
| 182 |
+
The intuition behind the harmony weight is that, given a pair of parallel sentences, each sentence is encoded by a different language adapter. Since the sentences convey the same meaning, the difference between the resulting hidden states reflects the difference between the languages themselves. Inverting this difference and applying a softmax therefore yields a similarity score that lets us up-weight the languages likely to benefit the target language on downstream tasks and, at the same time, down-weight
|
| 183 |
+
|
| 184 |
+
the languages that are distant from the target language. Through our experiments, we show that multi-source inference outperforms both single-source transfer and multi-language learning settings.
|
| 185 |
+
|
| 186 |
+
# 3 Experimental setup
|
| 187 |
+
|
| 188 |
+
Language set: The set of source languages is English, Chinese, Russian, Arabic, and Japanese. For the target languages, we evaluate 14 low-resource languages from WikiANN (Rahimi et al., 2019), whose training set consists of only 100 samples per language, 9 low-resource languages from Universal Dependencies (UD) (Zeman et al., 2023), whose training sets consist of just a few thousand samples per language, and 10 languages from AmericasNLI (Ebrahimi et al., 2022).
|
| 189 |
+
|
| 190 |
+
Monolingual data: For the language adaptation stage, we extract 10K samples per target language from the Wikipedia dataset on HuggingFace$^{2}$ to simulate the low-resource setting; each sample consists of 128 words. For the source languages, we use 50K samples per language to simulate the rich-resource setting. For the languages in AmericasNLI, we use one side of the translation dataset from Mager et al. (2021).
|
| 191 |
+
|
| 192 |
+
Tokenizer: We use SentencePiece (Kudo and Richardson, 2018) to learn the tokens from the monolingual data, with the vocab size determined by our Algorithm 1.
|
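For reference, a unigram SentencePiece tokenizer can be trained on the monolingual corpus with the vocabulary size returned by Algorithm 1 roughly as follows; file names, the example vocab size, and the sample sentence are placeholders.

```python
import sentencepiece as spm

# Train a unigram model on the monolingual corpus (one sentence per line).
spm.SentencePieceTrainer.train(
    input="monolingual_target_lang.txt",
    model_prefix="target_lang_unigram",
    vocab_size=12_000,            # value returned by the vocab-size search
    model_type="unigram",
    character_coverage=1.0,       # keep all characters, important for unseen scripts
)

sp = spm.SentencePieceProcessor(model_file="target_lang_unigram.model")
print(sp.encode("ሰላም ዓለም", out_type=str))  # sub-tokens for an Amharic sample
```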
| 193 |
+
|
| 194 |
+
Downstream data: NER: We train UniBridge on the train split of WikiANN for all source languages and perform inference on the target languages' test split. POS: We train UniBridge on the train split of UD for all source languages. NLI: We train UniBridge on the train split of XNLI (Conneau et al., 2018) for English, Chinese, Arabic, and Russian, since XNLI does not include a Japanese set.
|
| 195 |
+
|
| 196 |
+
Baseline: We evaluate UniBridge against the MAD-X framework and zero-shot cross-lingual fine-tuning using pre-trained language models (LMs). In the zero-shot approach, we fine-tune the entire pre-trained LM on the combined training data of all source languages and then directly infer on the target languages. With MAD-X, we adhere to its standard setup, training on monolingual data. To perform multi-language training, we combine training data from all source languages to train a "universal" task adapter. For inference, we swap
|
| 197 |
+
|
| 198 |
+
the language adapter for each target language and integrate the "universal" task adapter. For UniBridge, we implement the language adaptation and task training stages as detailed in Section 2.3. During inference, we combine the task adapters from the 5 source languages for multi-source transfer and report the F1 score for NER and the accuracy for POS and NLI on the target language's test split.
|
| 199 |
+
|
| 200 |
+
The hyperparameters for training, inference as well as the computational resources are given in Appendix C.
|
| 201 |
+
|
| 202 |
+
# 4 Results and Analysis
|
| 203 |
+
|
| 204 |
+
We present the results of our method and the baselines in Tables 1 and 2 for the NER task and Tables 3 and 4 for the POS tagging task. We report the NLI results in Tables 12 and 13 in Appendix D.1. UniBridge outperforms strong baselines such as the fully fine-tuned models (XLM-R, mBERT) and the MAD-X framework by a large margin, i.e., with the XLM-R backbone we outperform the baselines on 11 out of 14 languages. For the POS tagging task, we outperform both baselines with both backbone models, and we observe the same trend for NLI (Appendix D.1). This highlights the effect of leveraging multiple source languages during inference: each language contributes knowledge that benefits the model at prediction time. Meanwhile, multi-language training offers more robust performance but also introduces more difficulty during training. The fact that UniBridge outperforms strong baselines such as the fully fine-tuned model indicates that, given a small monolingual corpus and lightweight adaptation using adapters, we can significantly improve cross-lingual tasks for uncovered languages. Compared to MAD-X, our approach differs in its use of a new embedding layer. For unseen languages, using a language-specific embedding layer can remarkably enhance performance. Even though MAD-X already uses invertible adapters to adapt the embedding layer to unseen languages, these components may not be sufficient for rare languages with unseen scripts such as Amharic (amh), Khmer (khm), and Kannada (kan). In addition, to evaluate UniBridge with large (decoder-style) language models (LLMs), we expanded our experiments beyond XLM-R and mBERT to include mGPT (Shliazhko et al., 2024) and mBART (Liu et al., 2020). This extension provides a more robust assessment of UniBridge's effectiveness across different model types, demonstrating its versatility
|
| 205 |
+
|
| 206 |
+
<table><tr><td></td><td>amh</td><td>ang</td><td>cdo</td><td>crh</td><td>eml</td><td>frr</td><td>khm</td><td>kan</td><td>lij</td><td>pbt</td><td>san</td><td>snd</td><td>sin</td><td>som</td><td>Average</td></tr><tr><td>XLM-R</td><td>43.31</td><td>52.71</td><td>22.04</td><td>44.62</td><td>40</td><td>44.17</td><td>40.69</td><td>45.34</td><td>40.45</td><td>46</td><td>41.28</td><td>43.13</td><td>50.03</td><td>50.23</td><td>43.14</td></tr><tr><td>MAD-X (XLM-R)</td><td>39.3</td><td>46.59</td><td>17.32</td><td>36.63</td><td>33.86</td><td>39.51</td><td>50</td><td>45.24</td><td>38.13</td><td>42.66</td><td>19.93</td><td>39.06</td><td>39.55</td><td>49.6</td><td>38.38</td></tr><tr><td>UniBridge (XLM-R)</td><td>49.6</td><td>43.24</td><td>42.91</td><td>46.03</td><td>40.15</td><td>50.67</td><td>42.67</td><td>48.72</td><td>45.16</td><td>46.09</td><td>29.74</td><td>51.32</td><td>52.86</td><td>54.22</td><td>45.95</td></tr></table>
|
| 207 |
+
|
| 208 |
+
Table 1: The results of the F1 Score for every setup with XLM-R as a backbone are showcased in 14 diverse languages of WikiANN. We highlight in **bold** the highest F1 score and **underline** the second highest of each target language for each backbone model.
|
| 209 |
+
|
| 210 |
+
<table><tr><td></td><td>amh</td><td>ang</td><td>cdo</td><td>crh</td><td>eml</td><td>frr</td><td>khm</td><td>kan</td><td>lij</td><td>pbt</td><td>san</td><td>snd</td><td>sin</td><td>som</td><td>Average</td></tr><tr><td>mBERT</td><td>12.87</td><td>52.24</td><td>19.76</td><td>47.81</td><td>39.71</td><td>51.3</td><td>18.46</td><td>42.86</td><td>45</td><td>25.86</td><td>30.71</td><td>13.61</td><td>2.79</td><td>46.15</td><td>32.08</td></tr><tr><td>MAD-X (mBERT)</td><td>13.91</td><td>51.48</td><td>16.22</td><td>46.22</td><td>39.2</td><td>45.76</td><td>19.2</td><td>31.3</td><td>37.35</td><td>29.25</td><td>22.96</td><td>20.31</td><td>12.34</td><td>37.66</td><td>30.23</td></tr><tr><td>UniBridge (mBERT)</td><td>15.46</td><td>53.28</td><td>30.42</td><td>45.67</td><td>36.15</td><td>54.72</td><td>19.49</td><td>44.07</td><td>45.49</td><td>39.33</td><td>20.55</td><td>42.36</td><td>13.68</td><td>62.28</td><td>37.35</td></tr></table>
|
| 211 |
+
|
| 212 |
+
Table 2: The results of the F1 Score for every setup with mBERT as a backbone are showcased in 14 diverse languages of WikiANN. We highlight in bold the highest F1 score and underline the second highest of each target language for each backbone model.
|
| 213 |
+
|
| 214 |
+
and potential in leveraging various LLM architectures for improved language representation. The results are presented in Appendix D.8, showcasing the comparative performance and strengths of UniBridge in diverse settings.
|
| 215 |
+
|
| 216 |
+
Although UniBridge can successfully improve cross-lingual generalization, there are still some inconsistencies in the performance of a language on different tasks, e.g., Amharic (amh), Ligurian (lij), and Sanskrit (san) on NER and POS tasks. We hypothesize that the inconsistency arises from the misalignment in the subspace between the language adapter and the task adapter. One approach to mitigate this misalignment is to regularize the representation so that the newly learned representation is shared between the source and target languages. UniBridge leverages KL divergence as a regularization approach. This may not be sufficient to completely resolve the inconsistency, but given our constrained resources, KL divergence fits our requirements well. We leave other advanced methods, such as optimal transport or contrastive learning, for future work.
|
| 217 |
+
|
| 218 |
+
# 5 Ablation study
|
| 219 |
+
|
| 220 |
+
# 5.1 Contribution of each component
|
| 221 |
+
|
| 222 |
+
We study the contribution of each UniBridge component independently to identify the critical parts of each module. To remove KL divergence, we simply drop the KL term from Equation 9, keeping only the MLM loss. To remove the embedding initialization component, we randomly initialize the embedding from the Xavier normal distribution (Glorot and Bengio, 2010). To remove the vocab size search component, we fix the vocab size to $10k$ for every target language and
|
| 223 |
+
|
| 224 |
+
use SentencePiece (Kudo and Richardson, 2018). To remove the multi-source transfer, we use English as the single source language, given its wide use in cross-lingual transfer work.
|
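For instance, the embedding-initialization ablation can be reproduced with a plain Xavier-normal embedding (the sizes below are placeholders):

```python
import torch.nn as nn

# Ablation baseline: a randomly initialized embedding drawn from the
# Xavier normal distribution instead of UniBridge's initialization.
vocab_size, hidden_size = 10_000, 768
embedding = nn.Embedding(vocab_size, hidden_size)
nn.init.xavier_normal_(embedding.weight)
```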
| 225 |
+
|
| 226 |
+
We report the mean and standard deviation of the F1 scores across the 14 languages for the two backbone models, both for full UniBridge and with each component removed, in Table 5; per-language details can be found in Appendix D.3. Among the components, embedding initialization plays the most critical role: removing it leads to performance drops of about 39 and 20 F1 points for XLM-R and mBERT, respectively. For the multi-source transfer component, mBERT experiences the larger drop, losing about 11 F1 points, while XLM-R goes down from about 45 to 42. However, the standard deviation when removing multi-source transfer is larger than that of UniBridge (XLM-R), indicating that multi-source transfer benefits more languages than transfer from a single language. Although removing KL divergence from XLM-R improves its performance by about 1 F1 point, the standard deviation also increases by about 1 point; thus, KL divergence helps maintain a more stable improvement across languages. On the other hand, removing KL divergence with mBERT as the backbone hurts performance, dropping about 3 F1 points. To further clarify the effectiveness of KL divergence on other models, we conducted the experiments in Appendix D.4. Vocabulary size searching with a dynamic vocab size significantly improves performance for the mBERT backbone, with an improvement of about 7 F1 points. This implies that the vocabulary size should be chosen dynamically for each language to best adapt to its linguistic features.
|
| 227 |
+
|
| 228 |
+
<table><tr><td></td><td>amh</td><td>lij</td><td>olo</td><td>san</td><td>snd</td><td>sin</td><td>tam</td><td>tgl</td><td>tat</td><td>Average</td></tr><tr><td>XLM-R</td><td>46.02</td><td>39.15</td><td>60.69</td><td>32.9</td><td>70.01</td><td>76.25</td><td>85.53</td><td>67.45</td><td>57.89</td><td>59.54</td></tr><tr><td>MAD-X (XLM-R)</td><td>47.72</td><td>58.28</td><td>69.48</td><td>36.1</td><td>71.2</td><td>73.86</td><td>83.85</td><td>69.01</td><td>65.83</td><td>63.88</td></tr><tr><td>UniBridge (XLM-R)</td><td>40.88</td><td>73.75</td><td>81.45</td><td>38.94</td><td>71.37</td><td>63.52</td><td>83.5</td><td>72.62</td><td>81.3</td><td>67.81</td></tr></table>
|
| 229 |
+
|
| 230 |
+
Table 3: The results of the accuracy for every setup with XLM as a backbone are showcased in 9 diverse languages of UD. We highlight in bold the highest accuracy score and underline the second highest of each target language for each backbone model.
|
| 231 |
+
|
| 232 |
+
<table><tr><td></td><td>amh</td><td>lij</td><td>olo</td><td>san</td><td>snd</td><td>sin</td><td>tam</td><td>tgl</td><td>tat</td><td>Average</td></tr><tr><td>mBERT</td><td>8.59</td><td>60.66</td><td>61.49</td><td>9.35</td><td>20.39</td><td>11.47</td><td>72.93</td><td>66.3</td><td>83.2</td><td>37.82</td></tr><tr><td>MAD-X (mBERT)</td><td>13.31</td><td>50.47</td><td>59.61</td><td>10.88</td><td>24.93</td><td>25.68</td><td>66.61</td><td>55.56</td><td>74.17</td><td>42.36</td></tr><tr><td>UniBridge (mBERT)</td><td>29.24</td><td>65.53</td><td>70.65</td><td>12.86</td><td>66.78</td><td>52.61</td><td>75.23</td><td>70.65</td><td>84.16</td><td>58.64</td></tr></table>
|
| 233 |
+
|
| 234 |
+
Table 4: The results of the accuracy for every setup with mBERT as a backbone are showcased in 9 diverse languages of UD. We highlight in bold the highest accuracy score and underline the second highest of each target language for each backbone model.
|
| 235 |
+
|
| 236 |
+
<table><tr><td></td><td>XLM-R</td><td>mBERT</td></tr><tr><td>UniBridge</td><td>45.95±6.28</td><td>37.35±15.38</td></tr><tr><td>- KL Divergence</td><td>46.87±7.02</td><td>34.78±17.48</td></tr><tr><td>- Embedding initialization</td><td>6.56±6.11</td><td>10.21±8.72</td></tr><tr><td>- Vocab size searching</td><td>45.48±7.54</td><td>30.59±14.55</td></tr><tr><td>- Multi-source transfer</td><td>42.05±9.91</td><td>25.66±12.3</td></tr></table>
|
| 237 |
+
|
| 238 |
+
Table 5: The performance of UniBridge when removing each component independently. Each removed component is indicated by a minus sign (-). For each removed component, the other components remain at their default configuration.
|
| 239 |
+
|
| 240 |
+
# 5.2 Vocabulary size
|
| 241 |
+
|
| 242 |
+
In this section, we contrast our approach with a novel technique for vocabulary initialization called EXTEND (Wang et al., 2020). EXTEND operates by initially expanding mBERT's vocabulary to accommodate the new language and then proceeding with pre-training on this language. In our comparison, EXTEND undergoes full fine-tuning for the MLM pre-training task. Subsequently, EXTEND is further fine-tuned using the monolingual data of each target language. Despite its extensive fine-tuning and high computational requirements, EXTEND does not perform satisfactorily on NER in comparison to UniBridge, as illustrated in Table 19 in Appendix D.5. UniBridge offers a much lighter and faster alternative, employing adapters for cross-lingual transfer learning. The lightweight and rapid nature of UniBridge significantly enhances the effectiveness of our method. Furthermore, we present an elaborate Table 20 containing various vocabulary sizes for each target language in the Appendix D.6. Regarding the lexical similarity of subwords in the vocabulary, we offer illustrations of subwords that exhibit similarity in both mBERT and XLM-R. These examples can be
|
| 243 |
+
|
| 244 |
+
found in Figure 4 within Appendix B.
|
| 245 |
+
|
| 246 |
+
# 5.3 ALP Threshold
|
| 247 |
+
|
| 248 |
+
We conducted experiments using different ALP thresholds to identify the most effective one. We tested threshold values such as 5.0, 10.0, and 15.0 during the pre-training process of UniBridge. In essence, raising the threshold led to a decrease in vocabulary size as the algorithm ended prematurely. As a result, we noticed a decrease in the F1-Score of mBERT and XLM-R as the threshold values increased, as illustrated in Figure 3.
|
| 249 |
+
|
| 250 |
+

|
| 251 |
+
Figure 3: Mean F1-Score across various ALP thresholds.
|
| 252 |
+
|
| 253 |
+
More experiments and ablation study can be found in Appendix D.
|
| 254 |
+
|
| 255 |
+
# 6 Related works
|
| 256 |
+
|
| 257 |
+
Dynamic vocabulary size: It is common among NLP practitioners to treat the vocabulary size as a hyperparameter that requires manual setting. Algorithms such as BPE (Gage, 1994),
|
| 258 |
+
|
| 259 |
+
WordPiece (Wu et al., 2016), and SentencePiece (Kudo and Richardson, 2018) focus on how to build a predefined number of tokens statistically derived from the monolingual dataset. Some works, such as VoCAP (Zheng et al., 2021) and XLM-V (Liang et al., 2023), propose algorithms to dynamically assign a vocabulary size to each language during multilingual training. In monolingual training, however, only a few works address this problem, such as BPE-dropout (Provilkov et al., 2020) and VOLT (Xu et al., 2021): VOLT learns an optimal vocab size by reducing the original vocab size using optimal transport, while BPE-dropout randomly removes merges in BPE.
|
| 260 |
+
|
| 261 |
+
Initialization: Artetxe et al. (2020) proposed to randomly initialize the new embedding for new language adaptation. Meanwhile, Wang et al. (2020), Chau et al. (2020), Pfeiffer et al. (2021) leverage the lexical similarity between the old vocabulary and the new vocabulary to initialize the embedding. On the other hand, there are works that explore the semantic space for initialization. SMALA (Vernikos and Popescu-Belis, 2021) directly finds the aligned token through the highest cosine similarity score. WECHSEL (Minixhofer et al., 2022) and FOCUS (Dobler and de Melo, 2023) use static embedding to find aligned tokens.
|
| 262 |
+
|
| 263 |
+
Multi-source transfer: Single-source transfer, especially with English as the source language, has received much attention. Artetxe et al. (2020), Ansell et al. (2021), and Tu et al. (2022) leverage a multilingual backbone model, fine-tune on the English downstream task, and perform zero-shot transfer on the target language's test set. More recently, researchers have pointed out that using a multilingual training set is more beneficial than a single language. DeMuX (Khanuja et al., 2023) works at the dataset level, using active learning to accumulate the examples that most benefit transfer. Dossou et al. (2022) and Ogunremi et al. (2023) pre-train on multilingual African datasets before distilling knowledge to downstream tasks.
|
| 264 |
+
|
| 265 |
+
# 7 Conclusion
|
| 266 |
+
|
| 267 |
+
In this paper, we investigate Cross-Lingual Transfer Learning, focusing on languages with constrained resources. Our contribution lies in an algorithm that autonomously determines the optimal vocabulary size for a new language, informed by its monolingual corpus, and an innovative method for initializing a new embedding matrix, drawing from
|
| 268 |
+
|
| 269 |
+
both semantic and lexical facets of the pre-trained language models. Additionally, we introduce a novel technique for aggregating multi-source transfer learning, enhancing the efficacy of cross-lingual transfer tasks. Our empirical tests demonstrate the adaptability of our method across different models, yielding significant enhancements in performance. A thorough investigation of key elements highlights UniBridge's effectiveness in various situations, offering an in-depth understanding of the robustness of our approach.
|
| 270 |
+
|
| 271 |
+
# Limitation
|
| 272 |
+
|
| 273 |
+
UniBridge is trained on text extracted from Wikipedia with some heuristic noise filtering. However, we believe that further pre-processing, such as language identification and a noise-filtering pipeline, could produce higher-quality monolingual data, potentially improving the language adaptation stage. UniBridge uses adapters to perform cross-lingual generalization; while this leverages the modular nature of adapters, it also inherits some limitations of adapters themselves (Kunz and Holmstrom, 2024; Alabi et al., 2024).
|
| 274 |
+
|
| 275 |
+
# References
|
| 276 |
+
|
| 277 |
+
Jesujoba O. Alabi, David Ifeoluwa Adelani, Marius Mosbach, and Dietrich Klakow. 2022. Adapting pretrained language models to African languages via multilingual adaptive fine-tuning. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4336-4349, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
|
| 278 |
+
Jesujoba O. Alabi, Marius Mosbach, Matan Eyal, Dietrich Klakow, and Mor Geva. 2024. The hidden space of transformer language adapters.
|
| 279 |
+
Alan Ansell, Edoardo Maria Ponti, Jonas Pfeiffer, Sebastian Ruder, Goran Glavaš, Ivan Vulić, and Anna Korhonen. 2021. MAD-G: Multilingual adapter generation for efficient cross-lingual transfer. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4762–4781, Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 280 |
+
Mikel Artetxe, Sebastian Ruder, and Dani Yogatama. 2020. On the cross-lingual transferability of monolingual representations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4623–4637, Online. Association for Computational Linguistics.
|
| 281 |
+
Ethan C. Chau, Lucy H. Lin, and Noah A. Smith. 2020. Parsing with multilingual BERT, a small corpus, and
|
| 282 |
+
|
| 283 |
+
a small treebank. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1324-1334, Online. Association for Computational Linguistics.
|
| 284 |
+
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettle-moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
|
| 285 |
+
Alexis Conneau, Rudy Rinott, Guillaume Lample, Adina Williams, Samuel Bowman, Holger Schwenk, and Veselin Stoyanov. 2018. XNLI: Evaluating crosslingual sentence representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2475-2485, Brussels, Belgium. Association for Computational Linguistics.
|
| 286 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
|
| 287 |
+
Konstantin Dobler and Gerard de Melo. 2023. Focus: Effective embedding initialization for monolingual specialization of multilingual models.
|
| 288 |
+
Bonaventure F. P. Dossou, Atnafu Lambebo Tonja, Green Yousuf, Salomey Osei, Abigail Oppong, Iyanuoluwa Shode, Oluwabusayo Olufunke Awoyomi, and Chris Emezue. 2022. AfroLM: A self-active learning-based multilingual pretrained language model for 23 African languages. In Proceedings of The Third Workshop on Simple and Efficient Natural Language Processing (SustaiNLP), pages 52-64, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
|
| 289 |
+
Abteen Ebrahimi and Katharina Kann. 2021. How to adapt your pretrained multilingual model to 1600 languages. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4555-4567, Online. Association for Computational Linguistics.
|
| 290 |
+
Abteen Ebrahimi, Manuel Mager, Arturo Oncevay, Vishrav Chaudhary, Luis Chiruzzo, Angela Fan, John Ortega, Ricardo Ramos, Annette Rios, Ivan Vladimir Meza Ruiz, Gustavo Giménez-Lugo, Elisabeth Mager, Graham Neubig, Alexis Palmer, Rolando Coto-Solano, Thang Vu, and Katharina Kann. 2022. AmericasNLI: Evaluating zero-shot natural language
|
| 291 |
+
|
| 292 |
+
understanding of pretrained multilingual models in truly low-resource languages. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6279-6299, Dublin, Ireland. Association for Computational Linguistics.
|
| 293 |
+
Philip Gage. 1994. A new algorithm for data compression. C Users Journal, 12(2):23-38.
|
| 294 |
+
Xavier Glorot and Yoshua Bengio. 2010. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 249-256, Chia Laguna Resort, Sardinia, Italy. PMLR.
|
| 295 |
+
Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Krishnan, Marc'Aurelio Ranzato, Francisco Guzmán, and Angela Fan. 2022. The Flores-101 evaluation benchmark for low-resource and multilingual machine translation. Transactions of the Association for Computational Linguistics, 10:522-538.
|
| 296 |
+
Francisco Guzmán, Peng-Jen Chen, Myle Ott, Juan Pino, Guillaume Lample, Philipp Koehn, Vishrav Chaudhary, and Marc'Aurelio Ranzato. 2019. The FLORES evaluation datasets for low-resource machine translation: Nepali-English and Sinhala-English. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6098-6111, Hong Kong, China. Association for Computational Linguistics.
|
| 297 |
+
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization. CoRR, abs/2003.11080.
|
| 298 |
+
Zixuan Ke, Yijia Shao, Haowei Lin, Tatsuya Konishi, Gyuhak Kim, and Bing Liu. 2023. Continual pretraining of language models. In The Eleventh International Conference on Learning Representations.
|
| 299 |
+
Simran Khanuja, Srinivas Gowiraj, Lucio Dery, and Graham Neubig. 2023. Demux: Data-efficient multilingual learning.
|
| 300 |
+
Taehyeon Kim, Jaehoon Oh, NakYil Kim, Sangwook Cho, and Se-Young Yun. 2021. Comparing Kullback-Leibler divergence and mean squared error loss in knowledge distillation.
|
| 301 |
+
Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and tokenizer for neural text processing. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 66-71, Brussels, Belgium. Association for Computational Linguistics.
|
| 302 |
+
|
| 303 |
+
Jenny Kunz and Oskar Holmstrom. 2024. The impact of language adapters in cross-lingual transfer for NLU. In Proceedings of the 1st Workshop on Modular and Open Multilingual NLP (MOOMIN 2024), pages 24–43, St Julians, Malta. Association for Computational Linguistics.
|
| 304 |
+
Khoi M. Le, Trinh Pham, Tho Quan, and Anh Tuan Luu. 2024. LAMPAT: Low-Rank Adaption for Multilingual Paraphrasing Using Adversarial Training. In Proceedings of the AAAI Conference on Artificial Intelligence.
|
| 305 |
+
Davis Liang, Hila Gonen, Yuning Mao, Rui Hou, Naman Goyal, Marjan Ghazvininejad, Luke Zettlemoyer, and Madian Khabsa. 2023. XLM-V: Overcoming the vocabulary bottleneck in multilingual masked language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 13142-13152, Singapore. Association for Computational Linguistics.
|
| 306 |
+
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pretraining for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742.
|
| 307 |
+
Manuel Mager, Arturo Oncevay, Abteen Ebrahimi, John Ortega, Annette Rios, Angela Fan, Ximena Gutierrez-Vasques, Luis Chiruzzo, Gustavo Giménez-Lugo, Ricardo Ramos, Ivan Vladimir Meza Ruiz, Rolando Coto-Solano, Alexis Palmer, Elisabeth Mager-Hois, Vishrav Chaudhary, Graham Neubig, Ngoc Thang Vu, and Katharina Kann. 2021. Findings of the AmericasNLP 2021 shared task on open machine translation for indigenous languages of the Americas. In Proceedings of the First Workshop on Natural Language Processing for Indigenous Languages of the Americas, pages 202-217, Online. Association for Computational Linguistics.
|
| 308 |
+
Chaitanya Malaviya, Graham Neubig, and Patrick Littell. 2017. Learning language representations for typology prediction. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2529-2535, Copenhagen, Denmark. Association for Computational Linguistics.
|
| 309 |
+
Andre F. T. Martins and Ramón Fernandez Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification.
|
| 310 |
+
Benjamin Minixhofer, Fabian Paischer, and Navid Rekabsaz. 2022. WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3992-4006, Seattle, United States. Association for Computational Linguistics.
|
| 311 |
+
|
| 312 |
+
Tolulope Ogunremi, Dan Jurafsky, and Christopher Manning. 2023. Mini but mighty: Efficient multilingual pretraining with linguistically-informed data selection. In Findings of the Association for Computational Linguistics: EACL 2023, pages 1251-1266, Dubrovnik, Croatia. Association for Computational Linguistics.
|
| 313 |
+
Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Sebastian Ruder. 2020. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7654–7673, Online. Association for Computational Linguistics.
|
| 314 |
+
Jonas Pfeiffer, Ivan Vulic, Iryna Gurevych, and Sebastian Ruder. 2021. UNKs everywhere: Adapting multilingual language models to new scripts. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10186-10203, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 315 |
+
Ivan Provilkov, Dmitrii Emelianenko, and Elena Voita. 2020. BPE-dropout: Simple and effective subword regularization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1882-1892, Online. Association for Computational Linguistics.
|
| 316 |
+
Afshin Rahimi, Yuan Li, and Trevor Cohn. 2019. Massively multilingual transfer for NER. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 151-164, Florence, Italy. Association for Computational Linguistics.
|
| 317 |
+
Oleh Shliazhko, Alena Fenogenova, Maria Tikhonova, Anastasia Kozlova, Vladislav Mikhailov, and Tatiana Shavrina. 2024. mGPT: Few-Shot Learners Go Multilingual. Transactions of the Association for Computational Linguistics, 12:58-79.
|
| 318 |
+
Koustuv Sinha, Robin Jia, Dieuwke Hupkes, Joelle Pineau, Adina Williams, and Douwe Kiela. 2021. Masked language modeling and the distributional hypothesis: Order word matters pre-training for little. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2888-2913, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 319 |
+
NLLB Team, Marta R. Costa-jussà, James Cross, Onur Celebi, Maha Elbayad, Kenneth Heafield, Kevin Heffernan, Elahe Kalbassi, Janice Lam, Daniel Licht, Jean Maillard, Anna Sun, Skyler Wang, Guillaume Wenzek, Al Youngblood, Bapi Akula, Loic Barrault, Gabriel Mejia Gonzalez, Prangthip Hansanti, John Hoffman, Semarley Jarrett, Kaushik Ram Sadagopan, Dirk Rowe, Shannon Spruit, Chau Tran, Pierre Andrews, Necip Fazil Ayan, Shruti Bhosale, Sergey Edunov, Angela Fan, Cynthia Gao, Vedanuj Goswami, Francisco Guzmán, Philipp Koehn, Alexandre Mourachko, Christophe Ropers, Safiyyah Saleem, Holger Schwenk, and Jeff Wang.
|
| 320 |
+
|
| 321 |
+
2022. No language left behind: Scaling human-centered machine translation.
|
| 322 |
+
Jörg Tiedemann. 2020. The tatoeba translation challenge - realistic data sets for low resource and multilingual MT. In Proceedings of the Fifth Conference on Machine Translation, pages 1174-1182, Online. Association for Computational Linguistics.
|
| 323 |
+
Lifu Tu, Caiming Xiong, and Yingbo Zhou. 2022. Prompt-tuning can be much better than fine-tuning on cross-lingual understanding with multilingual language models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 5478-5485, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
|
| 324 |
+
Ahmet Üstün, Arianna Bisazza, Gosse Bouma, and Gertjan van Noord. 2020. UDapter: Language adaptation for truly Universal Dependency parsing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2302-2315, Online. Association for Computational Linguistics.
|
| 325 |
+
Daan van Esch, Tamar Lucassen, Sebastian Ruder, Isaac Caswell, and Clara Rivera. 2022. Writing system and speaker metadata for $2,800+$ language varieties. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 5035-5046, Marseille, France. European Language Resources Association.
|
| 326 |
+
Giorgos Vernikos and Andrei Popescu-Belis. 2021. Subword mapping and anchoring across languages. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2633-2647, Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 327 |
+
Zihan Wang, Karthikeyan K, Stephen Mayhew, and Dan Roth. 2020. Extending multilingual BERT to low-resource languages. In *Findings of the Association for Computational Linguistics: EMNLP* 2020, pages 2649–2656, Online. Association for Computational Linguistics.
|
| 328 |
+
Shijie Wu and Mark Dredze. 2020. Are all languages created equal in multilingual BERT? In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 120-130, Online. Association for Computational Linguistics.
|
| 329 |
+
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation.
|
| 330 |
+
|
| 331 |
+
Jingjing Xu, Hao Zhou, Chun Gan, Zaixiang Zheng, and Lei Li. 2021. Vocabulary learning via optimal transport for neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 7361-7373, Online. Association for Computational Linguistics.
|
| 332 |
+
Daniel Zeman, Joakim Nivre, Mitchell Abrams, Elia Ackermann, Noëmi Aepli, Hamid Aghaei, Željko Agić, Amir Ahmadi, Lars Ahrenberg, Chika Kennedy Ajede, Gabrielé Aleksandraviciţe, Ika Alfina, Lene Antonsen, Katya Aplonova, Angelina Aquino, Carolina Aragon, Maria Jesus Aranzabe, Hórumn Arnardóttir, Gashaw Arutie, Jessica Naraiswari Arwidarasti, Masayuki Asahara, Luma Ateyah, Furkan Atmaca, Mohammed Attia, Aitziber Atutxa, Liesbeth Augustinus, Elena Badmaeva, Keerthana Balasubramani, Miguel Ballesteros, Esha Banerjee, Sebastian Bank, Verginica Barbu Mittelu, Victoria Basmov, Colin Batchelor, John Bauer, Seyyit Talha Bedir, Kepa Bengoetxea, Gözde Berk, Yevgeni Berzak, Irshad Ahmad Bhat, Riyadh Ahmad Bhat, Erica Biagetti, Eckhard Bick, Agne Bielinskiè, Kristin Bjarnadóttir, Rogier Blokland, Victoria Bobicev, Ioïc Boizou, Emanuel Borges Völker, Carl Börstell, Cristina Bosco, Gosse Bouma, Sam Bowman, Adriane Boyd, Kristina Brokaitè, Aljoscha Burchardt, Marie Candido, Bernard Caron, Gauthier Caron, Tatiana Cavalcanti, Gulsen Cebiroglu Eryigit, Flavio Massimiliano Cecchini, Giuseppe G. A. Celano, Slavomir Ceplo, Savas Cetin, Ozlem Cetinoglu, Fabricio Chalub, Ethan Chi, Yongseok Cho, Jinho Choi, Jayeol Chun, Alessandra T. Cignarella, Silvie Cinkova, Aurelie Collomb, Cagrí Coltekin, Miriam Connor, Marine Courtin, Elizabeth Davidson, Marie-Catherine de Marneffé, Valeria de Paiva, Mehmet Oguz Derin, Elvis de Souza, Arantza Diaz de Ilarraza, Carly Dickerson, Arawinda Dinakaramani, Bamba Dione, Peter Dirix, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Hanne Eckhoff, Marhaba Eli, Ali Elkahky, Binyam Ephrem, Olga Erina, Tomaz Erjavec, Aline Etienne, Wograine Evelyn, Sidney Facundes, Richard Farkas, Marília Fernanda, Hector Fernandez Alcalde, Jennifer Foster, Cláudia Freitas, Kazunori Fujita, Katarína Gajdosová, Daniel Galbraith, Marcos Garcia, Moa Gärdenfors, Sebastian Garza, Fabricio Ferraz Gerardi, Kim Gerdes, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh Gökirmak, Yoav Goldberg, Xavier Gómez Guinovart, Berta González Saavedra, Bernadeta Griciūtē, Matias Grioni, Loïc Grobol, Normunds Gržītis, Bruno Guillaume, Céline Guillot-Barbance, Tunga Güngör, Nizar Habash, Hinrik Hafsteinsson, Jan Hajic, Jan Hajic jr., Mika Hämäläinen, Linh Hà Mya Na-Rae Han, Muhammad Yüistira Hanifmuti, Sam Hardwick, Kim Harris, Dag Haug, Johannes Heinecke, Oliver Hellwig, Felix Hennig, Barbora Hladká Jaroslava Hlaváčová Florinel Hociung Petter Hohle Eva Huber Jena Hwang Takumi Ikeda Anton Karl Ingason Radu Ion Elena Irimia Olajíde Ishola Tomás Jelinek Anders Johannsen Hildur Jónsdóttir Fredrik Jor
|
| 333 |
+
|
| 334 |
+
gensen, Markus Juutinen, Sarveswaran K, Huner Kašikara, Andre Kaasen, Nadezhda Kabaeva, Sylvain Kahane, Hiroshi Kanayama, Jenna Kanerva, Boris Katz, Tolga Kayadelen, Jessica Kenney, Václava Kettnerova, Jesse Kirchner, Elena Klementieva, Arne Köhn, Abdullatif Köksal, Kamil Kopacewicz, Timo Korkiakangas, Natalia Kotsyba, Jolanta Kovalevskaité, Simon Krek, Parameswari Krishnamurthy, Sookyoung Kwak, Veronika Laippala, Lucia Lam, Lorenzo Lambertino, Tatiana Lando, Septina Dian Larasati, Alexei Lavrentiev, John Lee, Phng Lê Hong, Alessandro Lenci, Sarah Lertpradit, Herman Leung, Maria Levina, Cheuk Ying Li, Josie Li, Keying Li, Yuan Li, KyungTae Lim, Krister Linden, Nikola Ljubesic, Olga Loginova, Andry Luthfi, Mikko Luukko, Olga Lyashevskaya, Teresa Lynn, Vivien Macketanz, Aibek Makazhanov, Michael Mandl, Christopher Manning, Ruli Manurung, Catalina Maranduc, David Marcek, Katrin Marheinecke, Héctor Martínez Alonso, André Martins, Jan Masek, Hiroshi Matsuda, Yuji Matsumoto, Ryan McDonald, Sarah McGuinness, Gustavo Mendonca, Niko Miekka, Karina Mischenkova, Margarita Misirpashayeva, Anna Missilä, Catalan Mititelu, Maria Mitrofan, Yusuke Miyao, Amir Hossein Mojiri Foroushani, Amirsaeid Moloodi, Simonetta Montemagni, Amir More, Laura Moreno Romero, Keiko Sophie Mori, Shinsuke Mori, Tomohiko Morioka, Shigeki Moro, Bjartur Mortensen, Bohdan Moskalevskyi, Kadri Muischnek, Robert Munro, Yugo Murawaki, Kaili Muürisep, Pinkey Nainwani, Mariam Nakhlé, Juan Ignacio Navarro Horniacek, Anna Nedoluzhko, Gunta Nespore-Berzkalne,Lng Nguyen Thi, Huyen Nguyen Thi Minh, Yoshihiro Nikaido, Vitaly Nikolaev, Rattima Nitisaroj, Alireza Nourian, Hanna Nurmi, Stina Ojala, Atul Kr. Ojha, Adédayo Olókun, Mai Omura, Emeka Onwuegbuzia, Petya Osenova, Robert Östling, Lilja Övrelid, Şaziye Betül Özates, Arzucan Özgür, Balkiz Öztürk Başaran, Niko Partanen, Elena Pascual, Marco Passarotti, Agnieszka Patejuk, Guilherme Paulino-Passos, Angelika Peljak-Lapińska, Siyao Peng, Cenel-Augusto Perez, Natalia Perkova, Guy Perrier, Slav Petrov, Daria Petrova, Jason Phelan, Jussi Piitulainen, Tommi A Pirinen, Emily Piterler Barbara Plank, Thierry Poibeau, Larisa Ponomareva, Martin Popel, Lauma Pretkalnina, Sophie Prévost Prokopis Prokopidis, Adam Przepiörkowski Tiina Puolakainen,Sampo Pyysalo,Peng QiAndriela Raabis,Alexandre Rademaker,Taraka RamaLoganathan Ramasamy Carlos Ramisch,Fam Rashel Mohammad Sadegh Rasooli,Vinit Ravishankar Livy Real,Petru Rebeja,Siva Reddy Georg Rehm,Ivan Riabov,Michael Rießler,Erika Rimkute,Larissa Rinaldi,Laura Rituma,Luisa Rocha,Eirikur Rögnvaldsson Mykhailo Romanenko,Rudolf Rosa Valentin Rosca,Davide Rovati Olga Rudina Jack Rueter Kristjan Rúnarsson Shoval Sadde Pegah Safari Benoit Sagot,Aleksi Sahala Shadi Saleh Alessio Salomoni Tanja Samardzić Stephanie Samson Manuela Sanguinetti,Dage Sarg Baiba Saulite Yanin Sawanakunanon Kevin Scannell Salvatore Scarlata,Nathan Schneider Sebastian Schuster
|
| 335 |
+
|
| 336 |
+
Djame Seddah, Wolfgang Seeker, Mojgan Seraji, Mo Shen, Atsuko Shimada, Hiroyuki Shirasu, Muh Shohibussirri, Dmitry Sichinava, Einar Freyr Sigursson, Aline Silveira, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simko, Maria Simková, Kiril Simov, Maria Skachedubova, Aaron Smith, Isabela Soares-Bastos, Carolyn Spadine, Steinhör Steingrímsson, Antonio Stella, Milan Straka, Emmett Strickland, Jana Strnadová, Alane Suhr, Yogi Lesmana Sulestio, Umut Sulubacak, Shingo Suzuki, Zsolt Szántó, Dima Taji, Yuta Takahashi, Fabio Tamburini, Mary Ann C. Tan, Takaaki Tanaka, Samson Tella, Isabelle Tellier, Guillaume Thomas, Liisi Torga, Marsida Toska, Trond Trosterud, Anna Trukhina, Reut Tsarfaty, Utku Türk, Francis Tyers, Sumire Uematsu, Roman Untilov, Zdenka Uresová, Larraitz Uria, Hans Uszkoreit, Andrius Utka, Sowmya Vajjala, Daniel van Niekerk, Gertjan van Noord, Viktor Varga, Eric Villemonte de la Clergerie, Veronika Vincze, Aya Wakasa, Joel C. Wallenberg, Lars Wallin, Abigail Walsh, Jing Xian Wang, Jonathan North Washington, Maximilan Wendt, Paul Widmer, Seyi Williams, Mats Wiren, Christian Wittern, Tsegay Woldemariam, Tak-sum Wong, Alina Wróblewska, Mary Yako, Kayo Yamashita, Naoki Yamazaki, Chunxiao Yan, Koichi Yasuoka, Marat M. Yavrumyan, Zhuoran Yu, Zdenek Zabokrtský, Shorouq Zahra, Amir Zeldes, Hanzhi Zhu, and Anna Zhuravleva. 2023. Universal dependencies 2.13. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (UFAL), Faculty of Mathematics and Physics, Charles University.
|
| 337 |
+
|
| 338 |
+
Bo Zheng, Li Dong, Shaohan Huang, Saksham Singhal, Wanxiang Che, Ting Liu, Xia Song, and Furu Wei. 2021. Allocating large vocabulary capacity for cross-lingual language model pre-training. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3203-3215, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
|
| 339 |
+
|
| 340 |
+
# A Why do KL Divergence and MLM Loss work?
|
| 341 |
+
|
| 342 |
+
Regarding the effect of KL-Divergence: in contrast to MAD-X, UniBridge incorporates a newly trained embedding. As a result, the representation entering the task adapter reflects the characteristics of the target language more strongly than when only the MAD-X language adapter is used. This introduces a misalignment between the language adapter and the task adapter, since each represents a different language.
|
| 343 |
+
|
| 344 |
+
In our research, we therefore use KL-Divergence to regularize the representation so that it is shared across both the source and target languages (Kim et al., 2021). KL-Divergence also requires less computation than alternatives such as layer-wise regularization or optimal transport.
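The regularizer can be sketched as follows. This is a minimal illustration, assuming the KL term is computed between the token-level output distributions of the adapted model and a frozen reference model and added to the MLM loss with the weight β from Table 8; the function and variable names below are ours, not UniBridge's actual implementation.

```python
import torch.nn.functional as F

def kl_regularized_loss(student_logits, teacher_logits, mlm_loss, kl_weight=1.0):
    """Add a KL term that keeps the adapted model's output distribution close
    to a reference (shared) distribution. kl_weight corresponds to the beta
    weight in Table 8 (set to 1.0). Illustrative sketch only."""
    # KL(teacher || student), averaged over the batch; the teacher is frozen.
    kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits.detach(), dim=-1),
        reduction="batchmean",
    )
    return mlm_loss + kl_weight * kl
```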
|
| 345 |
+
|
| 346 |
+
Regarding the effect of the MLM loss: it is highly effective for training encoder-only LMs because it encourages the model to learn rich contextual representations of language and provides an effective pre-training signal.
|
| 347 |
+
|
| 348 |
+
In MLM, a portion of the input tokens is randomly masked, and the model is trained to predict these masked tokens from the context provided by the surrounding tokens. This forces the model to learn contextual representations of words and phrases in the target language (Sinha et al., 2021). Moreover, by randomly masking tokens, MLM introduces noise into the training process, which can prevent overfitting and encourage the model to learn more generalizable features of the data.
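The masking procedure can be sketched as below. This is a simplified, illustrative implementation assuming the standard BERT-style 80/10/10 replacement split on top of the 15% masking probability from Table 8; UniBridge's exact corruption scheme may differ.

```python
import torch

def mask_tokens(input_ids, mask_token_id, vocab_size, mlm_prob=0.15):
    """BERT-style random masking for MLM (simplified sketch).
    Modifies input_ids in place and returns (corrupted_inputs, labels)."""
    labels = input_ids.clone()
    masked = torch.bernoulli(torch.full(labels.shape, mlm_prob)).bool()
    labels[~masked] = -100  # loss is computed only on masked positions

    # 80% of masked positions -> [MASK], 10% -> random token, 10% -> unchanged.
    to_mask = torch.bernoulli(torch.full(labels.shape, 0.8)).bool() & masked
    input_ids[to_mask] = mask_token_id
    to_random = torch.bernoulli(torch.full(labels.shape, 0.5)).bool() & masked & ~to_mask
    input_ids[to_random] = torch.randint(vocab_size, labels.shape)[to_random]
    return input_ids, labels
```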
|
| 349 |
+
|
| 350 |
+
# B Similar tokens between the pre-trained LM and the UniBridge-specific tokenizer
|
| 351 |
+
|
| 352 |
+
We illustrate the tokens shared between the pre-trained LM and the UniBridge-specific tokenizer in Figure 4.
|
| 353 |
+
|
| 354 |
+

|
| 355 |
+
Figure 4: Illustrations of subwords exhibiting similarity in both mBERT and XLM-R.
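The overlap shown in Figure 4 can be quantified by simply intersecting the two vocabularies. The snippet below is an illustrative sketch using Hugging Face tokenizers; the path of the UniBridge-specific tokenizer is a hypothetical placeholder, not a released artifact.

```python
from transformers import AutoTokenizer

# Pre-trained LM tokenizer (e.g., XLM-R) vs. the target-language tokenizer
# trained by UniBridge. The second path is a hypothetical placeholder.
pretrained_tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
target_tok = AutoTokenizer.from_pretrained("path/to/unibridge-target-tokenizer")

shared = set(pretrained_tok.get_vocab()) & set(target_tok.get_vocab())
ratio = 100 * len(shared) / len(target_tok.get_vocab())
print(f"{len(shared)} shared subwords ({ratio:.1f}% of the target vocabulary)")
```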
|
| 356 |
+
|
| 357 |
+
# C Computational resources and hyperparameters for training and inference
|
| 358 |
+
|
| 359 |
+
All experiments are conducted on T4 machines. Training UniBridge's language adapter takes approximately 2.5 hours on a single T4 machine with a batch size of 16. Separately, training UniBridge's task adapter takes about 0.5 hours per source language on a single T4 machine with a batch size of 16.
|
| 360 |
+
|
| 361 |
+
We present the hyperparameters for training and inference for UniBridge and all baseline configurations in Tables 6, 7, 8, 9, 10, and 11.
|
| 362 |
+
|
| 363 |
+
# D More experiments and ablation study
|
| 364 |
+
|
| 365 |
+
# D.1 Performance of UniBridge on the NLI task
|
| 366 |
+
|
| 367 |
+
We report the performance of UniBridge on the AmericasNLI dataset in Tables 12 and 13.
|
| 368 |
+
|
| 369 |
+
<table><tr><td>Hyperparameter</td><td>Value</td></tr><tr><td>Initial vocab size $v_i$</td><td>7000</td></tr><tr><td>Maximum vocab size $v_m$</td><td>60000</td></tr><tr><td>Increase step of vocab size $\delta_v$</td><td>1000</td></tr><tr><td>Threshold for stopping the algorithm $\varepsilon_v$</td><td>5.0</td></tr></table>
|
| 370 |
+
|
| 371 |
+
Table 6: The hyperparameters for the vocabulary size search process.
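Based only on the hyperparameters in Table 6, the search has the following outer-loop shape. This is a hedged sketch: the scoring criterion compared against the threshold $\varepsilon_v$ is defined by Algorithm 1 in the main paper, so `train_tokenizer` and `score_vocab` are abstract placeholders here rather than real functions.

```python
def search_vocab_size(corpus, train_tokenizer, score_vocab,
                      v_init=7000, v_max=60000, step=1000, eps=5.0):
    """Grow the vocabulary in fixed increments and stop once the change in the
    scoring criterion drops below eps (values from Table 6). train_tokenizer
    and score_vocab stand in for the paper's Algorithm 1."""
    size = v_init
    prev_score = score_vocab(train_tokenizer(corpus, vocab_size=size))
    while size + step <= v_max:
        candidate = size + step
        score = score_vocab(train_tokenizer(corpus, vocab_size=candidate))
        if abs(score - prev_score) < eps:  # improvement too small: stop searching
            break
        size, prev_score = candidate, score
    return size
```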
|
| 372 |
+
|
| 373 |
+
<table><tr><td>Hyperparameter</td><td>Value</td></tr><tr><td>Static embedding model</td><td>FastText</td></tr><tr><td>Static embedding dimension</td><td>300</td></tr><tr><td>Number of training epochs</td><td>3</td></tr></table>
|
| 374 |
+
|
| 375 |
+
Table 7: The hyperparameters for the embedding initialization stage.
|
| 376 |
+
|
| 377 |
+
<table><tr><td>Hyperparameter</td><td>Value</td></tr><tr><td>Invertible adapter reduction factor</td><td>2</td></tr><tr><td>Language adapter reduction factor</td><td>2</td></tr><tr><td>KL divergence weight β</td><td>1.0</td></tr><tr><td>Masking probability</td><td>0.15</td></tr><tr><td>Number of training epochs</td><td>50</td></tr><tr><td>Batch size</td><td>32</td></tr><tr><td>Learning rate</td><td>{5e-5, 2e-4, 5e-4, 1e-3}</td></tr></table>
|
| 378 |
+
|
| 379 |
+
Table 8: The hyperparameters for language adaptation training. The adapter dimension is determined by dividing the Transformer's hidden size by the reduction factor. Each language occupies a different proportion of the pre-trained LM's knowledge; therefore, a different learning rate per language is required for optimal language adaptation.
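For reference, the Table 8 settings map onto a configuration like the one below. This is an illustrative dictionary with names of our own choosing, not an actual UniBridge config file; it only makes the reduction-factor relation from the caption explicit.

```python
# Hypothetical configuration mirroring Table 8 (key names are ours).
lang_adaptation_config = {
    "invertible_adapter_reduction_factor": 2,
    "language_adapter_reduction_factor": 2,   # adapter dim = hidden_size / 2
    "kl_weight_beta": 1.0,
    "mlm_mask_prob": 0.15,
    "epochs": 50,
    "batch_size": 32,
    # one value per target language is chosen from this grid
    "learning_rate_grid": [5e-5, 2e-4, 5e-4, 1e-3],
}

hidden_size = 768  # mBERT / XLM-R base hidden size
adapter_dim = hidden_size // lang_adaptation_config["language_adapter_reduction_factor"]
print(f"language adapter bottleneck dimension: {adapter_dim}")  # -> 384
```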
|
| 380 |
+
|
| 381 |
+
<table><tr><td>Hyperparameter</td><td>Value</td></tr><tr><td>Task adapter reduction factor</td><td>16</td></tr><tr><td>Number of training epochs</td><td>11</td></tr><tr><td>Batch size</td><td>32</td></tr><tr><td>Learning rate</td><td>{5e-4, 1e-3}</td></tr></table>
|
| 382 |
+
|
| 383 |
+
Table 9: The hyperparameters for task adaptation. Each language occupies a different proportion of the pre-trained LM's knowledge; therefore, a different learning rate per language is required for optimal adaptation.
|
| 384 |
+
|
| 385 |
+
<table><tr><td>Hyperparameter</td><td>Value</td></tr><tr><td>Number of epochs trained</td><td>10</td></tr><tr><td>Batch size</td><td>32</td></tr><tr><td>Learning rate</td><td>1e-5</td></tr></table>
|
| 386 |
+
|
| 387 |
+
Table 10: The configuration for fine-tuning the pre-trained LMs on the source downstream task and for zero-shot transfer to the target language.
|
| 388 |
+
|
| 389 |
+
Table 11: The configuration for multi-source inference.
|
| 390 |
+
|
| 391 |
+
# D.2 UniBridge vs. Lang2Vec
|
| 392 |
+
|
| 393 |
+
Our method's reliance on parallel data enables it to capture typological similarities as well as syntactic and semantic relationships between languages. By utilizing parallel sentences, we can develop more nuanced representations that reflect the intricacies of language structures and meanings.
|
| 394 |
+
|
| 395 |
+
Moreover, the quality and coverage of typological databases can be inconsistent. Although these databases are available for many languages, they often lack completeness and accuracy. In contrast, parallel corpora, while more challenging to obtain, provide direct evidence of language similarities and differences in real-world contexts. Additionally, our method outperforms Lang2Vec in the experiments conducted on the WikiANN dataset, as shown in Tables 14 and 15.
|
| 396 |
+
|
| 397 |
+
# D.3 Detailed performance of each factor
|
| 398 |
+
|
| 399 |
+
We present the detailed performance of UniBridge on 14 languages on the NER task when removing the contributed components in Tables 16 and 17.
|
| 400 |
+
|
| 401 |
+
# D.4 Effectiveness of KL Divergence
|
| 402 |
+
|
| 403 |
+
In contrast to MAD-X, UniBridge employs a newly trained embedding, so the representation entering the task adapter captures the characteristics of the target language better than when only the MAD-X language adapter is used. To ensure that the output representation is shared between the source and target languages, we use KL-Divergence. This approach is less computationally intensive than methods such as layer-wise regularization or optimal transport (Section 6).
|
| 404 |
+
|
| 405 |
+
To assess the effectiveness of KL-Divergence within UniBridge, we conducted additional tests on an alternative language model, mBART, using the WikiANN dataset. The results in Table 18 indicate that KL-Divergence contributes significantly to the overall performance of UniBridge.
|
| 406 |
+
|
| 407 |
+
# D.5 UniBridge vs. EXTEND
|
| 408 |
+
|
| 409 |
+
We report the results on the NER task comparing UniBridge with the EXTEND method in Table 19.
|
| 410 |
+
|
| 411 |
+
|
| 412 |
+
|
| 413 |
+
# D.6 Vocabulary search results of UniBridge
|
| 414 |
+
|
| 415 |
+
We report the vocabulary size found by Algorithm 1 for each language in Table 20.
|
| 416 |
+
|
| 417 |
+
# D.7 UniBridge vs. FOCUS
|
| 418 |
+
|
| 419 |
+
We compared UniBridge initialization and FOCUS initialization. For UniBridge, the whole pipeline is kept the same as discussed in the paper. For FOCUS (Dobler and de Melo, 2023), we replace step 2 discussed in Section 2.2 with the FOCUS initialization pipeline, while the other steps are kept the same as in UniBridge. We report the results on the NER task in Tables 21 and 22.
|
| 420 |
+
|
| 421 |
+
UniBridge surpasses FOCUS on 10 out of 14 languages with the XLM-R backbone and on 9 out of 14 languages with the mBERT backbone on WikiANN. Among these languages, approximately $10 - 15\%$ of the tokens exhibit semantic alignment. We theorize that UniBridge's advantage lies in its ability to leverage these aligned tokens, which facilitates smoother and quicker convergence during the subsequent MLM training phase compared to FOCUS initialization.
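As a rough illustration of how the aligned tokens can be exploited, the sketch below copies the pre-trained embedding rows for subwords shared between the two vocabularies and initializes the remaining rows randomly. It is a simplified stand-in for the embedding initialization in step 2 of Section 2.2 (which additionally relies on FastText-based static embeddings, see Table 7), not the actual UniBridge or FOCUS procedure.

```python
import torch

def init_target_embeddings(pretrained_emb, pretrained_vocab, target_vocab, std=0.02):
    """Build an embedding matrix for the target tokenizer: shared subwords reuse
    the pre-trained vectors, the rest are drawn from N(0, std^2). A simplified
    stand-in for UniBridge's initialization, not the full procedure."""
    dim = pretrained_emb.size(1)
    new_emb = torch.normal(0.0, std, size=(len(target_vocab), dim))
    copied = 0
    for token, idx in target_vocab.items():
        if token in pretrained_vocab:
            new_emb[idx] = pretrained_emb[pretrained_vocab[token]]
            copied += 1
    print(f"copied {copied} shared-token embeddings out of {len(target_vocab)}")
    return new_emb
```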
|
| 422 |
+
|
| 423 |
+
# D.8 UniBridge with Large Language Models
|
| 424 |
+
|
| 425 |
+
To evaluate UniBridge with large language models (LLMs) beyond encoder-only architectures, we extended our experiments to include mGPT and mBART, alongside XLM-R and mBERT. This broader assessment demonstrates UniBridge's versatility and effectiveness across different model types. The results, presented in Table 23, highlight the strengths of UniBridge in diverse settings.
|
| 426 |
+
|
| 427 |
+
<table><tr><td></td><td>aym</td><td>bzd</td><td>cni</td><td>grn</td><td>hch</td><td>nah</td><td>oto</td><td>quy</td><td>shp</td><td>tar</td><td>Average</td></tr><tr><td>XLM-R</td><td>36.26</td><td>38.53</td><td>36.4</td><td>37.33</td><td>37.33</td><td>39.43</td><td>36.89</td><td>37.6</td><td>35.86</td><td>34.66</td><td>37.03</td></tr><tr><td>MAD-X (XLM-R)</td><td>39.46</td><td>36.8</td><td>38.93</td><td>39.73</td><td>35.86</td><td>40.78</td><td>33.42</td><td>37.46</td><td>39.06</td><td>36.53</td><td>37.80</td></tr><tr><td>UniBridge (XLM-R)</td><td>52.13</td><td>36.8</td><td>40.26</td><td>59.59</td><td>35.86</td><td>46.88</td><td>42.38</td><td>59.86</td><td>35.6</td><td>36.4</td><td>44.58</td></tr></table>
|
| 428 |
+
|
| 429 |
+
Table 12: Accuracy scores for every setup with XLM-R as the backbone across 10 diverse languages of AmericasNLI. We highlight in bold the highest accuracy and underline the second highest for each target language and backbone model.
|
| 430 |
+
|
| 431 |
+
<table><tr><td></td><td>aym</td><td>bzd</td><td>cni</td><td>grn</td><td>hch</td><td>nah</td><td>oto</td><td>quy</td><td>shp</td><td>tar</td><td>Average</td></tr><tr><td>mBERT</td><td>33.33</td><td>33.33</td><td>33.33</td><td>33.33</td><td>33.33</td><td>33.33</td><td>33.2</td><td>33.28</td><td>33.33</td><td>33.33</td><td>33.31</td></tr><tr><td>MAD-X (mBERT)</td><td>33.06</td><td>33.33</td><td>34.4</td><td>33.46</td><td>34</td><td>33.33</td><td>33.42</td><td>33.73</td><td>32.93</td><td>33.2</td><td>33.54</td></tr><tr><td>UniBridge (mBERT)</td><td>35.73</td><td>33.33</td><td>33.33</td><td>37.46</td><td>34.4</td><td>36.31</td><td>33.42</td><td>36.66</td><td>34.4</td><td>34.4</td><td>34.94</td></tr></table>
|
| 432 |
+
|
| 433 |
+
Table 13: Accuracy scores for every setup with mBERT as the backbone across 10 diverse languages of AmericasNLI. We highlight in bold the highest accuracy and underline the second highest for each target language and backbone model.
|
| 434 |
+
|
| 435 |
+
<table><tr><td></td><td>amh</td><td>ang</td><td>cdo</td><td>crh</td><td>eml</td><td>frr</td><td>khm</td><td>kan</td><td>lij</td><td>pbt</td><td>san</td><td>snd</td><td>sin</td><td>som</td><td>Average</td></tr><tr><td>lang2vec (XLM-R)</td><td>30.19</td><td>45.51</td><td>36.28</td><td>45.8</td><td>32.23</td><td>41.72</td><td>37.75</td><td>47.45</td><td>31.67</td><td>40.05†</td><td>49.79</td><td>44.84</td><td>48.95</td><td>42.17</td><td>38.03</td></tr><tr><td>UniBridge (XLM-R)</td><td>49.6</td><td>43.24</td><td>42.91</td><td>46.03</td><td>40.15</td><td>50.67</td><td>42.67</td><td>48.72</td><td>45.16</td><td>46.09</td><td>29.74</td><td>51.32</td><td>52.86</td><td>54.22</td><td>45.96</td></tr></table>
|
| 436 |
+
|
| 437 |
+
Table 14: Comparison between Lang2Vec and UniBridge using the XLM-R backbone on the WikiANN dataset. The highest F1 score for each target language is highlighted in bold. The average value for each row is given in the last column. †: The language Pashto (pbt) does not exist in the lang2vec dictionary, so we assign it the average weight, i.e., 0.2 for every source language.
|
| 438 |
+
|
| 439 |
+
<table><tr><td></td><td>amh</td><td>ang</td><td>cdo</td><td>crh</td><td>eml</td><td>frr</td><td>khm</td><td>kan</td><td>lij</td><td>pbt</td><td>san</td><td>snd</td><td>sin</td><td>som</td><td>Average</td></tr><tr><td>lang2vec (mBERT)</td><td>8.76</td><td>26.85</td><td>32.25</td><td>34.0</td><td>21.23</td><td>16.0</td><td>26.85</td><td>37.38</td><td>27.51</td><td>28.32†</td><td>12.12</td><td>11.34</td><td>12.57</td><td>35.42</td><td>23.61</td></tr><tr><td>UniBridge (mBERT)</td><td>15.46</td><td>53.28</td><td>30.42</td><td>45.67</td><td>36.15</td><td>54.72</td><td>19.49</td><td>44.07</td><td>45.49</td><td>39.33</td><td>20.55</td><td>42.36</td><td>13.68</td><td>62.28</td><td>37.35</td></tr></table>
|
| 440 |
+
|
| 441 |
+
Table 15: Comparison between Lang2Vec and UniBridge using the mBERT backbone on the WikiANN dataset. The highest F1 score for each target language is highlighted in bold. The average value for each row is given in the last column. †: The language Pashto (pbt) does not exist in the lang2vec dictionary, so we assign it the average weight, i.e., 0.2 for every source language.
|
| 442 |
+
|
| 443 |
+
<table><tr><td></td><td>am</td><td>ang</td><td>cdo</td><td>crh</td><td>eml</td><td>frr</td><td>km</td><td>kn</td><td>lij</td><td>ps</td><td>sa</td><td>sd</td><td>si</td><td>so</td></tr><tr><td>UniBridge (XLM-R)</td><td>49.6</td><td>43.24</td><td>42.91</td><td>46.03</td><td>40.15</td><td>50.67</td><td>42.67</td><td>48.72</td><td>45.16</td><td>46.09</td><td>29.74</td><td>51.32</td><td>52.86</td><td>54.22</td></tr><tr><td>- KL Divergence</td><td>47.66</td><td>45.61</td><td>47.1</td><td>45.91</td><td>37.78</td><td>58.1</td><td>40</td><td>50</td><td>43.92</td><td>49.61</td><td>31.91</td><td>50.74</td><td>51.1</td><td>56.79</td></tr><tr><td>- Embedding initialization</td><td>6.64</td><td>1.23</td><td>0.59</td><td>2.43</td><td>1.56</td><td>2.49</td><td>15.53</td><td>11.32</td><td>12.32</td><td>2.32</td><td>1.15</td><td>15.38</td><td>2.87</td><td>15.95</td></tr><tr><td>- Vocab size searching</td><td>36.13</td><td>57.14</td><td>47.37</td><td>47.54</td><td>42.91</td><td>54.95</td><td>39.65</td><td>45.76</td><td>42.75</td><td>46.44</td><td>28.06</td><td>47.35</td><td>47.51</td><td>53.11</td></tr><tr><td>- Multi-source transfer</td><td>40.58</td><td>56.13</td><td>36.68</td><td>45.49</td><td>35.96</td><td>57.14</td><td>32.67</td><td>45.53</td><td>39.23</td><td>33.77</td><td>22.93</td><td>39.27</td><td>47.37</td><td>55.97</td></tr></table>
|
| 444 |
+
|
| 445 |
+
Table 16: The detailed performance of UniBridge based on backbone model XLM-R when removing contributed components.
|
| 446 |
+
|
| 447 |
+
<table><tr><td></td><td>am</td><td>ang</td><td>cdo</td><td>crh</td><td>eml</td><td>frr</td><td>km</td><td>kn</td><td>lij</td><td>ps</td><td>sa</td><td>sd</td><td>si</td><td>so</td></tr><tr><td>UniBridge (mBERT)</td><td>15.46</td><td>53.28</td><td>30.42</td><td>45.67</td><td>36.15</td><td>54.72</td><td>19.49</td><td>44.07</td><td>45.49</td><td>39.33</td><td>20.55</td><td>42.36</td><td>13.68</td><td>62.28</td></tr><tr><td>- KL Divergence</td><td>2.42</td><td>52.07</td><td>25.52</td><td>42.97</td><td>32.7</td><td>55.56</td><td>19.29</td><td>40.69</td><td>46.15</td><td>40.15</td><td>16.43</td><td>40.14</td><td>11.58</td><td>61.26</td></tr><tr><td>- Embedding initialization</td><td>6.58</td><td>3.59</td><td>23.53</td><td>12.35</td><td>9.84</td><td>27.75</td><td>2.34</td><td>11.06</td><td>13.04</td><td>7.61</td><td>1.54</td><td>1.1</td><td>1.2</td><td>21.36</td></tr><tr><td>- Vocab size searching</td><td>0.15</td><td>43.82</td><td>17.78</td><td>48.8</td><td>32.74</td><td>47.58</td><td>16.74</td><td>33.61</td><td>34.92</td><td>29.06</td><td>23.32</td><td>35.99</td><td>15.18</td><td>48.51</td></tr><tr><td>- Multi-source transfer</td><td>25.08</td><td>47.21</td><td>15.68</td><td>30.72</td><td>19.86</td><td>41.95</td><td>9.33</td><td>29.37</td><td>29.86</td><td>21.73</td><td>11.26</td><td>22.84</td><td>10.69</td><td>43.66</td></tr></table>
|
| 448 |
+
|
| 449 |
+
Table 17: The detailed performance of UniBridge based on backbone model mBERT when removing contributed components.
|
| 450 |
+
|
| 451 |
+
<table><tr><td></td><td>amh</td><td>ang</td><td>cdo</td><td>crh</td><td>eml</td><td>frr</td><td>khm</td><td>kan</td><td>lij</td><td>pbt</td><td>san</td><td>snd</td><td>sin</td><td>som</td><td>Average</td></tr><tr><td>mBART</td><td>19.19</td><td>15.47</td><td>10.46</td><td>9.1</td><td>14.92</td><td>18.86</td><td>13.16</td><td>15.52</td><td>6.22</td><td>11.45</td><td>19.31</td><td>16.68</td><td>13.21</td><td>14.04</td><td>14.11</td></tr><tr><td>MAD-X (mBART)</td><td>67.03</td><td>51.24</td><td>56.57</td><td>29.73</td><td>39.13</td><td>51.5</td><td>28.79</td><td>43.52</td><td>49.72</td><td>45.25</td><td>51.85</td><td>58.64</td><td>60.33</td><td>51.2</td><td>48.89</td></tr><tr><td>UniBridge without KL-Divergence (mBART)</td><td>53.76</td><td>60.7</td><td>62.4</td><td>65.67</td><td>66.27</td><td>56.08</td><td>33.43</td><td>39.13</td><td>42.67</td><td>33.13</td><td>59.52</td><td>45.91</td><td>58.9</td><td>52.02</td><td>52.11</td></tr><tr><td>UniBridge (mBART)</td><td>69.15</td><td>67.5</td><td>67.89</td><td>61.91</td><td>67.14</td><td>57.07</td><td>41.74</td><td>48.37</td><td>44.1</td><td>52.47</td><td>60.99</td><td>59.12</td><td>59.29</td><td>54.48</td><td>57.94</td></tr></table>
|
| 452 |
+
|
| 453 |
+
Table 18: Various configurations with the mBART backbone on the WikiANN dataset. We highlight in bold the highest F1 score and underline the second highest of each target language for each backbone model.
|
| 454 |
+
|
| 455 |
+
<table><tr><td></td><td>amh</td><td>ang</td><td>cdo</td><td>crh</td><td>eml</td><td>frr</td><td>khm</td><td>kan</td><td>lij</td><td>pbt</td><td>san</td><td>snd</td><td>sin</td><td>som</td><td>Average</td></tr><tr><td>mBERT</td><td>12.87</td><td>52.24</td><td>19.76</td><td>47.81</td><td>39.71</td><td>51.3</td><td>18.46</td><td>42.86</td><td>45</td><td>25.86</td><td>30.71</td><td>13.61</td><td>2.79</td><td>46.15</td><td>32.08</td></tr><tr><td>EXTEND (mBERT)</td><td>10.25</td><td>60.66</td><td>26.95</td><td>42.58</td><td>30.42</td><td>29.71</td><td>22.04</td><td>35.41</td><td>48.63</td><td>21.16</td><td>14.27</td><td>49.94</td><td>11.45</td><td>50.78</td><td>32.45</td></tr><tr><td>UniBridge (mBERT)</td><td>15.46</td><td>53.28</td><td>30.42</td><td>45.67</td><td>36.15</td><td>54.72</td><td>19.49</td><td>44.07</td><td>45.49</td><td>39.33</td><td>20.55</td><td>42.36</td><td>13.68</td><td>62.28</td><td>37.35</td></tr></table>
|
| 456 |
+
|
| 457 |
+
Table 19: The results of the F1 Score for every setup with mBERT as a backbone showcased in 14 diverse languages of WikiANN. We highlight in bold the highest F1 score and underline the second highest of each target language for each backbone model.
|
| 458 |
+
|
| 459 |
+
<table><tr><td></td><td>am</td><td>ang</td><td>cdo</td><td>crh</td><td>eml</td><td>frr</td><td>khm</td><td>kan</td><td>lij</td><td>pbt</td><td>san</td><td>snd</td><td>sin</td><td>som</td></tr><tr><td>UniBridge</td><td>19k</td><td>19k</td><td>10k</td><td>8k</td><td>8k</td><td>18k</td><td>51k</td><td>27k</td><td>20k</td><td>16k</td><td>31k</td><td>14k</td><td>20k</td><td>26k</td></tr></table>
|
| 460 |
+
|
| 461 |
+
Table 20: The approximate vocabulary sizes of each target language.
|
| 462 |
+
|
| 463 |
+
<table><tr><td></td><td>am</td><td>ang</td><td>cdo</td><td>crh</td><td>eml</td><td>frr</td><td>khm</td><td>kan</td><td>lij</td><td>pbt</td><td>san</td><td>snd</td><td>sin</td><td>som</td></tr><tr><td>FOCUS (XLM-R)</td><td>45.72</td><td>40.13</td><td>43.03</td><td>46.03</td><td>41.53</td><td>45.24</td><td>35.12</td><td>45.85</td><td>40.09</td><td>43.24</td><td>30.15</td><td>50.67</td><td>51.22</td><td>46.89</td></tr><tr><td>UniBridge (XLM-R)</td><td>49.6</td><td>43.24</td><td>42.91</td><td>46.03</td><td>40.15</td><td>50.67</td><td>42.67</td><td>48.72</td><td>45.16</td><td>46.09</td><td>29.74</td><td>51.32</td><td>52.86</td><td>54.22</td></tr></table>
|
| 464 |
+
|
| 465 |
+
Table 21: FOCUS initialization and UniBridge with XLM-R backbone on WikiANN.
|
| 466 |
+
|
| 467 |
+
<table><tr><td></td><td>am</td><td>ang</td><td>cdo</td><td>crh</td><td>eml</td><td>frr</td><td>khm</td><td>kan</td><td>lij</td><td>pbt</td><td>san</td><td>snd</td><td>sin</td><td>som</td></tr><tr><td>FOCUS (mBERT)</td><td>17.85</td><td>48.82</td><td>20.25</td><td>46.05</td><td>45.01</td><td>55</td><td>15.83</td><td>43.16</td><td>43.17</td><td>36.55</td><td>21.24</td><td>40.83</td><td>11.07</td><td>55.85</td></tr><tr><td>UniBridge (mBERT)</td><td>15.46</td><td>53.28</td><td>30.42</td><td>45.67</td><td>36.15</td><td>54.72</td><td>19.49</td><td>44.07</td><td>45.49</td><td>39.33</td><td>20.55</td><td>42.36</td><td>13.68</td><td>62.28</td></tr></table>
|
| 468 |
+
|
| 469 |
+
Table 22: FOCUS initialization and UniBridge with mBERT backbone on WikiANN.
|
| 470 |
+
|
| 471 |
+
<table><tr><td></td><td>amh</td><td>ang</td><td>cdo</td><td>crh</td><td>eml</td><td>frr</td><td>khm</td><td>kan</td><td>lij</td><td>pbt</td><td>san</td><td>snd</td><td>sin</td><td>som</td><td>Average</td></tr><tr><td>mGPT</td><td>7.49</td><td>18.29</td><td>17.44</td><td>9.41</td><td>9.79</td><td>5.14</td><td>7.85</td><td>7.28</td><td>14.14</td><td>6.35</td><td>18.53</td><td>11.28</td><td>12.69</td><td>18.66</td><td>11.74</td></tr><tr><td>MAD-X (mGPT)</td><td>63.1</td><td>51.15</td><td>62.28</td><td>55.6</td><td>43.79</td><td>60.55</td><td>60.32</td><td>55.75</td><td>63.11</td><td>50.03</td><td>61.62</td><td>56.66</td><td>64.27</td><td>61.36</td><td>57.83</td></tr><tr><td>UniBridge (mGPT)</td><td>61.09</td><td>60.32</td><td>65.13</td><td>63.73</td><td>54.06</td><td>69.43</td><td>62.35</td><td>55.38</td><td>62.24</td><td>54.28</td><td>66.07</td><td>54.51</td><td>66.42</td><td>70.29</td><td>61.81</td></tr><tr><td>mBART</td><td>19.19</td><td>15.47</td><td>10.46</td><td>9.1</td><td>14.92</td><td>18.86</td><td>13.16</td><td>15.52</td><td>6.22</td><td>11.45</td><td>19.31</td><td>16.68</td><td>13.21</td><td>14.04</td><td>14.11</td></tr><tr><td>MAD-X (mBART)</td><td>67.03</td><td>51.24</td><td>56.57</td><td>29.73</td><td>39.13</td><td>51.5</td><td>28.79</td><td>43.52</td><td>49.72</td><td>45.25</td><td>51.85</td><td>58.64</td><td>60.33</td><td>51.2</td><td>48.89</td></tr><tr><td>UniBridge (mBART)</td><td>69.15</td><td>67.5</td><td>67.89</td><td>61.91</td><td>67.14</td><td>57.07</td><td>41.74</td><td>48.37</td><td>44.1</td><td>52.47</td><td>60.99</td><td>59.12</td><td>59.29</td><td>54.48</td><td>57.94</td></tr></table>
|
| 472 |
+
|
| 473 |
+
Table 23: Various configurations with the mGPT and mBART backbone on the WikiANN dataset. We highlight in bold the highest F1 score and underline the second highest of each target language for each backbone model.
|
2024/UniBridge_ A Unified Approach to Cross-Lingual Transfer Learning for Low-Resource Languages/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:7ff0a8a2622fa9755b2ad85124919847f9763a96aae56edb5ec89abe0308b41b
|
| 3 |
+
size 785629
|
2024/UniBridge_ A Unified Approach to Cross-Lingual Transfer Learning for Low-Resource Languages/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/UniCoder_ Scaling Code Large Language Model via Universal Code/6625637b-146d-4630-b379-0fc1c52e41b0_content_list.json
ADDED
|
@@ -0,0 +1,1758 @@
|
| 1 |
+
[
|
| 2 |
+
{
|
| 3 |
+
"type": "text",
|
| 4 |
+
"text": "UNICODER : Scaling Code Large Language Model via Universal Code",
|
| 5 |
+
"text_level": 1,
|
| 6 |
+
"bbox": [
|
| 7 |
+
115,
|
| 8 |
+
89,
|
| 9 |
+
882,
|
| 10 |
+
112
|
| 11 |
+
],
|
| 12 |
+
"page_idx": 0
|
| 13 |
+
},
|
| 14 |
+
{
|
| 15 |
+
"type": "text",
|
| 16 |
+
"text": "Tao Sun $^{1}$ , Linzheng Chai $^{1*}$ , Jian Yang $^{1*†}$ , Yuwei Yin $^{2}$ , Hongcheng Guo $^{1}$ , Jiaheng Liu $^{1}$ , Bing Wang $^{1}$ , Liquun Yang $^{1}$ , Zhoujun Li $^{1}$",
|
| 17 |
+
"bbox": [
|
| 18 |
+
191,
|
| 19 |
+
127,
|
| 20 |
+
815,
|
| 21 |
+
161
|
| 22 |
+
],
|
| 23 |
+
"page_idx": 0
|
| 24 |
+
},
|
| 25 |
+
{
|
| 26 |
+
"type": "text",
|
| 27 |
+
"text": "$^{1}$ State Key Laboratory of Complex & Critical Software Environment, Beihang University; $^{2}$ Department of Computer Science, University of British Columbia {buaast, challenging, jiaya, hongchengguo} @ buaa.edu.cn; {liujiaheng, bingwang, lqyang, lizj} @ buaa.edu.cn; yuweiyin@cs.ubc.ca",
|
| 28 |
+
"bbox": [
|
| 29 |
+
136,
|
| 30 |
+
161,
|
| 31 |
+
863,
|
| 32 |
+
229
|
| 33 |
+
],
|
| 34 |
+
"page_idx": 0
|
| 35 |
+
},
|
| 36 |
+
{
|
| 37 |
+
"type": "text",
|
| 38 |
+
"text": "Abstract",
|
| 39 |
+
"text_level": 1,
|
| 40 |
+
"bbox": [
|
| 41 |
+
260,
|
| 42 |
+
252,
|
| 43 |
+
339,
|
| 44 |
+
266
|
| 45 |
+
],
|
| 46 |
+
"page_idx": 0
|
| 47 |
+
},
|
| 48 |
+
{
|
| 49 |
+
"type": "text",
|
| 50 |
+
"text": "Intermediate reasoning or acting steps have successfully improved large language models (LLMs) for handling various downstream natural language processing (NLP) tasks. When applying LLMs for code generation, recent works mainly focus on directing the models to articulate intermediate natural-language reasoning steps, as in chain-of-thought (CoT) prompting, and then output code with the natural language or other structured intermediate steps. However, such output is not suitable for code translation or generation tasks since the standard CoT has different logical structures and forms of expression with the code. In this work, we introduce the universal code (UniCode) as the intermediate representation. It is a description of algorithm steps using a mix of conventions of programming languages, such as assignment operator, conditional operator, and loop. Hence, we collect an instruction dataset UNICODER-INSTRUCT to train our model UNICODER on multi-task learning objectives. UNICODER-INSTRUCT comprises natural-language questions, code solutions, and the corresponding universal code. The alignment between the intermediate universal code representation and the final code solution significantly improves the quality of the generated code. The experimental results demonstrate that UNICODER with the universal code significantly outperforms the previous prompting methods by a large margin, showcasing the effectiveness of the structural clues in pseudo-code.<sup>1</sup>",
|
| 51 |
+
"bbox": [
|
| 52 |
+
141,
|
| 53 |
+
281,
|
| 54 |
+
460,
|
| 55 |
+
750
|
| 56 |
+
],
|
| 57 |
+
"page_idx": 0
|
| 58 |
+
},
|
| 59 |
+
{
|
| 60 |
+
"type": "text",
|
| 61 |
+
"text": "1 Introduction",
|
| 62 |
+
"text_level": 1,
|
| 63 |
+
"bbox": [
|
| 64 |
+
114,
|
| 65 |
+
764,
|
| 66 |
+
258,
|
| 67 |
+
778
|
| 68 |
+
],
|
| 69 |
+
"page_idx": 0
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"type": "text",
|
| 73 |
+
"text": "The field of code translation and generation has advanced significantly (Szafraniec et al., 2023; Yan et al., 2023) with the advent of code-specific large language models (LLMs). Code LLMs, such as StarCoder (Li et al., 2023b) and Code",
|
| 74 |
+
"bbox": [
|
| 75 |
+
112,
|
| 76 |
+
790,
|
| 77 |
+
489,
|
| 78 |
+
870
|
| 79 |
+
],
|
| 80 |
+
"page_idx": 0
|
| 81 |
+
},
|
| 82 |
+
{
|
| 83 |
+
"type": "image",
|
| 84 |
+
"img_path": "images/9b58023e2630706a642eb8db55316074fbfae84310834e07c3d886ef843eb298.jpg",
|
| 85 |
+
"image_caption": [
|
| 86 |
+
"Figure 1: An example of UNICODER. The Code LLM solves the code generation question by \"translating\" the pseudocode description (Universal Code) into executable code of the target programming language."
|
| 87 |
+
],
|
| 88 |
+
"image_footnote": [],
|
| 89 |
+
"bbox": [
|
| 90 |
+
512,
|
| 91 |
+
250,
|
| 92 |
+
877,
|
| 93 |
+
502
|
| 94 |
+
],
|
| 95 |
+
"page_idx": 0
|
| 96 |
+
},
|
| 97 |
+
{
|
| 98 |
+
"type": "text",
|
| 99 |
+
"text": "Llama (Rozière et al., 2023), are capable of generating executable code by analyzing natural language prompts. Chain-of-thought (CoT) prompting (Wei et al., 2022b) has emerged as the leading technique in enhancing LLMs, where the intermediate steps provide a structured pathway from the problem statement to the solution, effectively mirroring the human problem-solving process.",
|
| 100 |
+
"bbox": [
|
| 101 |
+
507,
|
| 102 |
+
593,
|
| 103 |
+
882,
|
| 104 |
+
721
|
| 105 |
+
],
|
| 106 |
+
"page_idx": 0
|
| 107 |
+
},
|
| 108 |
+
{
|
| 109 |
+
"type": "text",
|
| 110 |
+
"text": "Considering the low accuracy of CoT in coder generation, structure CoT (SCoT) (Li et al., 2023a) is proposed to minimize the gap between the intermediate steps and the generated code. More intuitively, using a universal code as the intermediate representation to handle multiple programming languages (PL) is promising. Here, universal code is a blueprint for implementing an algorithm, which helps to make the design of algorithms logically clear and readily comprehensible. Moreover, it is universal across different programming languages (PL-agnostic) since it typically does not follow spe",
|
| 111 |
+
"bbox": [
|
| 112 |
+
507,
|
| 113 |
+
726,
|
| 114 |
+
884,
|
| 115 |
+
919
|
| 116 |
+
],
|
| 117 |
+
"page_idx": 0
|
| 118 |
+
},
|
| 119 |
+
{
|
| 120 |
+
"type": "page_footnote",
|
| 121 |
+
"text": "*Equal contribution.",
|
| 122 |
+
"bbox": [
|
| 123 |
+
139,
|
| 124 |
+
879,
|
| 125 |
+
270,
|
| 126 |
+
891
|
| 127 |
+
],
|
| 128 |
+
"page_idx": 0
|
| 129 |
+
},
|
| 130 |
+
{
|
| 131 |
+
"type": "page_footnote",
|
| 132 |
+
"text": "† Corresponding Author.",
|
| 133 |
+
"bbox": [
|
| 134 |
+
139,
|
| 135 |
+
891,
|
| 136 |
+
292,
|
| 137 |
+
904
|
| 138 |
+
],
|
| 139 |
+
"page_idx": 0
|
| 140 |
+
},
|
| 141 |
+
{
|
| 142 |
+
"type": "page_footnote",
|
| 143 |
+
"text": "<https://github.com/ASC8384/UniCoder>",
|
| 144 |
+
"bbox": [
|
| 145 |
+
136,
|
| 146 |
+
904,
|
| 147 |
+
462,
|
| 148 |
+
917
|
| 149 |
+
],
|
| 150 |
+
"page_idx": 0
|
| 151 |
+
},
|
| 152 |
+
{
|
| 153 |
+
"type": "page_number",
|
| 154 |
+
"text": "1812",
|
| 155 |
+
"bbox": [
|
| 156 |
+
480,
|
| 157 |
+
927,
|
| 158 |
+
519,
|
| 159 |
+
940
|
| 160 |
+
],
|
| 161 |
+
"page_idx": 0
|
| 162 |
+
},
|
| 163 |
+
{
|
| 164 |
+
"type": "footer",
|
| 165 |
+
"text": "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1812-1824 August 11-16, 2024 ©2024 Association for Computational Linguistics",
|
| 166 |
+
"bbox": [
|
| 167 |
+
89,
|
| 168 |
+
945,
|
| 169 |
+
907,
|
| 170 |
+
973
|
| 171 |
+
],
|
| 172 |
+
"page_idx": 0
|
| 173 |
+
},
|
| 174 |
+
{
|
| 175 |
+
"type": "text",
|
| 176 |
+
"text": "cific syntax and omits execution details. Yet, how the universal code is used for code translation and generation in multilingual scenarios remains underexplored.",
|
| 177 |
+
"bbox": [
|
| 178 |
+
112,
|
| 179 |
+
84,
|
| 180 |
+
489,
|
| 181 |
+
148
|
| 182 |
+
],
|
| 183 |
+
"page_idx": 1
|
| 184 |
+
},
|
| 185 |
+
{
|
| 186 |
+
"type": "text",
|
| 187 |
+
"text": "In this work, we scale up the code LLMs to support multiple programming languages via the universal code (UniCode), which is used as an efficient and language-independent intermediate representation of the key algorithm principles. Specifically, we first define UniCode by specifying grammar rules and providing paradigms, followed by prompting GPT-4 (OpenAI, 2023) to create an instruction dataset UNICODER-INSTRUCT comprising natural-language questions, code solutions, and the corresponding universal code, as shown in Figure 1. Then, the UNICODER model is built by performing instruction tuning (Wei et al., 2022a) on multi-task learning objectives, including zero-shot question-answer generation (question $\\rightarrow$ code), question-universal-code generation (question $\\rightarrow$ UniCode $\\rightarrow$ code), universal-code-solution translation (UniCode $\\rightarrow$ code), and Universal-code-of-Thought (UoT) objectives. In UoT, the model is required to generate the universal code before the executable code.",
|
| 188 |
+
"bbox": [
|
| 189 |
+
115,
|
| 190 |
+
149,
|
| 191 |
+
489,
|
| 192 |
+
485
|
| 193 |
+
],
|
| 194 |
+
"page_idx": 1
|
| 195 |
+
},
|
| 196 |
+
{
|
| 197 |
+
"type": "text",
|
| 198 |
+
"text": "UNICODER is evaluated on the Python benchmark (Humaneval (Chen et al., 2021) and MBPP (Austin et al., 2021)) and the extended multilingual benchmark MultiPL-E. The results demonstrate that UNICODER consistently achieves state-of-the-art performance across all languages, notably surpassing the previous baselines. Furthermore, the ablation study verifies the efficacy of the proposed method, and extra discussions provide insights into the effect of our method. The contributions are summarized as follows:",
|
| 199 |
+
"bbox": [
|
| 200 |
+
112,
|
| 201 |
+
486,
|
| 202 |
+
489,
|
| 203 |
+
663
|
| 204 |
+
],
|
| 205 |
+
"page_idx": 1
|
| 206 |
+
},
|
| 207 |
+
{
|
| 208 |
+
"type": "list",
|
| 209 |
+
"sub_type": "text",
|
| 210 |
+
"list_items": [
|
| 211 |
+
"- We introduce the universal code UniCode, which is agnostic to programming languages, allowing LLMs to grasp the essence of algorithms step by step. In addition, the instruction dataset UNICODER-INSTRUCT is collected and provided for follow-up research.",
|
| 212 |
+
"- We propose UNICODER, a code generation method that uses multi-task learning objectives to fine-tune the code LLMs with the help of UniCode. The objectives include question-answer generation (QA), question-universal-code generation (QP), universal-code-answer translation (PA), and Universal-code-of-Thought (UoT).",
|
| 213 |
+
"- As extensive experiments show, our method"
|
| 214 |
+
],
|
| 215 |
+
"bbox": [
|
| 216 |
+
134,
|
| 217 |
+
669,
|
| 218 |
+
489,
|
| 219 |
+
917
|
| 220 |
+
],
|
| 221 |
+
"page_idx": 1
|
| 222 |
+
},
|
| 223 |
+
{
|
| 224 |
+
"type": "image",
|
| 225 |
+
"img_path": "images/7feb259c45a02f637d4acfe6cbadc8b05a136cbbe2d081fae3d4ce86a28ec150.jpg",
|
| 226 |
+
"image_caption": [
|
| 227 |
+
"Figure 2: Definition of the universal code."
|
| 228 |
+
],
|
| 229 |
+
"image_footnote": [],
|
| 230 |
+
"bbox": [
|
| 231 |
+
544,
|
| 232 |
+
85,
|
| 233 |
+
845,
|
| 234 |
+
422
|
| 235 |
+
],
|
| 236 |
+
"page_idx": 1
|
| 237 |
+
},
|
| 238 |
+
{
|
| 239 |
+
"type": "text",
|
| 240 |
+
"text": "UNICODER consistently outperforms the previous baselines on different benchmarks, including HumanEval, MBPP, and MultiPL-E. To further verify the effectiveness of the universal code, we propose UNICODER-BENCH to test the capabilities of code LLMs.",
|
| 241 |
+
"bbox": [
|
| 242 |
+
544,
|
| 243 |
+
470,
|
| 244 |
+
884,
|
| 245 |
+
565
|
| 246 |
+
],
|
| 247 |
+
"page_idx": 1
|
| 248 |
+
},
|
| 249 |
+
{
|
| 250 |
+
"type": "text",
|
| 251 |
+
"text": "2 UNICODER-INSTRUCT",
|
| 252 |
+
"text_level": 1,
|
| 253 |
+
"bbox": [
|
| 254 |
+
507,
|
| 255 |
+
577,
|
| 256 |
+
746,
|
| 257 |
+
593
|
| 258 |
+
],
|
| 259 |
+
"page_idx": 1
|
| 260 |
+
},
|
| 261 |
+
{
|
| 262 |
+
"type": "text",
|
| 263 |
+
"text": "Definition of Universal Code. Universal code is designed for expressing algorithms in a form that is easily understood by humans, blending programming language syntax with natural language descriptions and mathematical notation to outline the steps of an algorithm without the complexity of full coding details. It omits machine-specific implementations to focus on the core logic, making it a popular choice for documentation in educational materials and the preliminary design phases of software development. By abstracting away from the intricacies of actual code, pseudocode facilitates clear communication of algorithmic concepts across various programming environments. The definition of the universal code, as shown in Figure 2, is based on the following principles:",
|
| 264 |
+
"bbox": [
|
| 265 |
+
505,
|
| 266 |
+
602,
|
| 267 |
+
884,
|
| 268 |
+
860
|
| 269 |
+
],
|
| 270 |
+
"page_idx": 1
|
| 271 |
+
},
|
| 272 |
+
{
|
| 273 |
+
"type": "text",
|
| 274 |
+
"text": "- Comments: Provide explanations and context for code segments, making it easier for others to understand the intent and functionality.",
|
| 275 |
+
"bbox": [
|
| 276 |
+
531,
|
| 277 |
+
871,
|
| 278 |
+
882,
|
| 279 |
+
917
|
| 280 |
+
],
|
| 281 |
+
"page_idx": 1
|
| 282 |
+
},
|
| 283 |
+
{
|
| 284 |
+
"type": "page_number",
|
| 285 |
+
"text": "1813",
|
| 286 |
+
"bbox": [
|
| 287 |
+
482,
|
| 288 |
+
927,
|
| 289 |
+
519,
|
| 290 |
+
940
|
| 291 |
+
],
|
| 292 |
+
"page_idx": 1
|
| 293 |
+
},
|
| 294 |
+
{
|
| 295 |
+
"type": "text",
|
| 296 |
+
"text": "{Definition of Universal Code}",
|
| 297 |
+
"bbox": [
|
| 298 |
+
151,
|
| 299 |
+
86,
|
| 300 |
+
361,
|
| 301 |
+
102
|
| 302 |
+
],
|
| 303 |
+
"page_idx": 2
|
| 304 |
+
},
|
| 305 |
+
{
|
| 306 |
+
"type": "text",
|
| 307 |
+
"text": "Question",
|
| 308 |
+
"bbox": [
|
| 309 |
+
154,
|
| 310 |
+
103,
|
| 311 |
+
243,
|
| 312 |
+
115
|
| 313 |
+
],
|
| 314 |
+
"page_idx": 2
|
| 315 |
+
},
|
| 316 |
+
{
|
| 317 |
+
"type": "text",
|
| 318 |
+
"text": "{Question}",
|
| 319 |
+
"bbox": [
|
| 320 |
+
154,
|
| 321 |
+
116,
|
| 322 |
+
228,
|
| 323 |
+
129
|
| 324 |
+
],
|
| 325 |
+
"page_idx": 2
|
| 326 |
+
},
|
| 327 |
+
{
|
| 328 |
+
"type": "text",
|
| 329 |
+
"text": "Response",
|
| 330 |
+
"bbox": [
|
| 331 |
+
154,
|
| 332 |
+
130,
|
| 333 |
+
247,
|
| 334 |
+
142
|
| 335 |
+
],
|
| 336 |
+
"page_idx": 2
|
| 337 |
+
},
|
| 338 |
+
{
|
| 339 |
+
"type": "text",
|
| 340 |
+
"text": "{Answer}",
|
| 341 |
+
"bbox": [
|
| 342 |
+
154,
|
| 343 |
+
143,
|
| 344 |
+
220,
|
| 345 |
+
156
|
| 346 |
+
],
|
| 347 |
+
"page_idx": 2
|
| 348 |
+
},
|
| 349 |
+
{
|
| 350 |
+
"type": "text",
|
| 351 |
+
"text": "Your Task",
|
| 352 |
+
"bbox": [
|
| 353 |
+
154,
|
| 354 |
+
158,
|
| 355 |
+
250,
|
| 356 |
+
168
|
| 357 |
+
],
|
| 358 |
+
"page_idx": 2
|
| 359 |
+
},
|
| 360 |
+
{
|
| 361 |
+
"type": "text",
|
| 362 |
+
"text": "Please combine the above Question and",
|
| 363 |
+
"bbox": [
|
| 364 |
+
152,
|
| 365 |
+
171,
|
| 366 |
+
416,
|
| 367 |
+
183
|
| 368 |
+
],
|
| 369 |
+
"page_idx": 2
|
| 370 |
+
},
|
| 371 |
+
{
|
| 372 |
+
"type": "text",
|
| 373 |
+
"text": "Response to comply with the pseudocode standard to write the corresponding",
|
| 374 |
+
"bbox": [
|
| 375 |
+
152,
|
| 376 |
+
185,
|
| 377 |
+
426,
|
| 378 |
+
212
|
| 379 |
+
],
|
| 380 |
+
"page_idx": 2
|
| 381 |
+
},
|
| 382 |
+
{
|
| 383 |
+
"type": "text",
|
| 384 |
+
"text": "pseudocode of solution. Adopt a meticulous methodology, breaking down the generation process into manageable steps. Just output the generated pseudocode for the solution and do not include the Question and Response in the output.",
|
| 385 |
+
"bbox": [
|
| 386 |
+
152,
|
| 387 |
+
212,
|
| 388 |
+
443,
|
| 389 |
+
294
|
| 390 |
+
],
|
| 391 |
+
"page_idx": 2
|
| 392 |
+
},
|
| 393 |
+
{
|
| 394 |
+
"type": "text",
|
| 395 |
+
"text": "The output format is as follows, Use \\*\\*pseudocode to put the generated pseudocode in markdown quotes:",
|
| 396 |
+
"bbox": [
|
| 397 |
+
151,
|
| 398 |
+
307,
|
| 399 |
+
394,
|
| 400 |
+
350
|
| 401 |
+
],
|
| 402 |
+
"page_idx": 2
|
| 403 |
+
},
|
| 404 |
+
{
|
| 405 |
+
"type": "text",
|
| 406 |
+
"text": "```\n```\n```\n```\n```\n```\n``",
|
| 407 |
+
"bbox": [
|
| 408 |
+
152,
|
| 409 |
+
363,
|
| 410 |
+
250,
|
| 411 |
+
376
|
| 412 |
+
],
|
| 413 |
+
"page_idx": 2
|
| 414 |
+
},
|
| 415 |
+
{
|
| 416 |
+
"type": "text",
|
| 417 |
+
"text": "{Offers a pseudocode version of the solution.}",
|
| 418 |
+
"bbox": [
|
| 419 |
+
154,
|
| 420 |
+
378,
|
| 421 |
+
400,
|
| 422 |
+
405
|
| 423 |
+
],
|
| 424 |
+
"page_idx": 2
|
| 425 |
+
},
|
| 426 |
+
{
|
| 427 |
+
"type": "text",
|
| 428 |
+
"text": "1",
|
| 429 |
+
"bbox": [
|
| 430 |
+
154,
|
| 431 |
+
406,
|
| 432 |
+
171,
|
| 433 |
+
411
|
| 434 |
+
],
|
| 435 |
+
"page_idx": 2
|
| 436 |
+
},
|
| 437 |
+
{
|
| 438 |
+
"type": "text",
|
| 439 |
+
"text": "Figure 3: Prompt of generating UniCode.",
|
| 440 |
+
"bbox": [
|
| 441 |
+
152,
|
| 442 |
+
436,
|
| 443 |
+
445,
|
| 444 |
+
451
|
| 445 |
+
],
|
| 446 |
+
"page_idx": 2
|
| 447 |
+
},
|
| 448 |
+
{
|
| 449 |
+
"type": "list",
|
| 450 |
+
"sub_type": "text",
|
| 451 |
+
"list_items": [
|
| 452 |
+
"- Variables: Enhance code readability and maintainability by using meaningful names that convey the purpose of the variables without relying on data type specifications.",
|
| 453 |
+
"- Input/Output: Simplify the interaction with data entering and leaving the system, ensuring these operations are clear and easy to trace.",
|
| 454 |
+
"- Conditionals: Clarify decision-making processes within the code by using structured and indented conditional statements that define clear execution paths.",
|
| 455 |
+
"- Loops: Facilitate the repetition of code blocks in a controlled manner, with clearly defined start and end conditions, making the iterative processes understandable.",
|
| 456 |
+
"- Functions/Procedures: Increase modularity and reusability by naming functions and procedures descriptively, and by using parameters effectively to encapsulate functionality.",
|
| 457 |
+
"- Formatting: Improve the overall visual organization of the code by applying consistent indentation, which helps in delineating hierarchical structures and logical groupings within the code."
|
| 458 |
+
],
|
| 459 |
+
"bbox": [
|
| 460 |
+
136,
|
| 461 |
+
468,
|
| 462 |
+
487,
|
| 463 |
+
876
|
| 464 |
+
],
|
| 465 |
+
"page_idx": 2
|
| 466 |
+
},
|
| 467 |
+
{
|
| 468 |
+
"type": "text",
|
| 469 |
+
"text": "Construction From Instruction Dataset. For a programming language $L$ , given the existing code",
|
| 470 |
+
"bbox": [
|
| 471 |
+
112,
|
| 472 |
+
887,
|
| 473 |
+
487,
|
| 474 |
+
919
|
| 475 |
+
],
|
| 476 |
+
"page_idx": 2
|
| 477 |
+
},
|
| 478 |
+
{
|
| 479 |
+
"type": "text",
|
| 480 |
+
"text": "instruction pair $(q_{\\alpha},a_{\\alpha})\\in D_s^L$ , where $q_{\\alpha}$ and $a_{\\alpha}$ are question and answer from $D_{s}^{L}$ , we create the universal code instruction dataset $D_{u_{\\alpha}}^{L}$ by prompting LLMs to generate the universal code $p_{\\alpha}$ and then add $(q_{\\alpha},a_{\\alpha},p_{\\alpha})$ into $D_{u_{\\alpha}}^{L}$ . Figure 2 shows the definition of the universal code and Figure 3 is the prompt for LLMs to generate UniCode. {Definition of Universal Code}, {Question}, and {Answer} denote the slots for the definition of the universal code $p_{\\alpha}$ , the question of the instruction data $q_{\\alpha}$ , and the answer of the instruction $a_{\\alpha}$ , respectively. Given $K$ different programming languages $L_{all} = \\{L_k\\}_{k = 1}^K$ , the multilingual programming instruction dataset with the universal code $D_{u_{\\alpha}} = \\{D_{u_{\\alpha}}^{L_k}\\}_{k = 1}^K$ is created for supervised fine-tuning (SFT) (Ouyang et al., 2022). In this work, we adopt the open-source instruction dataset.",
|
| 481 |
+
"bbox": [
|
| 482 |
+
507,
|
| 483 |
+
83,
|
| 484 |
+
884,
|
| 485 |
+
357
|
| 486 |
+
],
|
| 487 |
+
"page_idx": 2
|
| 488 |
+
},
|
| 489 |
+
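The block above describes prompting an LLM with the Figure 3 template to turn an existing (question, answer) pair into a (q, a, p) triplet. The sketch below is illustrative only and not the authors' released pipeline; the prompt wording and the `call_llm` hook are assumptions.

```python
# Illustrative sketch: build one D_u_alpha triplet from an instruction pair.
# `call_llm` stands for any text-in/text-out LLM API (an assumption).
UNICODE_PROMPT = """{definition_of_universal_code}

Question: {question}
Response: {answer}

Please combine the above Question and Response to comply with the pseudocode
standard and write the corresponding pseudocode of the solution. Just output
the generated pseudocode and do not include the Question and Response."""

def build_unicode_instruction(question: str, answer: str, definition: str, call_llm) -> dict:
    """Prompt an LLM for UniCode p and return the (q, a, p) training triplet."""
    prompt = UNICODE_PROMPT.format(
        definition_of_universal_code=definition, question=question, answer=answer
    )
    unicode_p = call_llm(prompt)  # GPT-4 plays this role in the paper's setup
    return {"question": question, "answer": answer, "unicode": unicode_p}
```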
{
|
| 490 |
+
"type": "text",
|
| 491 |
+
"text": "Construction From Code Snippets. For the unsupervised data (code snippets) widely existing on many websites (e.g., GitHub), we also construct the instruction dataset with the universal code from raw code snippets. Specifically, we ask the LLM to generate the question $q_{\\beta}$ and the corresponding code answer $a_{\\beta}$ pair based on the original code snippet $c$ using the prompt \"Please generate the self-contained question and answer based on the given code snippet\". Then, we generate UniCode $p_{\\beta}$ and construct $(q_{\\beta}, a_{\\beta}, p_{\\beta})$ triplets the same way as in Paragraph 2. In addition, an LLM scorer is applied to filter out the low-quality $(q_{\\beta}, a_{\\beta}, p_{\\beta})$ triplets. Therefore, given raw code snippets of different programming languages $L_{k} \\in \\{L_{k}\\}_{k=1}^{K}$ , we can construct instruction dataset with the universal code $D_{u_{\\beta}} = \\{D_{u_{\\beta}}^{L_{k}}\\}_{k=1}^{K}$ directly from such unsupervised data. Finally, we combine these two instruction datasets to obtain $D_{u} = D_{u_{\\alpha}} \\cup D_{u_{\\beta}}$ , where $D_{u}^{L_{k}} = D_{u_{\\alpha}^{L_{k}}} \\cup D_{u_{\\beta}^{L_{k}}}$ for each program language $L_{k} \\in L_{all}$ .",
|
| 492 |
+
"bbox": [
|
| 493 |
+
507,
|
| 494 |
+
369,
|
| 495 |
+
884,
|
| 496 |
+
714
|
| 497 |
+
],
|
| 498 |
+
"page_idx": 2
|
| 499 |
+
},
|
| 500 |
+
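For the unsupervised branch described above, a minimal sketch of snippet-to-triplet construction with an LLM quality filter is shown below; `call_llm`, `score_triplet`, and the score threshold are placeholders for illustration, not the paper's exact implementation.

```python
# Illustrative sketch of the D_u_beta branch: raw snippet c -> (q, a, p), filtered by an LLM scorer.
def triplet_from_snippet(snippet: str, call_llm, score_triplet, min_score: float = 7.0):
    # Step 1: ask the LLM for a self-contained question/answer pair grounded in the snippet.
    qa_prompt = (
        "Please generate the self-contained question and answer "
        f"based on the given code snippet:\n{snippet}"
    )
    question, answer = call_llm(qa_prompt)  # assumed to return a (question, answer) pair

    # Step 2: generate UniCode for the pair, as in the instruction-dataset branch.
    unicode_p = call_llm(f"Write the universal-code pseudocode for:\n{question}\n{answer}")

    # Step 3: drop low-quality triplets using the LLM scorer.
    if score_triplet(question, answer, unicode_p) < min_score:
        return None
    return {"question": question, "answer": answer, "unicode": unicode_p}
```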
{
|
| 501 |
+
"type": "text",
|
| 502 |
+
"text": "Evaluation Task for Universal Code. To test the capability of the LLMs in generating UniCode from questions and translating UniCode into answers, we design a code reconstruction task for evaluation. Given the code snippet $c$ , we require the LLM to generate UniCode $p$ and then translate it into the code $c'$ . The evaluation metric is not the similarity between $c$ and $c'$ but whether the restored code $c'$ can pass the test cases. We expand the HumanEval and MBPP datasets to create our benchmark UNICODER-BENCH comprising 164 HumanEval samples and 500 MBPP test samples.",
|
| 503 |
+
"bbox": [
|
| 504 |
+
507,
|
| 505 |
+
726,
|
| 506 |
+
882,
|
| 507 |
+
919
|
| 508 |
+
],
|
| 509 |
+
"page_idx": 2
|
| 510 |
+
},
|
| 511 |
+
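A compact sketch of the code-reconstruction evaluation described above follows; the model and test-harness hooks (`generate_unicode`, `translate_unicode`, `run_tests`) are assumed interfaces, and scoring is pass/fail on the benchmark test cases rather than similarity to the original code.

```python
# Illustrative sketch: c -> UniCode p -> c', scored by whether c' passes the tests.
def reconstruction_pass(code: str, tests: list, generate_unicode, translate_unicode, run_tests) -> bool:
    unicode_p = generate_unicode(code)       # encode the snippet into UniCode
    restored = translate_unicode(unicode_p)  # translate UniCode back into executable code
    return run_tests(restored, tests)        # True only if all test cases pass
```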
{
|
| 512 |
+
"type": "page_number",
|
| 513 |
+
"text": "1814",
|
| 514 |
+
"bbox": [
|
| 515 |
+
482,
|
| 516 |
+
928,
|
| 517 |
+
519,
|
| 518 |
+
940
|
| 519 |
+
],
|
| 520 |
+
"page_idx": 2
|
| 521 |
+
},
|
| 522 |
+
{
|
| 523 |
+
"type": "image",
|
| 524 |
+
"img_path": "images/8b08779aedb06089c03e2f110ca17aa5a947b06209c7121797b0c7a2f38f5765.jpg",
|
| 525 |
+
"image_caption": [
|
| 526 |
+
"Figure 4: Overview of UNICODER. (a) The function of universal code UniCode; (b) The framework of our method UNICODER. With the universal code as the intermediate representation, our proposed framework can support code generation, code translation, and code summarization. In (a), the LLM encodes the code snippets of multilingual programming languages or the problem description questions into UniCode. Then UniCode is translated into the target output, i.e., the executable code of multilingual programming languages with a descriptive code summarization. In (b), we first ask the LLM to generate UniCode with few-shot prompts. In the second stage, the instruction dataset, containing questions, answers, and UniCode, is fed into the code LLM for fine-tuning."
|
| 527 |
+
],
|
| 528 |
+
"image_footnote": [],
|
| 529 |
+
"bbox": [
|
| 530 |
+
114,
|
| 531 |
+
80,
|
| 532 |
+
670,
|
| 533 |
+
309
|
| 534 |
+
],
|
| 535 |
+
"page_idx": 3
|
| 536 |
+
},
|
| 537 |
+
{
|
| 538 |
+
"type": "image",
|
| 539 |
+
"img_path": "images/e4bd35c1cef07ff1156a28426b0be784a093d7cb94281765ba3693db6ce5fbfe.jpg",
|
| 540 |
+
"image_caption": [],
|
| 541 |
+
"image_footnote": [],
|
| 542 |
+
"bbox": [
|
| 543 |
+
678,
|
| 544 |
+
80,
|
| 545 |
+
873,
|
| 546 |
+
309
|
| 547 |
+
],
|
| 548 |
+
"page_idx": 3
|
| 549 |
+
},
|
| 550 |
+
{
|
| 551 |
+
"type": "text",
|
| 552 |
+
"text": "3 UNICODER",
|
| 553 |
+
"text_level": 1,
|
| 554 |
+
"bbox": [
|
| 555 |
+
112,
|
| 556 |
+
441,
|
| 557 |
+
250,
|
| 558 |
+
455
|
| 559 |
+
],
|
| 560 |
+
"page_idx": 3
|
| 561 |
+
},
|
| 562 |
+
{
|
| 563 |
+
"type": "text",
|
| 564 |
+
"text": "3.1 Model Overview",
|
| 565 |
+
"text_level": 1,
|
| 566 |
+
"bbox": [
|
| 567 |
+
112,
|
| 568 |
+
470,
|
| 569 |
+
290,
|
| 570 |
+
483
|
| 571 |
+
],
|
| 572 |
+
"page_idx": 3
|
| 573 |
+
},
|
| 574 |
+
{
|
| 575 |
+
"type": "text",
|
| 576 |
+
"text": "In Figure 4, we first define the concept of the universal code with the essential components and then prompt the LLM to generate UniCode $p$ based on the existing instruction data (questions $q$ and answers $a$ ) and the raw code snippets $c$ . UniCode is regarded as the intermediate representation for different tasks, including code generation, code translation, and code summarization. Our proposed model UNICODER is trained on the instruction dataset $D_{u}$ with the multilingual objectives to fully unleash the potential of UniCode.",
|
| 577 |
+
"bbox": [
|
| 578 |
+
112,
|
| 579 |
+
493,
|
| 580 |
+
487,
|
| 581 |
+
670
|
| 582 |
+
],
|
| 583 |
+
"page_idx": 3
|
| 584 |
+
},
|
| 585 |
+
{
|
| 586 |
+
"type": "text",
|
| 587 |
+
"text": "3.2 Code LLM with Universal Code",
|
| 588 |
+
"text_level": 1,
|
| 589 |
+
"bbox": [
|
| 590 |
+
112,
|
| 591 |
+
684,
|
| 592 |
+
413,
|
| 593 |
+
699
|
| 594 |
+
],
|
| 595 |
+
"page_idx": 3
|
| 596 |
+
},
|
| 597 |
+
{
|
| 598 |
+
"type": "text",
|
| 599 |
+
"text": "Given the instructions dataset with $K$ multilingual programming languages $D_{u} = \\{D_{u}^{L_{k}}\\}_{k = 1}^{K}$ , the pretrained code LLM $\\mathcal{M}$ trained on $D_{u}$ can support Universal-code-of-Thought (UoT). It can be described as:",
|
| 600 |
+
"bbox": [
|
| 601 |
+
112,
|
| 602 |
+
708,
|
| 603 |
+
489,
|
| 604 |
+
788
|
| 605 |
+
],
|
| 606 |
+
"page_idx": 3
|
| 607 |
+
},
|
| 608 |
+
{
|
| 609 |
+
"type": "equation",
|
| 610 |
+
"text": "\n$$\nP (p, a | q) = P (p | q; \\mathcal {M}) P (a | q, p; \\mathcal {M}) \\tag {1}\n$$\n",
|
| 611 |
+
"text_format": "latex",
|
| 612 |
+
"bbox": [
|
| 613 |
+
159,
|
| 614 |
+
804,
|
| 615 |
+
487,
|
| 616 |
+
822
|
| 617 |
+
],
|
| 618 |
+
"page_idx": 3
|
| 619 |
+
},
|
| 620 |
+
{
|
| 621 |
+
"type": "text",
|
| 622 |
+
"text": "where $q$ (question) and $a$ (answer) are the instruction pair from $D_{u}$ . Given the question $q$ , the code LLM $\\mathcal{M}$ first generates UniCode $p$ and then outputs the final answer $a$ , where $p$ provides key algorithm ideas with natural language comments.",
|
| 623 |
+
"bbox": [
|
| 624 |
+
112,
|
| 625 |
+
839,
|
| 626 |
+
489,
|
| 627 |
+
917
|
| 628 |
+
],
|
| 629 |
+
"page_idx": 3
|
| 630 |
+
},
|
| 631 |
+
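The factorization in Eq. (1) corresponds to a two-stage generation at inference time. The snippet below is a minimal sketch of that flow, assuming a generic `model.generate` text interface; the prompt layout is illustrative, not the paper's exact template.

```python
# Minimal sketch of Universal-code-of-Thought (UoT) inference per Eq. (1):
# sample UniCode p ~ P(p | q; M), then the answer a ~ P(a | q, p; M).
def uot_generate(model, question: str) -> tuple:
    unicode_p = model.generate(f"Question:\n{question}\n\nUniCode:")
    answer = model.generate(
        f"Question:\n{question}\n\nUniCode:\n{unicode_p}\n\nAnswer (executable code):"
    )
    return unicode_p, answer
```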
{
|
| 632 |
+
"type": "text",
|
| 633 |
+
"text": "3.3 Multi-task Supervised Fine-tuning",
|
| 634 |
+
"text_level": 1,
|
| 635 |
+
"bbox": [
|
| 636 |
+
507,
|
| 637 |
+
442,
|
| 638 |
+
826,
|
| 639 |
+
458
|
| 640 |
+
],
|
| 641 |
+
"page_idx": 3
|
| 642 |
+
},
|
| 643 |
+
{
|
| 644 |
+
"type": "text",
|
| 645 |
+
"text": "To fully unleash the potential of the UniCode, we design multiple objectives to enhance the understanding and generation capability of code LLM.",
|
| 646 |
+
"bbox": [
|
| 647 |
+
507,
|
| 648 |
+
463,
|
| 649 |
+
882,
|
| 650 |
+
511
|
| 651 |
+
],
|
| 652 |
+
"page_idx": 3
|
| 653 |
+
},
|
| 654 |
+
{
|
| 655 |
+
"type": "text",
|
| 656 |
+
"text": "Multi-task Fine-tuning.",
|
| 657 |
+
"text_level": 1,
|
| 658 |
+
"bbox": [
|
| 659 |
+
507,
|
| 660 |
+
521,
|
| 661 |
+
697,
|
| 662 |
+
536
|
| 663 |
+
],
|
| 664 |
+
"page_idx": 3
|
| 665 |
+
},
|
| 666 |
+
{
|
| 667 |
+
"type": "equation",
|
| 668 |
+
"text": "\n$$\n\\mathcal {L} _ {a l l} = \\mathcal {L} _ {q a} + \\mathcal {L} _ {q p} + \\mathcal {L} _ {p a} + \\mathcal {L} _ {u o t} \\tag {2}\n$$\n",
|
| 669 |
+
"text_format": "latex",
|
| 670 |
+
"bbox": [
|
| 671 |
+
573,
|
| 672 |
+
550,
|
| 673 |
+
882,
|
| 674 |
+
567
|
| 675 |
+
],
|
| 676 |
+
"page_idx": 3
|
| 677 |
+
},
|
| 678 |
+
{
|
| 679 |
+
"type": "text",
|
| 680 |
+
"text": "where $\\mathcal{L}_{qa}$ is the question-answer generation objective, $\\mathcal{L}_{qp}$ is the question-universal-code generation objective, $\\mathcal{L}_{pa}$ is the universal-code-answer translation objective, and $\\mathcal{L}_{uot}$ is the Universal-code-of-Thought (UoT) objective.",
|
| 681 |
+
"bbox": [
|
| 682 |
+
507,
|
| 683 |
+
579,
|
| 684 |
+
882,
|
| 685 |
+
659
|
| 686 |
+
],
|
| 687 |
+
"page_idx": 3
|
| 688 |
+
},
|
| 689 |
+
{
|
| 690 |
+
"type": "text",
|
| 691 |
+
"text": "Here, we introduce all four training objectives. For all the following objectives, the multilingual corpora $D_{u} = \\{D_{u}^{L_{k}}\\}_{k = 1}^{K}$ are given. $\\mathcal{M}$ is the code LLM and $K$ is the number of programming languages.",
|
| 692 |
+
"bbox": [
|
| 693 |
+
507,
|
| 694 |
+
661,
|
| 695 |
+
882,
|
| 696 |
+
740
|
| 697 |
+
],
|
| 698 |
+
"page_idx": 3
|
| 699 |
+
},
|
| 700 |
+
{
|
| 701 |
+
"type": "text",
|
| 702 |
+
"text": "Question-Answer Objective. The training objective $\\mathcal{L}_{qa}$ of the standard instruction fine-tuning can be described as:",
|
| 703 |
+
"bbox": [
|
| 704 |
+
507,
|
| 705 |
+
750,
|
| 706 |
+
882,
|
| 707 |
+
797
|
| 708 |
+
],
|
| 709 |
+
"page_idx": 3
|
| 710 |
+
},
|
| 711 |
+
{
|
| 712 |
+
"type": "equation",
|
| 713 |
+
"text": "\n$$\n\\mathcal {L} _ {q a} = - \\sum_ {k = 1} ^ {K} \\mathbb {E} _ {q, a \\sim D _ {u} ^ {L_{k}}} [ \\log P (a | q; \\mathcal {M}) ] \\tag {3}\n$$\n",
|
| 714 |
+
"text_format": "latex",
|
| 715 |
+
"bbox": [
|
| 716 |
+
536,
|
| 717 |
+
807,
|
| 718 |
+
882,
|
| 719 |
+
850
|
| 720 |
+
],
|
| 721 |
+
"page_idx": 3
|
| 722 |
+
},
|
| 723 |
+
{
|
| 724 |
+
"type": "text",
|
| 725 |
+
"text": "where $q$ and $a$ are the question and answer pair.",
|
| 726 |
+
"bbox": [
|
| 727 |
+
507,
|
| 728 |
+
862,
|
| 729 |
+
860,
|
| 730 |
+
878
|
| 731 |
+
],
|
| 732 |
+
"page_idx": 3
|
| 733 |
+
},
|
| 734 |
+
{
|
| 735 |
+
"type": "text",
|
| 736 |
+
"text": "Question-Universal-Code Objective. The training objective $\\mathcal{L}_{qp}$ of the auxiliary universal code",
|
| 737 |
+
"bbox": [
|
| 738 |
+
507,
|
| 739 |
+
887,
|
| 740 |
+
882,
|
| 741 |
+
920
|
| 742 |
+
],
|
| 743 |
+
"page_idx": 3
|
| 744 |
+
},
|
| 745 |
+
{
|
| 746 |
+
"type": "page_number",
|
| 747 |
+
"text": "1815",
|
| 748 |
+
"bbox": [
|
| 749 |
+
482,
|
| 750 |
+
927,
|
| 751 |
+
519,
|
| 752 |
+
940
|
| 753 |
+
],
|
| 754 |
+
"page_idx": 3
|
| 755 |
+
},
|
| 756 |
+
{
|
| 757 |
+
"type": "text",
|
| 758 |
+
"text": "generation task can be described as:",
|
| 759 |
+
"bbox": [
|
| 760 |
+
114,
|
| 761 |
+
84,
|
| 762 |
+
381,
|
| 763 |
+
99
|
| 764 |
+
],
|
| 765 |
+
"page_idx": 4
|
| 766 |
+
},
|
| 767 |
+
{
|
| 768 |
+
"type": "equation",
|
| 769 |
+
"text": "\n$$\n\\mathcal {L} _ {q p} = - \\sum_ {k = 1} ^ {K} \\mathbb {E} _ {q, p \\sim D _ {L _ {k}}} [ \\log P (p | q; \\mathcal {M}) ] \\tag {4}\n$$\n",
|
| 770 |
+
"text_format": "latex",
|
| 771 |
+
"bbox": [
|
| 772 |
+
142,
|
| 773 |
+
114,
|
| 774 |
+
487,
|
| 775 |
+
156
|
| 776 |
+
],
|
| 777 |
+
"page_idx": 4
|
| 778 |
+
},
|
| 779 |
+
{
|
| 780 |
+
"type": "text",
|
| 781 |
+
"text": "where $q$ and $p$ are the question and UniCode.",
|
| 782 |
+
"bbox": [
|
| 783 |
+
114,
|
| 784 |
+
168,
|
| 785 |
+
458,
|
| 786 |
+
185
|
| 787 |
+
],
|
| 788 |
+
"page_idx": 4
|
| 789 |
+
},
|
| 790 |
+
{
|
| 791 |
+
"type": "text",
|
| 792 |
+
"text": "Universal-Code-Answer Objective. The training objective $\\mathcal{L}_{pa}$ of generating the executable code answer from UniCode can be described as:",
|
| 793 |
+
"bbox": [
|
| 794 |
+
112,
|
| 795 |
+
196,
|
| 796 |
+
489,
|
| 797 |
+
244
|
| 798 |
+
],
|
| 799 |
+
"page_idx": 4
|
| 800 |
+
},
|
| 801 |
+
{
|
| 802 |
+
"type": "equation",
|
| 803 |
+
"text": "\n$$\n\\mathcal {L} _ {p a} = - \\sum_ {k = 1} ^ {K} \\mathbb {E} _ {p, a \\sim D _ {L _ {k}}} [ \\log P (a | p; \\mathcal {M}) ] \\tag {5}\n$$\n",
|
| 804 |
+
"text_format": "latex",
|
| 805 |
+
"bbox": [
|
| 806 |
+
139,
|
| 807 |
+
255,
|
| 808 |
+
487,
|
| 809 |
+
298
|
| 810 |
+
],
|
| 811 |
+
"page_idx": 4
|
| 812 |
+
},
|
| 813 |
+
{
|
| 814 |
+
"type": "text",
|
| 815 |
+
"text": "where $p$ and $a$ are UniCode and the answer.",
|
| 816 |
+
"bbox": [
|
| 817 |
+
112,
|
| 818 |
+
311,
|
| 819 |
+
448,
|
| 820 |
+
326
|
| 821 |
+
],
|
| 822 |
+
"page_idx": 4
|
| 823 |
+
},
|
| 824 |
+
{
|
| 825 |
+
"type": "text",
|
| 826 |
+
"text": "Universal-Code-of-Thought Objective. The training objective $\\mathcal{L}_{\\text {uot }}$ of generating UniCode and then the executable code answer can be described as:",
|
| 827 |
+
"bbox": [
|
| 828 |
+
112,
|
| 829 |
+
337,
|
| 830 |
+
485,
|
| 831 |
+
401
|
| 832 |
+
],
|
| 833 |
+
"page_idx": 4
|
| 834 |
+
},
|
| 835 |
+
{
|
| 836 |
+
"type": "equation",
|
| 837 |
+
"text": "\n$$\n\\mathcal {L} _ {u o t} = - \\sum_ {k = 1} ^ {K} \\mathbb {E} _ {q, p, a \\sim D _ {L _ {k}}} [ \\log P (p, a | q; \\mathcal {M}) ] \\tag {6}\n$$\n",
|
| 838 |
+
"text_format": "latex",
|
| 839 |
+
"bbox": [
|
| 840 |
+
122,
|
| 841 |
+
412,
|
| 842 |
+
487,
|
| 843 |
+
454
|
| 844 |
+
],
|
| 845 |
+
"page_idx": 4
|
| 846 |
+
},
|
| 847 |
+
{
|
| 848 |
+
"type": "text",
|
| 849 |
+
"text": "where $q$ , $a$ , and $p$ are the question, answer, and UniCode, respectively.",
|
| 850 |
+
"bbox": [
|
| 851 |
+
112,
|
| 852 |
+
468,
|
| 853 |
+
485,
|
| 854 |
+
500
|
| 855 |
+
],
|
| 856 |
+
"page_idx": 4
|
| 857 |
+
},
|
| 858 |
+
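The four objectives in Eqs. (3)-(6) sum into the total loss of Eq. (2). Below is a minimal sketch of one training step under that combined objective, assuming an `lm_loss(model, source, target)` helper that returns the causal-LM negative log-likelihood of `target` given `source`; it is not the authors' training code.

```python
# Minimal sketch of the multi-task objective in Eq. (2).
def multitask_loss(model, batch, lm_loss):
    total = 0.0
    for ex in batch:  # ex holds question q, answer a, and UniCode p
        q, a, p = ex["question"], ex["answer"], ex["unicode"]
        total += lm_loss(model, q, a)              # L_qa  (Eq. 3)
        total += lm_loss(model, q, p)              # L_qp  (Eq. 4)
        total += lm_loss(model, p, a)              # L_pa  (Eq. 5)
        total += lm_loss(model, q, p + "\n" + a)   # L_uot (Eq. 6): p then a given q
    return total / max(len(batch), 1)
```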
{
|
| 859 |
+
"type": "text",
|
| 860 |
+
"text": "4 Experimental Setup",
|
| 861 |
+
"text_level": 1,
|
| 862 |
+
"bbox": [
|
| 863 |
+
112,
|
| 864 |
+
514,
|
| 865 |
+
319,
|
| 866 |
+
530
|
| 867 |
+
],
|
| 868 |
+
"page_idx": 4
|
| 869 |
+
},
|
| 870 |
+
{
|
| 871 |
+
"type": "text",
|
| 872 |
+
"text": "4.1 Instruction Dataset",
|
| 873 |
+
"text_level": 1,
|
| 874 |
+
"bbox": [
|
| 875 |
+
112,
|
| 876 |
+
541,
|
| 877 |
+
312,
|
| 878 |
+
555
|
| 879 |
+
],
|
| 880 |
+
"page_idx": 4
|
| 881 |
+
},
|
| 882 |
+
{
|
| 883 |
+
"type": "text",
|
| 884 |
+
"text": "GPT-4 (gpt-4-1106-preview) (OpenAI, 2023) is used as the foundation model to generate the UNICODER-INSTRUCT. We randomly extract code snippets within 1024 tokens from the StarCoder dataset (Li et al., 2023b) and let GPT-4 summarize the code snippets as the universal code. Based on each code snippet and the corresponding universal code, a self-contained coding problem with a correct solution is created.",
|
| 885 |
+
"bbox": [
|
| 886 |
+
112,
|
| 887 |
+
562,
|
| 888 |
+
489,
|
| 889 |
+
706
|
| 890 |
+
],
|
| 891 |
+
"page_idx": 4
|
| 892 |
+
},
|
| 893 |
+
{
|
| 894 |
+
"type": "text",
|
| 895 |
+
"text": "4.2 Baselines",
|
| 896 |
+
"text_level": 1,
|
| 897 |
+
"bbox": [
|
| 898 |
+
112,
|
| 899 |
+
720,
|
| 900 |
+
233,
|
| 901 |
+
734
|
| 902 |
+
],
|
| 903 |
+
"page_idx": 4
|
| 904 |
+
},
|
| 905 |
+
{
|
| 906 |
+
"type": "text",
|
| 907 |
+
"text": "Proprietary Models. Based on a neural architecture known as generative pre-trained Transformers (GPT) (Vaswani et al., 2017; Radford et al., 2018), GPT-3.5 and GPT-4 are LLMs trained on massive datasets of text, code, math equations, and more. They are also trained to follow instructions (Ouyang et al., 2022), which allows them to generate human-like responses. We use GPT-3.5 Turbo and GPT-4 as the proprietary models because they perform excellently in various code understanding and generation tasks.",
|
| 908 |
+
"bbox": [
|
| 909 |
+
112,
|
| 910 |
+
741,
|
| 911 |
+
489,
|
| 912 |
+
917
|
| 913 |
+
],
|
| 914 |
+
"page_idx": 4
|
| 915 |
+
},
|
| 916 |
+
{
|
| 917 |
+
"type": "text",
|
| 918 |
+
"text": "Open-Source Models. To narrow the gap between open-source and closed-source models, a series of open-source models and instruction datasets are proposed to improve code LLMs and bootstrap their instruction-following ability. StarCoder (Li et al., 2023b), Code Llama (Rozière et al., 2023), and DeepSeek-Coder (Guo et al., 2024a) with different model sizes are introduced as the base models. OctoCoder (Muennighoff et al., 2023), WizardCoder (Luo et al., 2023), MagiCoder (Wei et al., 2023), and WaveCoder (Yu et al., 2023) are further fine-tuned on these base code LLMs.",
|
| 919 |
+
"bbox": [
|
| 920 |
+
507,
|
| 921 |
+
84,
|
| 922 |
+
884,
|
| 923 |
+
275
|
| 924 |
+
],
|
| 925 |
+
"page_idx": 4
|
| 926 |
+
},
|
| 927 |
+
{
|
| 928 |
+
"type": "text",
|
| 929 |
+
"text": "Decontamination. We apply data decontamination before training our UNICODER models to decontaminate the code snippets from the star-coder data (Li et al., 2023b), by removing exact matches from HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), DS-1000 (Lai et al., 2023), and GSM8K (Cobbe et al., 2021).",
|
| 930 |
+
"bbox": [
|
| 931 |
+
507,
|
| 932 |
+
287,
|
| 933 |
+
885,
|
| 934 |
+
399
|
| 935 |
+
],
|
| 936 |
+
"page_idx": 4
|
| 937 |
+
},
|
| 938 |
+
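The decontamination step above removes exact matches against the listed benchmarks. A minimal sketch is given below, assuming whitespace-normalized exact-string matching; the actual pipeline may normalize differently.

```python
# Illustrative exact-match decontamination against benchmark programs.
def normalize(code: str) -> str:
    return " ".join(code.split())  # collapse whitespace before comparing

def decontaminate(snippets: list, benchmark_solutions: list) -> list:
    banned = {normalize(s) for s in benchmark_solutions}  # HumanEval, MBPP, DS-1000, GSM8K
    return [c for c in snippets if normalize(c) not in banned]
```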
{
|
| 939 |
+
"type": "text",
|
| 940 |
+
"text": "4.3 Evaluation Benchmark",
|
| 941 |
+
"text_level": 1,
|
| 942 |
+
"bbox": [
|
| 943 |
+
507,
|
| 944 |
+
411,
|
| 945 |
+
737,
|
| 946 |
+
425
|
| 947 |
+
],
|
| 948 |
+
"page_idx": 4
|
| 949 |
+
},
|
| 950 |
+
{
|
| 951 |
+
"type": "text",
|
| 952 |
+
"text": "HumanEval. The HumanEval test set (Chen et al., 2021) is a crafted collection of 164 Python programming problems to test the abilities of code generation models. For each problem, there are on average 9.6 test cases to check whether the generated code works as intended. HumanEval has become one of the most popular benchmarks to measure how well these code-writing AI models perform, making it a key tool in the field of AI and machine learning for coding.",
|
| 953 |
+
"bbox": [
|
| 954 |
+
507,
|
| 955 |
+
432,
|
| 956 |
+
882,
|
| 957 |
+
593
|
| 958 |
+
],
|
| 959 |
+
"page_idx": 4
|
| 960 |
+
},
|
| 961 |
+
{
|
| 962 |
+
"type": "text",
|
| 963 |
+
"text": "MBPP. The MBPP dataset (Austin et al., 2021), comprising approximately 1,000 Python programming challenges sourced from a crowd of contributors, is tailored for beginners in programming, focusing on core principles and the usage of the standard library. The MBPP test set, comprising 500 problems, is selected to evaluate the few-shot inference of the code LLMs.",
|
| 964 |
+
"bbox": [
|
| 965 |
+
507,
|
| 966 |
+
602,
|
| 967 |
+
882,
|
| 968 |
+
731
|
| 969 |
+
],
|
| 970 |
+
"page_idx": 4
|
| 971 |
+
},
|
| 972 |
+
{
|
| 973 |
+
"type": "text",
|
| 974 |
+
"text": "MultiPL-E. The MultiPL-E test set (Cassano et al., 2022) translates the original HumanEval test set into 18 other programming languages, e.g., JavaScript, Java, TypeScript, C++, and Rust. We use MultiPL-E to evaluate the multilingual capabilities of the code LLMs.",
|
| 975 |
+
"bbox": [
|
| 976 |
+
507,
|
| 977 |
+
741,
|
| 978 |
+
884,
|
| 979 |
+
838
|
| 980 |
+
],
|
| 981 |
+
"page_idx": 4
|
| 982 |
+
},
|
| 983 |
+
{
|
| 984 |
+
"type": "text",
|
| 985 |
+
"text": "4.4 Evaluation Metrics",
|
| 986 |
+
"text_level": 1,
|
| 987 |
+
"bbox": [
|
| 988 |
+
507,
|
| 989 |
+
848,
|
| 990 |
+
707,
|
| 991 |
+
863
|
| 992 |
+
],
|
| 993 |
+
"page_idx": 4
|
| 994 |
+
},
|
| 995 |
+
{
|
| 996 |
+
"type": "text",
|
| 997 |
+
"text": "Pass@k. We adopt the Pass@k metric (Chen et al., 2021) to improve the reliability of our evaluation. We then count the total number of success",
|
| 998 |
+
"bbox": [
|
| 999 |
+
507,
|
| 1000 |
+
871,
|
| 1001 |
+
882,
|
| 1002 |
+
917
|
| 1003 |
+
],
|
| 1004 |
+
"page_idx": 4
|
| 1005 |
+
},
|
| 1006 |
+
{
|
| 1007 |
+
"type": "page_number",
|
| 1008 |
+
"text": "1816",
|
| 1009 |
+
"bbox": [
|
| 1010 |
+
482,
|
| 1011 |
+
928,
|
| 1012 |
+
521,
|
| 1013 |
+
940
|
| 1014 |
+
],
|
| 1015 |
+
"page_idx": 4
|
| 1016 |
+
},
|
| 1017 |
+
{
|
| 1018 |
+
"type": "text",
|
| 1019 |
+
"text": "fully passing test cases, denoted as $c$ , to compute the Pass@k, thereby enhancing the accuracy and consistency of the performance assessment.",
|
| 1020 |
+
"bbox": [
|
| 1021 |
+
112,
|
| 1022 |
+
84,
|
| 1023 |
+
487,
|
| 1024 |
+
131
|
| 1025 |
+
],
|
| 1026 |
+
"page_idx": 5
|
| 1027 |
+
},
|
| 1028 |
+
{
|
| 1029 |
+
"type": "equation",
|
| 1030 |
+
"text": "\n$$\n\\operatorname {Pass} @ \\mathrm {k} = \\mathbb {E} \\left[ 1 - \\frac {\\binom {n - c} {k}}{\\binom {n} {k}} \\right] \\tag {7}\n$$\n",
|
| 1031 |
+
"text_format": "latex",
|
| 1032 |
+
"bbox": [
|
| 1033 |
+
198,
|
| 1034 |
+
141,
|
| 1035 |
+
487,
|
| 1036 |
+
184
|
| 1037 |
+
],
|
| 1038 |
+
"page_idx": 5
|
| 1039 |
+
},
|
| 1040 |
+
{
|
| 1041 |
+
"type": "text",
|
| 1042 |
+
"text": "where $n$ is the total number of generated samples for each problem, and $c$ is the number of correct generated code snippets passing all the test cases $(n > k \\geq c)$ .",
|
| 1043 |
+
"bbox": [
|
| 1044 |
+
112,
|
| 1045 |
+
192,
|
| 1046 |
+
487,
|
| 1047 |
+
256
|
| 1048 |
+
],
|
| 1049 |
+
"page_idx": 5
|
| 1050 |
+
},
|
| 1051 |
+
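The estimator in Eq. (7) is the standard unbiased pass@k from Chen et al. (2021). The snippet below computes it in the usual numerically stable product form; it is an illustrative helper, not the paper's evaluation harness.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """n: samples generated per problem, c: samples passing all tests, k: budget."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: n=20 samples, c=5 correct, k=1 -> pass@1 = 5/20 = 0.25
assert abs(pass_at_k(20, 5, 1) - 0.25) < 1e-9
```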
{
|
| 1052 |
+
"type": "text",
|
| 1053 |
+
"text": "4.5 Implementation Details",
|
| 1054 |
+
"text_level": 1,
|
| 1055 |
+
"bbox": [
|
| 1056 |
+
112,
|
| 1057 |
+
267,
|
| 1058 |
+
352,
|
| 1059 |
+
282
|
| 1060 |
+
],
|
| 1061 |
+
"page_idx": 5
|
| 1062 |
+
},
|
| 1063 |
+
{
|
| 1064 |
+
"type": "text",
|
| 1065 |
+
"text": "We expand the open-source Evol-Instruct dataset evol-code-alpaca-v1 (Xu et al., 2023) with nearly 110K samples into the instruction dataset with the universal code. For the code snippets collected from starcoderdata $^{2}$ , we choose 5K code snippets of each language (Python, Javascript, C++, Java, Rust, and Go) to construct the synthetic instruction dataset with universal code. Finally, we obtain the instruction dataset UNICODER-INSTRUCT, which contains nearly 140K training samples. Code-Llama and DeepSeek-Coder-Base are used as the foundational code LLMs for supervised fine-tuning (SFT). We fine-tune these foundation LLMs on nearly 150K samples generated from evol-code-alpaca-v1 and the starcoder pre-training data. UNICODER is fine-tuned on Stanford_Alpaca $^{3}$ with 8 NVIDIA A100-80GB GPUs. The learning rate first increases to $8 \\times 10^{-5}$ over 50 warmup steps and then follows a cosine decay scheduler. We adopt the Adam optimizer (Kingma and Ba, 2015) with a global batch size of 128 samples, truncating sentences to 1536 tokens.",
|
| 1066 |
+
"bbox": [
|
| 1067 |
+
112,
|
| 1068 |
+
287,
|
| 1069 |
+
489,
|
| 1070 |
+
657
|
| 1071 |
+
],
|
| 1072 |
+
"page_idx": 5
|
| 1073 |
+
},
|
| 1074 |
+
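The reported schedule (linear warmup to 8e-5 over 50 steps, then cosine decay) can be expressed as a simple step-to-learning-rate function. The sketch below is illustrative only; the total step count is an assumption, since it is not stated in the text.

```python
import math

def lr_at_step(step: int, peak_lr: float = 8e-5, warmup: int = 50, total: int = 1000) -> float:
    # Linear warmup for the first `warmup` steps, then cosine decay to zero.
    if step < warmup:
        return peak_lr * (step + 1) / warmup
    progress = (step - warmup) / max(total - warmup, 1)
    return 0.5 * peak_lr * (1.0 + math.cos(math.pi * progress))
```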
{
|
| 1075 |
+
"type": "text",
|
| 1076 |
+
"text": "5 Results and Discussion",
|
| 1077 |
+
"text_level": 1,
|
| 1078 |
+
"bbox": [
|
| 1079 |
+
112,
|
| 1080 |
+
669,
|
| 1081 |
+
346,
|
| 1082 |
+
684
|
| 1083 |
+
],
|
| 1084 |
+
"page_idx": 5
|
| 1085 |
+
},
|
| 1086 |
+
{
|
| 1087 |
+
"type": "text",
|
| 1088 |
+
"text": "5.1 Main Results",
|
| 1089 |
+
"text_level": 1,
|
| 1090 |
+
"bbox": [
|
| 1091 |
+
112,
|
| 1092 |
+
695,
|
| 1093 |
+
265,
|
| 1094 |
+
709
|
| 1095 |
+
],
|
| 1096 |
+
"page_idx": 5
|
| 1097 |
+
},
|
| 1098 |
+
{
|
| 1099 |
+
"type": "text",
|
| 1100 |
+
"text": "Python Code Generation. Table 1 shows that UNICODER significantly beats previous strong open-source baselines using UoT, closing the gap with GPT-3.5 and GPT-4. Magicoder (Wei et al., 2023) and Wavecoder (Yu et al., 2023) both prove the effectiveness of instruction datasets from code snippets. Further, UNICODER outperforms the WizardCoder with 15B parameters and EvolInstruct techniques with the help of the UniCode.",
|
| 1101 |
+
"bbox": [
|
| 1102 |
+
112,
|
| 1103 |
+
715,
|
| 1104 |
+
489,
|
| 1105 |
+
859
|
| 1106 |
+
],
|
| 1107 |
+
"page_idx": 5
|
| 1108 |
+
},
|
| 1109 |
+
{
|
| 1110 |
+
"type": "text",
|
| 1111 |
+
"text": "$^{2}$ https://huggingface.co/datasets/bigcode/starcoderdata \n $^{3}$ https://github.com/tatsu-lab/stanford_alpaca",
|
| 1112 |
+
"bbox": [
|
| 1113 |
+
112,
|
| 1114 |
+
866,
|
| 1115 |
+
435,
|
| 1116 |
+
917
|
| 1117 |
+
],
|
| 1118 |
+
"page_idx": 5
|
| 1119 |
+
},
|
| 1120 |
+
{
|
| 1121 |
+
"type": "text",
|
| 1122 |
+
"text": "Multilingual Code Understanding. Table 2 shows that UNICODER significantly outperforms strong baselines Magicoder and WaveCoder, which both leverage the code snippets to construct the instruction dataset. Given the multilingual correct code snippet, the code LLM is tasked to generate an explanation of the code and then regenerate the code only based on its own explanation. For the different backbones (Code Llama and DeepseekCoder), our method beats most previous methods, especially in other languages, which demonstrates that UNICODER-INSTRUCT can bring the capability of multilingual understanding and generation.",
|
| 1123 |
+
"bbox": [
|
| 1124 |
+
507,
|
| 1125 |
+
84,
|
| 1126 |
+
884,
|
| 1127 |
+
294
|
| 1128 |
+
],
|
| 1129 |
+
"page_idx": 5
|
| 1130 |
+
},
|
| 1131 |
+
{
|
| 1132 |
+
"type": "text",
|
| 1133 |
+
"text": "5.2 Discussion",
|
| 1134 |
+
"text_level": 1,
|
| 1135 |
+
"bbox": [
|
| 1136 |
+
507,
|
| 1137 |
+
305,
|
| 1138 |
+
638,
|
| 1139 |
+
319
|
| 1140 |
+
],
|
| 1141 |
+
"page_idx": 5
|
| 1142 |
+
},
|
| 1143 |
+
{
|
| 1144 |
+
"type": "text",
|
| 1145 |
+
"text": "Ablation Study. To verify the efficacy of each component, we conduct the ablation study step by step on HumanEval and MBPP. In Table 3, we observe that removing the multi-tasks objective (only keeping the UoT objective: Equation 6) will have a $-1.6$ performance drop in HumanEval and a $-1.3$ drop in MBPP. Removing UniCode will further degrade the performance. The results support the effectiveness of each component of UNICODER.",
|
| 1146 |
+
"bbox": [
|
| 1147 |
+
507,
|
| 1148 |
+
324,
|
| 1149 |
+
882,
|
| 1150 |
+
470
|
| 1151 |
+
],
|
| 1152 |
+
"page_idx": 5
|
| 1153 |
+
},
|
| 1154 |
+
{
|
| 1155 |
+
"type": "text",
|
| 1156 |
+
"text": "Effect on Universal Code. To discuss the effect of the different formats of the universal code, we use different definitions of universal code for UNICODER. Specifically, we randomly sample 5K samples to generate the instruction dataset with different formats of UniCode.",
|
| 1157 |
+
"bbox": [
|
| 1158 |
+
507,
|
| 1159 |
+
479,
|
| 1160 |
+
882,
|
| 1161 |
+
574
|
| 1162 |
+
],
|
| 1163 |
+
"page_idx": 5
|
| 1164 |
+
},
|
| 1165 |
+
{
|
| 1166 |
+
"type": "list",
|
| 1167 |
+
"sub_type": "text",
|
| 1168 |
+
"list_items": [
|
| 1169 |
+
"- UniCode 1: It describes the naming conventions, variable declaration, operators, conditional statements, loops, and function structure that pseudocode should have.",
|
| 1170 |
+
"- UniCode 2: It separates the first set of standards and provides code examples for each, instead of applying them all together in the examples.",
|
| 1171 |
+
"- UniCode 3: It describes the code structure, variable rules, control structures, functions, comments, and assignment rules that pseudocode should have.",
|
| 1172 |
+
"- UniCode 4: It is similar to the first standard but specifies type-free names for variables.",
|
| 1173 |
+
"- UniCode 5: It provides an abstract, high-level architectural description, without setting standards for the code itself.",
|
| 1174 |
+
"- UniCode 6: It uses latex algorithm and algorithmic packages for description."
|
| 1175 |
+
],
|
| 1176 |
+
"bbox": [
|
| 1177 |
+
531,
|
| 1178 |
+
586,
|
| 1179 |
+
884,
|
| 1180 |
+
917
|
| 1181 |
+
],
|
| 1182 |
+
"page_idx": 5
|
| 1183 |
+
},
|
| 1184 |
+
{
|
| 1185 |
+
"type": "page_number",
|
| 1186 |
+
"text": "1817",
|
| 1187 |
+
"bbox": [
|
| 1188 |
+
482,
|
| 1189 |
+
927,
|
| 1190 |
+
519,
|
| 1191 |
+
940
|
| 1192 |
+
],
|
| 1193 |
+
"page_idx": 5
|
| 1194 |
+
},
|
| 1195 |
+
{
|
| 1196 |
+
"type": "table",
|
| 1197 |
+
"img_path": "images/dcb254f98d184af0bcc8e68b6be22b264cc9677ae42612b982401ad584f45b81.jpg",
|
| 1198 |
+
"table_caption": [
"Table 1: Evaluation results of Pass@1 on the HumanEval and MBPP benchmark. We use self-reported scores whenever available. All methods use greedy decoding and we use the reported scores of the previous work."
],
|
| 1199 |
+
"table_footnote": [],
|
| 1200 |
+
"table_body": "<table><tr><td>Models</td><td>Base Model</td><td>Params</td><td>Instruction Data</td><td>Model Weight</td><td>HumanEval</td><td>MBPP</td></tr><tr><td colspan=\"7\">Proprietary Models</td></tr><tr><td>GPT-3.5</td><td>-</td><td>-</td><td>-</td><td>-</td><td>72.6</td><td>81.6</td></tr><tr><td>GPT-4</td><td>-</td><td>-</td><td>-</td><td>-</td><td>85.4</td><td>83.0</td></tr><tr><td colspan=\"7\">Open-source Models</td></tr><tr><td>StarCoder (Li et al., 2023b)</td><td>-</td><td>15B</td><td>X</td><td>✓</td><td>33.6</td><td>43.3</td></tr><tr><td>WizardCoder (Luo et al., 2023)</td><td>StarCoder</td><td>15B</td><td>✓</td><td>✓</td><td>57.3</td><td>51.8</td></tr><tr><td>OctoCoder (Muennighoff et al., 2023)</td><td>StarCoder</td><td>15B</td><td>✓</td><td>✓</td><td>46.2</td><td>43.5</td></tr><tr><td>WaveCoder-SC (Muennighoff et al., 2023)</td><td>StarCoder</td><td>15B</td><td>✓</td><td>✓</td><td>50.5</td><td>51.0</td></tr><tr><td>Code-Llama (Rozière et al., 2023)</td><td>-</td><td>7B</td><td>X</td><td>✓</td><td>33.5</td><td>41.4</td></tr><tr><td>Code-Llama-Instruct (Rozière et al., 2023)</td><td>Code Llama</td><td>7B</td><td>✓</td><td>✓</td><td>34.8</td><td>44.4</td></tr><tr><td>WaveCoder-CL (Yu et al., 2023)</td><td>Code Llama</td><td>7B</td><td>✓</td><td>✓</td><td>48.1</td><td>47.2</td></tr><tr><td>Magicoder-CL (Wei et al., 2023)</td><td>Code Llama</td><td>7B</td><td>✓</td><td>✓</td><td>60.4</td><td>64.2</td></tr><tr><td>UNICODER (our method)</td><td>Code Llama</td><td>7B</td><td>✓</td><td>✓</td><td>65.4</td><td>65.2</td></tr><tr><td>DeepseekCoder (Guo et al., 2024a)</td><td>-</td><td>6.7B</td><td>X</td><td>✓</td><td>49.4</td><td>60.6</td></tr><tr><td>WaveCoder-DS (Yu et al., 2023)</td><td>Deepseek-Coder</td><td>6.7B</td><td>✓</td><td>✓</td><td>64.0</td><td>62.8</td></tr><tr><td>UNICODER (our method)</td><td>Deepseek-Coder</td><td>6.7B</td><td>✓</td><td>✓</td><td>70.6</td><td>64.3</td></tr></table>",
|
| 1201 |
+
"bbox": [
|
| 1202 |
+
176,
|
| 1203 |
+
80,
|
| 1204 |
+
826,
|
| 1205 |
+
280
|
| 1206 |
+
],
|
| 1207 |
+
"page_idx": 6
|
| 1208 |
+
},
|
| 1209 |
+
{
|
| 1210 |
+
"type": "table",
|
| 1211 |
+
"img_path": "images/e63127ca84ac1180e532f34a75fd865acc7575a08027b810aac9e9179f00b985.jpg",
|
| 1212 |
+
"table_caption": [
|
| 1213 |
+
"Table 2: Evaluation results of Pass@1 (%) performance on the MultiPL-E benchmark. The baseline results are partly from the previous work (Wei et al., 2023)."
|
| 1214 |
+
],
|
| 1215 |
+
"table_footnote": [],
|
| 1216 |
+
"table_body": "<table><tr><td rowspan=\"2\">Model</td><td rowspan=\"2\">Params</td><td colspan=\"7\">Programming Language</td></tr><tr><td>Java</td><td>Javascript</td><td>C++</td><td>PHP</td><td>Swift</td><td>Rust</td><td>Avg.</td></tr><tr><td colspan=\"9\">Proprietary models</td></tr><tr><td>GPT-3.5</td><td>-</td><td>69.2</td><td>67.1</td><td>63.4</td><td>60.9</td><td>-</td><td>-</td><td>-</td></tr><tr><td>GPT-4</td><td>-</td><td>81.6</td><td>78.0</td><td>76.4</td><td>77.2</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan=\"9\">Open-source models</td></tr><tr><td>CodeLlama (Rozière et al., 2023)</td><td>34B</td><td>40.2</td><td>41.7</td><td>41.4</td><td>40.4</td><td>35.3</td><td>38.7</td><td>39.6</td></tr><tr><td>CodeLlama-Python (Rozière et al., 2023)</td><td>34B</td><td>39.5</td><td>44.7</td><td>39.1</td><td>39.8</td><td>34.3</td><td>39.7</td><td>39.5</td></tr><tr><td>CodeLlama-Instruct (Rozière et al., 2023)</td><td>34B</td><td>41.5</td><td>45.9</td><td>41.5</td><td>37.0</td><td>37.6</td><td>39.3</td><td>40.5</td></tr><tr><td>WizardCoder-CL (Luo et al., 2023)</td><td>34B</td><td>44.9</td><td>55.3</td><td>47.2</td><td>47.2</td><td>44.3</td><td>46.2</td><td>47.5</td></tr><tr><td>StarCoderBase (Li et al., 2023b)</td><td>15B</td><td>28.5</td><td>31.7</td><td>30.6</td><td>26.8</td><td>16.7</td><td>24.5</td><td>26.5</td></tr><tr><td>StarCoder (Li et al., 2023b)</td><td>15B</td><td>30.2</td><td>30.8</td><td>31.6</td><td>26.1</td><td>22.7</td><td>21.8</td><td>27.2</td></tr><tr><td>WizardCoder-SC (Luo et al., 2023)</td><td>15B</td><td>35.8</td><td>41.9</td><td>39.0</td><td>39.3</td><td>33.7</td><td>27.1</td><td>36.1</td></tr><tr><td>CodeLlama (Rozière et al., 2023)</td><td>7B</td><td>29.3</td><td>31.7</td><td>27.0</td><td>25.1</td><td>25.6</td><td>25.5</td><td>27.4</td></tr><tr><td>CodeLlama-Python (Rozière et al., 2023)</td><td>7B</td><td>42.4</td><td>51.9</td><td>42.3</td><td>46.5</td><td>29.4</td><td>33.6</td><td>29.7</td></tr><tr><td>UNICODER (Our method)</td><td>7B</td><td>46.4</td><td>50.2</td><td>39.2</td><td>40.4</td><td>41.2</td><td>32.4</td><td>41.6</td></tr></table>",
|
| 1217 |
+
"bbox": [
|
| 1218 |
+
173,
|
| 1219 |
+
331,
|
| 1220 |
+
826,
|
| 1221 |
+
576
|
| 1222 |
+
],
|
| 1223 |
+
"page_idx": 6
|
| 1224 |
+
},
|
| 1225 |
+
{
|
| 1226 |
+
"type": "table",
|
| 1227 |
+
"img_path": "images/5a63424b9afb0f222824470b44789cb80431a364db6fd59fcc7f9ba11e64140e.jpg",
|
| 1228 |
+
"table_caption": [
|
| 1229 |
+
"Table 3: Ablation study of our proposed method on HumanEval and MBPP. UNICODER is fine-tuned on the UNICODER-INSTRUCT with the multi-task objectives."
|
| 1230 |
+
],
|
| 1231 |
+
"table_footnote": [],
|
| 1232 |
+
"table_body": "<table><tr><td>ID</td><td>Methods</td><td>HumanEval</td><td>MBPP</td></tr><tr><td>①</td><td>UNICODER</td><td>70.6</td><td>64.3</td></tr><tr><td>②</td><td>① - Multi-tasks Objective</td><td>67.4</td><td>60.2</td></tr><tr><td>③</td><td>② - Universal Code</td><td>66.8</td><td>59.8</td></tr></table>",
|
| 1233 |
+
"bbox": [
|
| 1234 |
+
119,
|
| 1235 |
+
637,
|
| 1236 |
+
482,
|
| 1237 |
+
705
|
| 1238 |
+
],
|
| 1239 |
+
"page_idx": 6
|
| 1240 |
+
},
|
| 1241 |
+
{
|
| 1242 |
+
"type": "text",
|
| 1243 |
+
"text": "In Table 4, we can observe that UniCode $1 \\sim$ UniCode 4 achieve better performance. Compared to the universal code formats UniCode 5 and UniCode 6, UniCode $1 \\sim$ UniCode 4 have a clear definition and common structure, which brings more support for code generation. Notably, experiment ⑦ performs the best by combining the training data of $1 \\sim 4$ . The experimental results show that the concrete defi",
|
| 1244 |
+
"bbox": [
|
| 1245 |
+
112,
|
| 1246 |
+
774,
|
| 1247 |
+
490,
|
| 1248 |
+
919
|
| 1249 |
+
],
|
| 1250 |
+
"page_idx": 6
|
| 1251 |
+
},
|
| 1252 |
+
{
|
| 1253 |
+
"type": "table",
|
| 1254 |
+
"img_path": "images/1b9e9681ba89eb9ef4b9719b9c152c657f814caa061929d6f7fcbb82c2db2859.jpg",
|
| 1255 |
+
"table_caption": [
|
| 1256 |
+
"Table 4: Evaluation results of our method with different formats of the universal code."
|
| 1257 |
+
],
|
| 1258 |
+
"table_footnote": [],
|
| 1259 |
+
"table_body": "<table><tr><td>ID</td><td>Methods</td><td>HumanEval</td><td>MBPP</td></tr><tr><td>①</td><td>UniCode 1</td><td>53.2</td><td>51.5</td></tr><tr><td>②</td><td>UniCode 2</td><td>52.8</td><td>51.2</td></tr><tr><td>③</td><td>UniCode 3</td><td>53.5</td><td>50.5</td></tr><tr><td>④</td><td>UniCode 4</td><td>53.8</td><td>49.5</td></tr><tr><td>⑤</td><td>UniCode 5</td><td>49.5</td><td>50.2</td></tr><tr><td>⑥</td><td>UniCode 6</td><td>48.2</td><td>48.4</td></tr><tr><td>⑦</td><td>UniCode 1~4</td><td>55.5</td><td>52.2</td></tr></table>",
|
| 1260 |
+
"bbox": [
|
| 1261 |
+
542,
|
| 1262 |
+
637,
|
| 1263 |
+
850,
|
| 1264 |
+
760
|
| 1265 |
+
],
|
| 1266 |
+
"page_idx": 6
|
| 1267 |
+
},
|
| 1268 |
+
{
|
| 1269 |
+
"type": "text",
|
| 1270 |
+
"text": "Table 4: Evaluation results of our method with different formats of the universal code.",
|
| 1271 |
+
"bbox": [
|
| 1272 |
+
507,
|
| 1273 |
+
770,
|
| 1274 |
+
882,
|
| 1275 |
+
797
|
| 1276 |
+
],
|
| 1277 |
+
"page_idx": 6
|
| 1278 |
+
},
|
| 1279 |
+
{
|
| 1280 |
+
"type": "text",
|
| 1281 |
+
"text": "nition of UniCode and the combination of it can effectively improve the model performance.",
|
| 1282 |
+
"bbox": [
|
| 1283 |
+
507,
|
| 1284 |
+
816,
|
| 1285 |
+
880,
|
| 1286 |
+
847
|
| 1287 |
+
],
|
| 1288 |
+
"page_idx": 6
|
| 1289 |
+
},
|
| 1290 |
+
{
|
| 1291 |
+
"type": "text",
|
| 1292 |
+
"text": "5.3 Code-UniCode-Code",
|
| 1293 |
+
"text_level": 1,
|
| 1294 |
+
"bbox": [
|
| 1295 |
+
507,
|
| 1296 |
+
863,
|
| 1297 |
+
726,
|
| 1298 |
+
877
|
| 1299 |
+
],
|
| 1300 |
+
"page_idx": 6
|
| 1301 |
+
},
|
| 1302 |
+
{
|
| 1303 |
+
"type": "text",
|
| 1304 |
+
"text": "To compare the capabilities of different code LLMs, we create a test set by prompting the code",
|
| 1305 |
+
"bbox": [
|
| 1306 |
+
507,
|
| 1307 |
+
887,
|
| 1308 |
+
882,
|
| 1309 |
+
919
|
| 1310 |
+
],
|
| 1311 |
+
"page_idx": 6
|
| 1312 |
+
},
|
| 1313 |
+
{
|
| 1314 |
+
"type": "page_number",
|
| 1315 |
+
"text": "1818",
|
| 1316 |
+
"bbox": [
|
| 1317 |
+
482,
|
| 1318 |
+
928,
|
| 1319 |
+
519,
|
| 1320 |
+
940
|
| 1321 |
+
],
|
| 1322 |
+
"page_idx": 6
|
| 1323 |
+
},
|
| 1324 |
+
{
|
| 1325 |
+
"type": "table",
|
| 1326 |
+
"img_path": "images/5f0cc445156f8021d313100b79366a5a3b32538bcf33f6b617a253d3c49a13ba.jpg",
|
| 1327 |
+
"table_caption": [],
|
| 1328 |
+
"table_footnote": [],
|
| 1329 |
+
"table_body": "<table><tr><td>Method</td><td>Params</td><td>Python</td><td>Other Languages</td></tr><tr><td>Code-Llama-Instruct</td><td>7B</td><td>33.3</td><td>26.2</td></tr><tr><td>Code-Llama-Alpaca</td><td>7B</td><td>44.2</td><td>29.1</td></tr><tr><td>UNICODER</td><td>7B</td><td>45.2</td><td>31.3</td></tr></table>",
|
| 1330 |
+
"bbox": [
|
| 1331 |
+
119,
|
| 1332 |
+
80,
|
| 1333 |
+
482,
|
| 1334 |
+
142
|
| 1335 |
+
],
|
| 1336 |
+
"page_idx": 7
|
| 1337 |
+
},
|
| 1338 |
+
{
|
| 1339 |
+
"type": "text",
|
| 1340 |
+
"text": "Table 5: Pass@1 scores of our method UNICODER and two Code-Llama baselines for Code-UniCode-Code.",
|
| 1341 |
+
"bbox": [
|
| 1342 |
+
112,
|
| 1343 |
+
152,
|
| 1344 |
+
487,
|
| 1345 |
+
181
|
| 1346 |
+
],
|
| 1347 |
+
"page_idx": 7
|
| 1348 |
+
},
|
| 1349 |
+
{
|
| 1350 |
+
"type": "text",
|
| 1351 |
+
"text": "LLM to generate UniCode and translate it into the executable code. We check the correctness of each translated code with the test cases, denoted as Pass@1 of the universal code. Code-Llama-7B is fine-tuned on the Code Alpaca dataset and our dataset UNICODER-INSTRUCT separately. The results of fine-tuned Code-Llama models on UNICODER-BENCH are shown in Table 5. Our method UNICODER is more accurate in passing the test cases than the Code-Llama baselines, demonstrating its excellent code understanding and generation abilities.",
|
| 1352 |
+
"bbox": [
|
| 1353 |
+
112,
|
| 1354 |
+
195,
|
| 1355 |
+
489,
|
| 1356 |
+
387
|
| 1357 |
+
],
|
| 1358 |
+
"page_idx": 7
|
| 1359 |
+
},
|
| 1360 |
+
{
|
| 1361 |
+
"type": "text",
|
| 1362 |
+
"text": "6 Related Work",
|
| 1363 |
+
"text_level": 1,
|
| 1364 |
+
"bbox": [
|
| 1365 |
+
112,
|
| 1366 |
+
400,
|
| 1367 |
+
270,
|
| 1368 |
+
416
|
| 1369 |
+
],
|
| 1370 |
+
"page_idx": 7
|
| 1371 |
+
},
|
| 1372 |
+
{
|
| 1373 |
+
"type": "text",
|
| 1374 |
+
"text": "Code Understanding and Generation. Code understanding and generation are key tasks that substantially facilitate the project development process, including code generation (Chen et al., 2021; Austin et al., 2021; Zhang et al., 2023), code translation (Szafraniec et al., 2023), automated testing (Deng et al., 2023), bug fixing (Muennighoff et al., 2023), code refinement (Liu et al., 2023c), code question answering (Liu and Wan, 2021), and code summarization (Ahmad et al., 2020). Chai et al. (2023) have undertaken extensive endeavors to bridge natural language and programming languages. Mishra et al. (2023) show that pseudocode, as a less ambiguous prompt style, improves the performance of NLP tasks. Oda et al. (2015) use traditional machine learning to achieve code-to-pseudocode conversion. Jiang et al. (2022) also show that designers and programmers can speed up the prototyping process and ground communication between collaborators via prompt-based prototyping. To verify that the generated code is correct, there are some code synthesis evaluation frameworks, including EvalPlus (Liu et al., 2023b), HumanEval (Chen et al., 2021), HumanEval-X (Zheng et al., 2023), and MBPP (Austin et al., 2021).",
|
| 1375 |
+
"bbox": [
|
| 1376 |
+
112,
|
| 1377 |
+
426,
|
| 1378 |
+
489,
|
| 1379 |
+
843
|
| 1380 |
+
],
|
| 1381 |
+
"page_idx": 7
|
| 1382 |
+
},
|
| 1383 |
+
{
|
| 1384 |
+
"type": "text",
|
| 1385 |
+
"text": "Large Language Models for Code. Since CodeBERT (Feng et al., 2020) first connected code tasks with pre-trained models, large language models for code have developed rapidly, demonstrating ex",
|
| 1386 |
+
"bbox": [
|
| 1387 |
+
112,
|
| 1388 |
+
854,
|
| 1389 |
+
489,
|
| 1390 |
+
919
|
| 1391 |
+
],
|
| 1392 |
+
"page_idx": 7
|
| 1393 |
+
},
|
| 1394 |
+
{
|
| 1395 |
+
"type": "text",
|
| 1396 |
+
"text": "traordinary performance on almost all code tasks, rather than a single task. Prominent large models include Codex (Chen et al., 2021), AlphaCode (Li et al., 2022), SantaCoder (Allal et al., 2023), Starcoder (Li et al., 2023b), WizardCoder (Luo et al., 2023), InCoder (Fried et al., 2022), CodeT5 (Wang et al., 2021), CodeGeeX (Zheng et al., 2023), Code Llama (Rozière et al., 2023), and CodeQWen (Bai et al., 2023). To improve the performance of code generation, researchers have used optimized prompts (Liu et al., 2023a; Reynolds and McDonell, 2021; Zan et al., 2023; Beurer-Kellner et al., 2023), incorporated test cases (Chen et al., 2023), and adopted collaborative roles (Dong et al., 2023). There are also some related studies on using large language models for other code tasks, such as dynamic programming (Dagan et al., 2023), compiler optimization (Cummins et al., 2023), multilingual prompts (Di et al., 2023), and program of thoughts (PoT) (Chen et al., 2022).",
|
| 1397 |
+
"bbox": [
|
| 1398 |
+
507,
|
| 1399 |
+
84,
|
| 1400 |
+
884,
|
| 1401 |
+
407
|
| 1402 |
+
],
|
| 1403 |
+
"page_idx": 7
|
| 1404 |
+
},
|
| 1405 |
+
{
|
| 1406 |
+
"type": "text",
|
| 1407 |
+
"text": "Chain-of-Thought Prompting. To unleash the potential of LLMs in addressing complex reasoning tasks, chain-of-thought (CoT) prompting (Wei et al., 2022b; Kojima et al., 2022) extends in-context learning with step-by-step reasoning processes, which handles complex reasoning tasks in the field of the code and mathematics by encouraging them to engage in step-by-step reasoning processes. Following this line of research, X-of-Thought (XoT) reasoning (CoT and its structural variants further) (Chai et al., 2024; Yao et al., 2023; Li et al., 2023a; Lei et al., 2023; Guo et al., 2023; Ji et al., 2024; Guo et al., 2024b) further expands the capabilities and applications of LLMs in complex reasoning and planning scenarios.",
|
| 1408 |
+
"bbox": [
|
| 1409 |
+
507,
|
| 1410 |
+
420,
|
| 1411 |
+
884,
|
| 1412 |
+
663
|
| 1413 |
+
],
|
| 1414 |
+
"page_idx": 7
|
| 1415 |
+
},
|
| 1416 |
+
{
|
| 1417 |
+
"type": "text",
|
| 1418 |
+
"text": "Intermediate Representation. In the field of natural language processing, many works use intermediate representations (Gan et al., 2021; Yang et al., 2022, 2024, 2019, 2020b,a; Liang et al., 2024), for example in text generation and translation. The universal code is used as the intermediate representation, which typically omits details that are essential for the machine implementation of the algorithm. We follow a coarse-to-fine pattern for code generation and translation, where the universal code first summarizes the algorithm process and the programming language then gives the accurate solution. UniCode provides explicit guidance for code generation, analogous to chain-of-thought in LLMs.",
|
| 1419 |
+
"bbox": [
|
| 1420 |
+
507,
|
| 1421 |
+
677,
|
| 1422 |
+
884,
|
| 1423 |
+
917
|
| 1424 |
+
],
|
| 1425 |
+
"page_idx": 7
|
| 1426 |
+
},
|
| 1427 |
+
{
|
| 1428 |
+
"type": "page_number",
|
| 1429 |
+
"text": "1819",
|
| 1430 |
+
"bbox": [
|
| 1431 |
+
482,
|
| 1432 |
+
928,
|
| 1433 |
+
521,
|
| 1434 |
+
940
|
| 1435 |
+
],
|
| 1436 |
+
"page_idx": 7
|
| 1437 |
+
},
|
| 1438 |
+
{
|
| 1439 |
+
"type": "text",
|
| 1440 |
+
"text": "7 Conclusion",
|
| 1441 |
+
"text_level": 1,
|
| 1442 |
+
"bbox": [
|
| 1443 |
+
114,
|
| 1444 |
+
84,
|
| 1445 |
+
247,
|
| 1446 |
+
98
|
| 1447 |
+
],
|
| 1448 |
+
"page_idx": 8
|
| 1449 |
+
},
|
| 1450 |
+
{
|
| 1451 |
+
"type": "text",
|
| 1452 |
+
"text": "In this work, we put forth a state-of-the-art framework UNICODER for both code translation and code generation. Using the universal code UniCode as the intermediate representation, we effectively bridge different programming languages and facilitate code tasks. In addition, we collect a dataset UNICODER-INSTRUCT with 140K instruction instances from existing instruction datasets and the raw code snippets. After being fine-tuned on UNICODER-INSTRUCT with multi-task learning objectives, our model generates UniCode and translates it into the final answer (executable code). The evaluation results on code translation and generation tasks demonstrate that our method significantly improves the generalization ability, showing the efficacy and superiority of UNICODER.",
|
| 1453 |
+
"bbox": [
|
| 1454 |
+
112,
|
| 1455 |
+
109,
|
| 1456 |
+
492,
|
| 1457 |
+
367
|
| 1458 |
+
],
|
| 1459 |
+
"page_idx": 8
|
| 1460 |
+
},
|
| 1461 |
+
{
|
| 1462 |
+
"type": "text",
|
| 1463 |
+
"text": "Limitations",
|
| 1464 |
+
"text_level": 1,
|
| 1465 |
+
"bbox": [
|
| 1466 |
+
112,
|
| 1467 |
+
378,
|
| 1468 |
+
220,
|
| 1469 |
+
393
|
| 1470 |
+
],
|
| 1471 |
+
"page_idx": 8
|
| 1472 |
+
},
|
| 1473 |
+
{
|
| 1474 |
+
"type": "text",
|
| 1475 |
+
"text": "We acknowledge the following limitations of this study: (1) The evaluation focuses on benchmark datasets (HumanEval, MBPP, and MultiPL-E), and the model's effectiveness in real-world programming scenarios or industry applications is not fully explored. (2) Our method is developed and evaluated primarily on programming language benchmarks. Its effectiveness in other domains or for non-programming-related tasks is not assessed, which limits the generalizability of our findings.",
|
| 1476 |
+
"bbox": [
|
| 1477 |
+
112,
|
| 1478 |
+
404,
|
| 1479 |
+
489,
|
| 1480 |
+
565
|
| 1481 |
+
],
|
| 1482 |
+
"page_idx": 8
|
| 1483 |
+
},
|
| 1484 |
+
{
|
| 1485 |
+
"type": "text",
|
| 1486 |
+
"text": "Acknowledgements",
|
| 1487 |
+
"text_level": 1,
|
| 1488 |
+
"bbox": [
|
| 1489 |
+
114,
|
| 1490 |
+
577,
|
| 1491 |
+
243,
|
| 1492 |
+
593
|
| 1493 |
+
],
|
| 1494 |
+
"page_idx": 8
|
| 1495 |
+
},
|
| 1496 |
+
{
|
| 1497 |
+
"type": "text",
|
| 1498 |
+
"text": "This work was supported in part by the National Natural Science Foundation of China (Grant Nos. U1636211, U2333205, 61672081, 62302025, 62276017), a fund project: State Grid Co., Ltd. Technology R&D Project (ProjectName: Research on Key Technologies of Data Scenario-based Security Governance and Emergency Blocking in Power Monitoring System, Project No.: 5108-202303439A-3-2-ZN), the 2022 CCF-NSFOCUS Kun-Peng Scientific Research Fund and the Opening Project of Shanghai Trusted Industrial Control Platform and the State Key Laboratory of Complex & Critical Software Environment (Grant No. SKLSDE-2021ZX-18).",
|
| 1499 |
+
"bbox": [
|
| 1500 |
+
112,
|
| 1501 |
+
602,
|
| 1502 |
+
489,
|
| 1503 |
+
827
|
| 1504 |
+
],
|
| 1505 |
+
"page_idx": 8
|
| 1506 |
+
},
|
| 1507 |
+
{
|
| 1508 |
+
"type": "text",
|
| 1509 |
+
"text": "References",
|
| 1510 |
+
"text_level": 1,
|
| 1511 |
+
"bbox": [
|
| 1512 |
+
114,
|
| 1513 |
+
854,
|
| 1514 |
+
213,
|
| 1515 |
+
871
|
| 1516 |
+
],
|
| 1517 |
+
"page_idx": 8
|
| 1518 |
+
},
|
| 1519 |
+
{
|
| 1520 |
+
"type": "ref_text",
|
| 1521 |
+
"text": "Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2020. A transformer-based approach for source code summarization. In Proceed",
|
| 1522 |
+
"bbox": [
|
| 1523 |
+
114,
|
| 1524 |
+
877,
|
| 1525 |
+
489,
|
| 1526 |
+
917
|
| 1527 |
+
],
|
| 1528 |
+
"page_idx": 8
|
| 1529 |
+
},
|
| 1530 |
+
{
|
| 1531 |
+
"type": "list",
|
| 1532 |
+
"sub_type": "ref_text",
|
| 1533 |
+
"list_items": [
|
| 1534 |
+
"ings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 4998-5007. Association for Computational Linguistics.",
|
| 1535 |
+
"Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, et al. 2023. SantaCoder: Don't reach for the stars! arXiv preprint arXiv:2301.03988.",
|
| 1536 |
+
"Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732.",
|
| 1537 |
+
"Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. 2023. Qwen technical report. arXiv preprint arXiv:2309.16609, abs/2309.16609.",
|
| 1538 |
+
"Luca Beurer-Kellner, Marc Fischer, and Martin T. Vechev. 2023. Prompting is programming: A query language for large language models. Proc. ACM Program. Lang., 7(PLDI):1946-1969.",
|
| 1539 |
+
"Federico Cassano, John Gouwar, Daniel Nguyen, Sydney Nguyen, Luna Phipps-Costin, Donald Pinckney, Ming-Ho Yee, Yangtian Zi, Carolyn Jane Anderson, Molly Q Feldman, et al. 2022. MultiPL-E: A scalable and extensible approach to benchmarking neural code generation. arXiv preprint arXiv:2208.08227.",
|
| 1540 |
+
"Linzheng Chai, Jian Yang, Tao Sun, Hongcheng Guo, Jiaheng Liu, Bing Wang, Xinnian Liang, Jiaqi Bai, Tongliang Li, Qiyao Peng, and Zhoujun Li. 2024. xcot: Cross-lingual instruction tuning for cross-lingual chain-of-thought reasoning. arXiv preprint arXiv:2401.07037, abs/2401.07037.",
|
| 1541 |
+
"Yekun Chai, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, and Hua Wu. 2023. Ernie-code: Beyond english-centric cross-lingual pretraining for programming languages. In Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pages 10628-10650. Association for Computational Linguistics.",
|
| 1542 |
+
"Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen. 2023. Codet: Code generation with generated tests. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net."
|
| 1543 |
+
],
|
| 1544 |
+
"bbox": [
|
| 1545 |
+
510,
|
| 1546 |
+
85,
|
| 1547 |
+
884,
|
| 1548 |
+
917
|
| 1549 |
+
],
|
| 1550 |
+
"page_idx": 8
|
| 1551 |
+
},
|
| 1552 |
+
{
|
| 1553 |
+
"type": "page_number",
|
| 1554 |
+
"text": "1820",
|
| 1555 |
+
"bbox": [
|
| 1556 |
+
482,
|
| 1557 |
+
928,
|
| 1558 |
+
519,
|
| 1559 |
+
940
|
| 1560 |
+
],
|
| 1561 |
+
"page_idx": 8
|
| 1562 |
+
},
|
| 1563 |
+
{
|
| 1564 |
+
"type": "list",
|
| 1565 |
+
"sub_type": "ref_text",
|
| 1566 |
+
"list_items": [
|
| 1567 |
+
"Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidi Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, abs/2107.03374.",
|
| 1568 |
+
"Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. 2022. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588, abs/2211.12588.",
|
| 1569 |
+
"Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.",
|
| 1570 |
+
"Chris Cummins, Volker Seeker, Dejan Grubisic, Mostafa Elhoushi, Youwei Liang, Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Kim M. Hazelwood, Gabriel Synnaeve, and Hugh Leather. 2023. Large language models for compiler optimization. arXiv preprint arXiv:2309.07062, abs/2309.07062.",
|
| 1571 |
+
"Gautier Dagan, Frank Keller, and Alex Lascarides. 2023. Dynamic planning with a LLM. arXiv preprint arXiv:2308.06391, abs/2308.06391.",
|
| 1572 |
+
"Yinlin Deng, Chunqiu Steven Xia, Chenyuan Yang, Shizhuo Dylan Zhang, Shujing Yang, and Lingming Zhang. 2023. Large language models are edge-case fuzzers: Testing deep learning libraries via fuzzgpt. arXiv preprint arXiv:2304.02014, abs/2304.02014.",
|
| 1573 |
+
"Peng Di, Jianguo Li, Hang Yu, Wei Jiang, Wenting Cai, Yang Cao, Chaoyu Chen, Dajun Chen, Hongwei Chen, Liang Chen, Gang Fan, Jie Gong, Zi Gong, Wen Hu, Tingting Guo, Zhichao Lei, Ting Li, Zheng Li, Ming Liang, Cong Liao, Bingchang Liu, Jiachen Liu, Zhiwei Liu, Shaojun Lu, Min Shen, Guangpei Wang, Huan Wang, Zhi Wang, Zhaogui Xu, Jiawei Yang, Qing Ye, Gehao Zhang, Yu Zhang, Zelin Zhao, Xunjin Zheng, Hailian Zhou, Lifu Zhu, and Xianying Zhu. 2023. Codefuse-13b: A pretrained multilingual code large language model. arXiv preprint arXiv:2310.06266, abs/2310.06266."
|
| 1574 |
+
],
|
| 1575 |
+
"bbox": [
|
| 1576 |
+
115,
|
| 1577 |
+
85,
|
| 1578 |
+
490,
|
| 1579 |
+
917
|
| 1580 |
+
],
|
| 1581 |
+
"page_idx": 9
|
| 1582 |
+
},
|
| 1583 |
+
{
|
| 1584 |
+
"type": "list",
|
| 1585 |
+
"sub_type": "ref_text",
|
| 1586 |
+
"list_items": [
|
| 1587 |
+
"Yihong Dong, Xue Jiang, Zhi Jin, and Ge Li. 2023. Self-collaboration code generation via chatgpt. arXiv preprint arXiv:2304.07590, abs/2304.07590.",
|
| 1588 |
+
"Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. Codebert: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 1536-1547. Association for Computational Linguistics.",
|
| 1589 |
+
"Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida I. Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen tau Yih, Luke Zettlemoyer, and Mike Lewis. 2022. Incoder: A generative model for code infilling and synthesis. arXiv preprint arXiv:2204.05999, abs/2204.05999.",
|
| 1590 |
+
"Shiwei Gan, Yafeng Yin, Zhiwei Jiang, Lei Xie, and Sanglu Lu. 2021. Skeleton-aware neural sign language translation. In MM '21: ACM Multimedia Conference, Virtual Event, China, October 20 - 24, 2021, pages 4353-4361. ACM.",
|
| 1591 |
+
"Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Y Wu, YK Li, et al. 2024a. Deepseek-coder: When the large language model meets programming-the rise of code intelligence. arXiv preprint arXiv:2401.14196.",
|
| 1592 |
+
"Hongcheng Guo, Jian Yang, Jiaheng Liu, Liquan Yang, Linzheng Chai, Jiaqi Bai, Junran Peng, Xiaorong Hu, Chao Chen, Dongfeng Zhang, Xu Shi, Tieqiao Zheng, Liangfan Zheng, Bo Zhang, Ke Xu, and Zhoujun Li. 2023. OWL: A large language model for IT operations. CoRR, abs/2309.09298.",
|
| 1593 |
+
"Hongcheng Guo, Wei Zhang, Anjie Le, Jian Yang, Jiaheng Liu, Zhoujun Li, Tieqiao Zheng, Shi Xu, Runqiang Zang, Liangfan Zheng, et al. 2024b. Lemur: Log parsing with entropy sampling and chain-of-thought merging. arXiv preprint arXiv:2402.18205.",
|
| 1594 |
+
"Hangyuan Ji, Jian Yang, Linzheng Chai, Chaoren Wei, Liquan Yang, Yunlong Duan, Yunli Wang, Tianzhen Sun, Hongcheng Guo, Tongliang Li, et al. 2024. Sevenllm: Benchmarking, eliciting, and enhancing abilities of large language models in cyber threat intelligence. arXiv preprint arXiv:2405.03446.",
|
| 1595 |
+
"Ellen Jiang, Kristen Olson, Edwin Toh, Alejandra Molina, Aaron Donsbach, Michael Terry, and Carrie J. Cai. 2022. Promptmaker: Prompt-based prototyping with large language models. In CHI '22: CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April 2022 - 5 May 2022, Extended Abstracts, pages 35:1-35:8. ACM.",
|
| 1596 |
+
"Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations,"
|
| 1597 |
+
],
|
| 1598 |
+
"bbox": [
|
| 1599 |
+
510,
|
| 1600 |
+
85,
|
| 1601 |
+
882,
|
| 1602 |
+
917
|
| 1603 |
+
],
|
| 1604 |
+
"page_idx": 9
|
| 1605 |
+
},
|
| 1606 |
+
{
|
| 1607 |
+
"type": "page_number",
|
| 1608 |
+
"text": "1821",
|
| 1609 |
+
"bbox": [
|
| 1610 |
+
482,
|
| 1611 |
+
928,
|
| 1612 |
+
517,
|
| 1613 |
+
940
|
| 1614 |
+
],
|
| 1615 |
+
"page_idx": 9
|
| 1616 |
+
},
|
| 1617 |
+
{
|
| 1618 |
+
"type": "list",
|
| 1619 |
+
"sub_type": "ref_text",
|
| 1620 |
+
"list_items": [
|
| 1621 |
+
"ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
|
| 1622 |
+
"Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.",
|
| 1623 |
+
"Yuhang Lai, Chengxi Li, Yiming Wang, Tianyi Zhang, Ruiqi Zhong, Luke Zettlemoyer, Wen-Tau Yih, Daniel Fried, Sida I. Wang, and Tao Yu. 2023. DS1000: A natural and reliable benchmark for data science code generation. In International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 18319-18345. PMLR.",
|
| 1624 |
+
"Bin Lei, Pei-Hung Lin, Chunhua Liao, and Caiwen Ding. 2023. Boosting logical reasoning in large language models through a new framework: The graph of thought. arXiv preprint arXiv:2308.08614, abs/2308.08614.",
|
| 1625 |
+
"Jia Li, Ge Li, Yongmin Li, and Zhi Jin. 2023a. Structured chain-of-thought prompting for code generation. arXiv preprint arXiv:2305.06599.",
|
| 1626 |
+
"Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy V, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour MoustafaFahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Munoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. 2023b. StarCoder: May the source be with you! arXiv preprint arXiv:2305.06161, abs/2305.06161.",
|
| 1627 |
+
"Yujia Li, David H. Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, PoSen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando"
|
| 1628 |
+
],
|
| 1629 |
+
"bbox": [
|
| 1630 |
+
115,
|
| 1631 |
+
85,
|
| 1632 |
+
489,
|
| 1633 |
+
917
|
| 1634 |
+
],
|
| 1635 |
+
"page_idx": 10
|
| 1636 |
+
},
|
| 1637 |
+
{
|
| 1638 |
+
"type": "list",
|
| 1639 |
+
"sub_type": "ref_text",
|
| 1640 |
+
"list_items": [
|
| 1641 |
+
"de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. 2022. Competition-level code generation with AlphaCode. arXiv preprint arXiv:2203.07814, abs/2203.07814.",
|
| 1642 |
+
"Yaobo Liang, Quanzhi Zhu, Junhe Zhao, and Nan Duan. 2024. Machine-created universal language for cross-lingual transfer. In Thirty-Eighth AAAI Conference on Artificial Intelligence, AAAI 2024, Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence, IAAI 2024, Fourteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2014, February 20-27, 2024, Vancouver, Canada, pages 18617-18625. AAAI Press.",
|
| 1643 |
+
"Chao Liu, Xuanlin Bao, Hongyu Zhang, Neng Zhang, Haibo Hu, Xiaohong Zhang, and Meng Yan. 2023a. Improving chatgpt prompt for code generation. arXiv preprint arXiv:2305.08360, abs/2305.08360.",
|
| 1644 |
+
"Chenxiao Liu and Xiaojun Wan. 2021. CodeQA: A question answering dataset for source code comprehension. In Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 16-20 November, 2021, pages 2618-2632. Association for Computational Linguistics.",
|
| 1645 |
+
"Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. 2023b. Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation. arXiv preprint arXiv:2305.01210, abs/2305.01210.",
|
| 1646 |
+
"Yue Liu, Thanh Le-Cong, Ratnadira Widyasari, Chakkrit Tantithamthavorn, Li Li, Xuan-Bach Dinh Le, and David Lo. 2023c. Refining ChatGPT-generated code: Characterizing and mitigating code quality issues. arXiv preprint arXiv:2307.12596, abs/2307.12596.",
|
| 1647 |
+
"Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xi-ubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023. WizardCoder: Empowering code large language models with evol-instruct. arXiv preprint arXiv:2306.08568.",
|
| 1648 |
+
"Mayank Mishra, Prince Kumar, Riyadh Bhat, Rudra Murthy V, Danish Contractor, and Srikanth Tamilselvam. 2023. Prompting with pseudo-code instructions. arXiv preprint arXiv:2305.11790, abs/2305.11790.",
|
| 1649 |
+
"Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, and Shayne Longpre. 2023. OctoPack: Instruction tuning code large language models. arXiv preprint arXiv:2308.07124, abs/2308.07124.",
|
| 1650 |
+
"Yusuke Oda, Hiroyuki Fudaba, Graham Neubig, Hideaki Hata, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Learning to generate"
|
| 1651 |
+
],
|
| 1652 |
+
"bbox": [
|
| 1653 |
+
510,
|
| 1654 |
+
85,
|
| 1655 |
+
882,
|
| 1656 |
+
917
|
| 1657 |
+
],
|
| 1658 |
+
"page_idx": 10
|
| 1659 |
+
},
|
| 1660 |
+
{
|
| 1661 |
+
"type": "page_number",
|
| 1662 |
+
"text": "1822",
|
| 1663 |
+
"bbox": [
|
| 1664 |
+
482,
|
| 1665 |
+
928,
|
| 1666 |
+
519,
|
| 1667 |
+
940
|
| 1668 |
+
],
|
| 1669 |
+
"page_idx": 10
|
| 1670 |
+
},
|
| 1671 |
+
{
|
| 1672 |
+
"type": "list",
|
| 1673 |
+
"sub_type": "ref_text",
|
| 1674 |
+
"list_items": [
|
| 1675 |
+
"pseudo-code from source code using statistical machine translation (T). In 30th IEEE/ACM International Conference on Automated Software Engineering, ASE 2015, Lincoln, NE, USA, November 9-13, 2015, pages 574-584. IEEE Computer Society.",
|
| 1676 |
+
"OpenAI. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.",
|
| 1677 |
+
"Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.",
|
| 1678 |
+
"Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. OpenAI blog.",
|
| 1679 |
+
"Laria Reynolds and Kyle McDonell. 2021. Prompt programming for large language models: Beyond the few-shot paradigm. In *CHI '21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama Japan, May 8-13, 2021, Extended Abstracts*, pages 314:1-314:7. ACM.",
|
| 1680 |
+
"Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. 2023. Code Llama: Open foundation models for code. arXiv preprint arXiv:2308.12950.",
|
| 1681 |
+
"Marc Szafraniec, Baptiste Rozière, Hugh Leather, Patrick Labatut, François Charton, and Gabriel Synnaeve. 2023. Code translation with compiler representations. In *The Eleventh International Conference on Learning Representations*, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.",
|
| 1682 |
+
"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.",
|
| 1683 |
+
"Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH Hoi. 2021. CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. arXiv preprint arXiv:2109.00859.",
|
| 1684 |
+
"Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022a. Finetuned language models are zero-shot learners. In The Tenth"
|
| 1685 |
+
],
|
| 1686 |
+
"bbox": [
|
| 1687 |
+
115,
|
| 1688 |
+
85,
|
| 1689 |
+
489,
|
| 1690 |
+
917
|
| 1691 |
+
],
|
| 1692 |
+
"page_idx": 11
|
| 1693 |
+
},
|
| 1694 |
+
{
|
| 1695 |
+
"type": "list",
|
| 1696 |
+
"sub_type": "ref_text",
|
| 1697 |
+
"list_items": [
|
| 1698 |
+
"International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.",
|
| 1699 |
+
"Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022b. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.",
|
| 1700 |
+
"Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, and Lingming Zhang. 2023. Magicoder: Source code is all you need. arXiv preprint arXiv:2312.02120, abs/2312.02120.",
|
| 1701 |
+
"Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. 2023. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244.",
|
| 1702 |
+
"Weixiang Yan, Yuchen Tian, Yunzhe Li, Qian Chen, and Wen Wang. 2023. CodetransOcean: A comprehensive multilingual benchmark for code translation. In *Findings of the Association for Computational Linguistics: EMNLP* 2023, Singapore, December 6-10, 2023, pages 5067-5089. Association for Computational Linguistics.",
|
| 1703 |
+
"Jian Yang, Hongcheng Guo, Yuwei Yin, Jiaqi Bai, Bing Wang, Jiaheng Liu, Xinnian Liang, Linzheng Chai, Liqu Yang, and Zhoujun Li. 2024. m3p: Towards multimodal multilingual translation with multimodal prompt. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC/COLING 2024, 20-25 May, 2024, Torino, Italy, pages 10858-10871. ELRA and ICCL.",
|
| 1704 |
+
"Jian Yang, Shuming Ma, Dongdong Zhang, Zhoujun Li, and Ming Zhou. 2020a. Improving neural machine translation with soft template prediction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5979-5989. Association for Computational Linguistics.",
|
| 1705 |
+
"Jian Yang, Shuming Ma, Dongdong Zhang, Shuangzhi Wu, Zhoujun Li, and Ming Zhou. 2020b. Alternating language modeling for cross-lingual pre-training. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9386-9393. AAAI Press.",
|
| 1706 |
+
"Jian Yang, Yuwei Yin, Shuming Ma, Dongdong Zhang, Shuangzhi Wu, Hongcheng Guo, Zhoujun Li, and Furu Wei. 2022. UM4: unified multilingual multiple teacher-student model for zero-resource neural machine translation. In Proceedings of the Thirty-First"
|
| 1707 |
+
],
|
| 1708 |
+
"bbox": [
|
| 1709 |
+
510,
|
| 1710 |
+
85,
|
| 1711 |
+
882,
|
| 1712 |
+
917
|
| 1713 |
+
],
|
| 1714 |
+
"page_idx": 11
|
| 1715 |
+
},
|
| 1716 |
+
{
|
| 1717 |
+
"type": "page_number",
|
| 1718 |
+
"text": "1823",
|
| 1719 |
+
"bbox": [
|
| 1720 |
+
482,
|
| 1721 |
+
928,
|
| 1722 |
+
519,
|
| 1723 |
+
940
|
| 1724 |
+
],
|
| 1725 |
+
"page_idx": 11
|
| 1726 |
+
},
|
| 1727 |
+
{
|
| 1728 |
+
"type": "list",
|
| 1729 |
+
"sub_type": "ref_text",
|
| 1730 |
+
"list_items": [
|
| 1731 |
+
"International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, pages 4454-4460. ijcai.org.",
|
| 1732 |
+
"Ze Yang, Wei Wu, Jian Yang, Can Xu, and Zhoujun Li. 2019. Low-resource response generation with template prior. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 1886-1897. Association for Computational Linguistics.",
|
| 1733 |
+
"Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, abs/2305.10601.",
|
| 1734 |
+
"Zhaojian Yu, Xin Zhang, Ning Shang, Yangyu Huang, Can Xu, Yishujie Zhao, Wenxiang Hu, and Qiu feng Yin. 2023. Wavecoder: Widespread and versatile enhanced instruction tuning with refined data generation. arXiv preprint arXiv:2312.14187, abs/2312.14187.",
|
| 1735 |
+
"Daoguang Zan, Ailun Yu, Bo Shen, Jiaxin Zhang, Taihong Chen, Bing Geng, Bei Chen, Jichuan Ji, Yafen Yao, Yongji Wang, and Qianxiang Wang. 2023. Can programming languages boost each other via instruction tuning? arXiv preprint arXiv:2308.16824, abs/2308.16824.",
|
| 1736 |
+
"Fengji Zhang, Bei Chen, Yue Zhang, Jin Liu, Daoguang Zan, Yi Mao, Jian-Guang Lou, and Weizhu Chen. 2023. RepoCoder: Repository-level code completion through iterative retrieval and generation. arXiv preprint arXiv:2303.12570, abs/2303.12570.",
|
| 1737 |
+
"Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen, Andi Wang, Yang Li, Teng Su, Zhilin Yang, and Jie Tang. 2023. Codegeex: A pre-trained model for code generation with multilingual evaluations on humaneval-x. arXiv preprint arXiv:2303.17568, abs/2303.17568."
|
| 1738 |
+
],
|
| 1739 |
+
"bbox": [
|
| 1740 |
+
115,
|
| 1741 |
+
85,
|
| 1742 |
+
489,
|
| 1743 |
+
658
|
| 1744 |
+
],
|
| 1745 |
+
"page_idx": 12
|
| 1746 |
+
},
|
| 1747 |
+
{
|
| 1748 |
+
"type": "page_number",
|
| 1749 |
+
"text": "1824",
|
| 1750 |
+
"bbox": [
|
| 1751 |
+
482,
|
| 1752 |
+
928,
|
| 1753 |
+
519,
|
| 1754 |
+
940
|
| 1755 |
+
],
|
| 1756 |
+
"page_idx": 12
|
| 1757 |
+
}
|
| 1758 |
+
]
|
2024/UniCoder_ Scaling Code Large Language Model via Universal Code/6625637b-146d-4630-b379-0fc1c52e41b0_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/UniCoder_ Scaling Code Large Language Model via Universal Code/6625637b-146d-4630-b379-0fc1c52e41b0_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:a4dc433a133462ad25789a7940c04541afc2bcb94210d09d72ab15a5b6534ab7
|
| 3 |
+
size 1887555
|
2024/UniCoder_ Scaling Code Large Language Model via Universal Code/full.md
ADDED
|
@@ -0,0 +1,356 @@
|
|
| 1 |
+
# UNICODER: Scaling Code Large Language Model via Universal Code
|
| 2 |
+
|
| 3 |
+
Tao Sun $^{1}$ , Linzheng Chai $^{1*}$ , Jian Yang $^{1*†}$ , Yuwei Yin $^{2}$ , Hongcheng Guo $^{1}$ , Jiaheng Liu $^{1}$ , Bing Wang $^{1}$ , Liqun Yang $^{1}$ , Zhoujun Li $^{1}$
|
| 4 |
+
|
| 5 |
+
$^{1}$ State Key Laboratory of Complex & Critical Software Environment, Beihang University; $^{2}$ Department of Computer Science, University of British Columbia {buaast, challenging, jiaya, hongchengguo} @ buaa.edu.cn; {liujiaheng, bingwang, lqyang, lizj} @ buaa.edu.cn; yuweiyin@cs.ubc.ca
|
| 6 |
+
|
| 7 |
+
# Abstract
|
| 8 |
+
|
| 9 |
+
Intermediate reasoning or acting steps have successfully improved large language models (LLMs) for handling various downstream natural language processing (NLP) tasks. When applying LLMs for code generation, recent works mainly focus on directing the models to articulate intermediate natural-language reasoning steps, as in chain-of-thought (CoT) prompting, and then output code conditioned on the natural language or other structured intermediate steps. However, such output is not suitable for code translation or generation tasks since the standard CoT has different logical structures and forms of expression from the code. In this work, we introduce the universal code (UniCode) as the intermediate representation. It is a description of algorithm steps using a mix of conventions of programming languages, such as assignment operators, conditional operators, and loops. Hence, we collect an instruction dataset UNICODER-INSTRUCT to train our model UNICODER on multi-task learning objectives. UNICODER-INSTRUCT comprises natural-language questions, code solutions, and the corresponding universal code. The alignment between the intermediate universal code representation and the final code solution significantly improves the quality of the generated code. The experimental results demonstrate that UNICODER with the universal code significantly outperforms the previous prompting methods by a large margin, showcasing the effectiveness of the structural clues in pseudo-code.<sup>1</sup>
|
| 10 |
+
|
| 11 |
+
# 1 Introduction
|
| 12 |
+
|
| 13 |
+
The field of code translation and generation has advanced significantly (Szafraniec et al., 2023; Yan et al., 2023) with the advent of code-specific large language models (LLMs). Code LLMs, such as StarCoder (Li et al., 2023b) and Code
|
| 14 |
+
|
| 15 |
+

|
| 16 |
+
Figure 1: An example of UNICODER. The Code LLM solves the code generation question by "translating" the pseudocode description (Universal Code) into executable code of the target programming language.
|
| 17 |
+
|
| 18 |
+
Llama (Rozière et al., 2023), are capable of generating executable code by analyzing natural language prompts. Chain-of-thought (CoT) prompting (Wei et al., 2022b) has emerged as the leading technique in enhancing LLMs, where the intermediate steps provide a structured pathway from the problem statement to the solution, effectively mirroring the human problem-solving process.
|
| 19 |
+
|
| 20 |
+
Considering the low accuracy of CoT in code generation, structured CoT (SCoT) (Li et al., 2023a) is proposed to minimize the gap between the intermediate steps and the generated code. More intuitively, using a universal code as the intermediate representation to handle multiple programming languages (PL) is promising. Here, universal code is a blueprint for implementing an algorithm, which helps to make the design of algorithms logically clear and readily comprehensible. Moreover, it is universal across different programming languages (PL-agnostic) since it typically does not follow
|
| 21 |
+
|
| 22 |
+
specific syntax and omits execution details. Yet, how the universal code is used for code translation and generation in multilingual scenarios remains underexplored.
|
| 23 |
+
|
| 24 |
+
In this work, we scale up the code LLMs to support multiple programming languages via the universal code (UniCode), which is used as an efficient and language-independent intermediate representation of the key algorithm principles. Specifically, we first define UniCode by specifying grammar rules and providing paradigms, followed by prompting GPT-4 (OpenAI, 2023) to create an instruction dataset UNICODER-INSTRUCT comprising natural-language questions, code solutions, and the corresponding universal code, as shown in Figure 1. Then, the UNICODER model is built by performing instruction tuning (Wei et al., 2022a) on multi-task learning objectives, including zero-shot question-answer generation (question $\rightarrow$ code), question-universal-code generation (question $\rightarrow$ UniCode), universal-code-solution translation (UniCode $\rightarrow$ code), and Universal-code-of-Thought (UoT) objectives. In UoT, the model is required to generate the universal code before the executable code.
|
| 25 |
+
|
| 26 |
+
UNICODER is evaluated on the Python benchmarks (HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021)) and the extended multilingual benchmark MultiPL-E. The results demonstrate that UNICODER consistently achieves state-of-the-art performance across all languages, notably surpassing the previous baselines. Furthermore, the ablation study verifies the efficacy of the proposed method, and extra discussions provide insights into the effect of our method. The contributions are summarized as follows:
|
| 27 |
+
|
| 28 |
+
- We introduce the universal code UniCode, which is agnostic to programming languages, allowing LLMs to grasp the essence of algorithms step by step. In addition, the instruction dataset UNICODER-INSTRUCT is collected and provided for follow-up research.
|
| 29 |
+
- We propose UNICODER, a code generation method that uses multi-task learning objectives to fine-tune the code LLMs with the help of UniCode. The objectives include question-answer generation (QA), question-universal-code generation (QP), universal-code-answer translation (PA), and Universal-code-of-Thought (UoT).
|
| 30 |
+
- As extensive experiments show, our method
|
| 31 |
+
|
| 32 |
+

|
| 33 |
+
Figure 2: Definition of the universal code.
|
| 34 |
+
|
| 35 |
+
UNICODER consistently outperforms the previous baselines on different benchmarks, including HumanEval, MBPP, and MultiPL-E. To further verify the effectiveness of the universal code, we propose UNICODER-BENCH to test the capabilities of code LLMs.
|
| 36 |
+
|
| 37 |
+
# 2 UNICODER-INSTRUCT
|
| 38 |
+
|
| 39 |
+
Definition of Universal Code. Universal code is designed for expressing algorithms in a form that is easily understood by humans, blending programming language syntax with natural language descriptions and mathematical notation to outline the steps of an algorithm without the complexity of full coding details. It omits machine-specific implementations to focus on the core logic, making it a popular choice for documentation in educational materials and the preliminary design phases of software development. By abstracting away from the intricacies of actual code, pseudocode facilitates clear communication of algorithmic concepts across various programming environments. The definition of the universal code, as shown in Figure 2, is based on the following principles:
|
| 40 |
+
|
| 41 |
+
- Comments: Provide explanations and context for code segments, making it easier for others to understand the intent and functionality.
|
| 42 |
+
|
| 43 |
+
{Definition of Universal Code}
|
| 44 |
+
|
| 45 |
+
Question
|
| 46 |
+
|
| 47 |
+
{Question}
|
| 48 |
+
|
| 49 |
+
Response
|
| 50 |
+
|
| 51 |
+
{Answer}
|
| 52 |
+
|
| 53 |
+
Your Task
|
| 54 |
+
|
| 55 |
+
Please combine the above Question and
|
| 56 |
+
|
| 57 |
+
Response to comply with the pseudocode standard to write the corresponding
|
| 58 |
+
|
| 59 |
+
pseudocode of solution. Adopt a meticulous methodology, breaking down the generation process into manageable steps. Just output the generated pseudocode for the solution and do not include the Question and Response in the output.
|
| 60 |
+
|
| 61 |
+
The output format is as follows, Use \*\*pseudocode to put the generated pseudocode in markdown quotes:
|
| 62 |
+
|
| 63 |
+
|
| 70 |
+
|
| 71 |
+
{Offers a pseudocode version of the solution.}
|
| 72 |
+
|
| 73 |
+
|
| 74 |
+
|
| 75 |
+
Figure 3: Prompt of generating UniCode.
|
| 76 |
+
|
| 77 |
+
- Variables: Enhance code readability and maintainability by using meaningful names that convey the purpose of the variables without relying on data type specifications.
|
| 78 |
+
- Input/Output: Simplify the interaction with data entering and leaving the system, ensuring these operations are clear and easy to trace.
|
| 79 |
+
- Conditionals: Clarify decision-making processes within the code by using structured and indented conditional statements that define clear execution paths.
|
| 80 |
+
- Loops: Facilitate the repetition of code blocks in a controlled manner, with clearly defined start and end conditions, making the iterative processes understandable.
|
| 81 |
+
- Functions/Procedures: Increase modularity and reusability by naming functions and procedures descriptively, and by using parameters effectively to encapsulate functionality.
|
| 82 |
+
- Formatting: Improve the overall visual organization of the code by applying consistent indentation, which helps in delineating hierarchical structures and logical groupings within the code.
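To make these principles concrete, the following is a minimal, invented illustration (not taken from UNICODER-INSTRUCT): a short universal-code sketch that follows the conventions above, together with one possible Python realisation of it. The task, names, and keywords are assumptions for this example only.

```python
# A toy universal-code sketch: named variables, an explicit loop and conditional,
# and natural-language comments, with no language-specific implementation details.
UNIVERSAL_CODE_EXAMPLE = """
FUNCTION count_even(numbers):
    # Count how many values in the input list are even.
    SET total TO 0
    FOR EACH value IN numbers:
        IF value MOD 2 EQUALS 0 THEN
            SET total TO total + 1
    RETURN total
"""

def count_even(numbers):
    """One possible executable translation of the sketch above."""
    total = 0
    for value in numbers:
        if value % 2 == 0:
            total += 1
    return total

assert count_even([1, 2, 3, 4]) == 2
```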
|
| 83 |
+
|
| 84 |
+
Construction From Instruction Dataset. For a programming language $L$ , given the existing code
|
| 85 |
+
|
| 86 |
+
instruction pair $(q_{\alpha},a_{\alpha})\in D_s^L$, where $q_{\alpha}$ and $a_{\alpha}$ are the question and answer from $D_{s}^{L}$, we create the universal code instruction dataset $D_{u_{\alpha}}^{L}$ by prompting LLMs to generate the universal code $p_{\alpha}$ and then adding $(q_{\alpha},a_{\alpha},p_{\alpha})$ into $D_{u_{\alpha}}^{L}$. Figure 2 shows the definition of the universal code and Figure 3 is the prompt for LLMs to generate UniCode. {Definition of Universal Code}, {Question}, and {Answer} denote the slots for the definition of the universal code, the question of the instruction data $q_{\alpha}$, and the answer of the instruction $a_{\alpha}$, respectively. Given $K$ different programming languages $L_{all} = \{L_k\}_{k = 1}^K$, the multilingual programming instruction dataset with the universal code $D_{u_{\alpha}} = \{D_{u_{\alpha}}^{L_k}\}_{k = 1}^K$ is created for supervised fine-tuning (SFT) (Ouyang et al., 2022). In this work, we adopt the open-source instruction dataset.
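A hedged sketch of this construction step is shown below. `call_llm` is a hypothetical helper standing in for the GPT-4 API call with the Figure 3 prompt, and the prompt wording is abbreviated, so this illustrates the data flow rather than the released pipeline.

```python
def build_unicode_instructions(d_s, unicode_definition, call_llm):
    """Turn (question, answer) pairs from D_s into (q, a, p) triplets of D_u_alpha."""
    d_u_alpha = []
    for question, answer in d_s:
        # Fill the Figure 3 template: definition, question, and reference answer.
        prompt = (
            f"{unicode_definition}\n\nQuestion\n{question}\n\n"
            f"Response\n{answer}\n\n"
            "Please write the corresponding pseudocode of the solution."
        )
        universal_code = call_llm(prompt)  # the generated p_alpha
        d_u_alpha.append((question, answer, universal_code))
    return d_u_alpha
```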
|
| 87 |
+
|
| 88 |
+
Construction From Code Snippets. For the unsupervised data (code snippets) widely available on many websites (e.g., GitHub), we also construct the instruction dataset with the universal code from raw code snippets. Specifically, we ask the LLM to generate the question $q_{\beta}$ and the corresponding code answer $a_{\beta}$ based on the original code snippet $c$ using the prompt "Please generate the self-contained question and answer based on the given code snippet". Then, we generate UniCode $p_{\beta}$ and construct $(q_{\beta}, a_{\beta}, p_{\beta})$ triplets in the same way as in the previous paragraph. In addition, an LLM scorer is applied to filter out the low-quality $(q_{\beta}, a_{\beta}, p_{\beta})$ triplets. Therefore, given raw code snippets of different programming languages $L_{k} \in \{L_{k}\}_{k=1}^{K}$, we can construct the instruction dataset with the universal code $D_{u_{\beta}} = \{D_{u_{\beta}}^{L_{k}}\}_{k=1}^{K}$ directly from such unsupervised data. Finally, we combine these two instruction datasets to obtain $D_{u} = D_{u_{\alpha}} \cup D_{u_{\beta}}$, where $D_{u}^{L_{k}} = D_{u_{\alpha}}^{L_{k}} \cup D_{u_{\beta}}^{L_{k}}$ for each programming language $L_{k} \in L_{all}$.
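The snippet-based construction can be sketched in the same hedged style. `call_llm` and `score_triplet` are hypothetical helpers (the LLM generator and the LLM scorer), and the score threshold is an assumed hyperparameter, not a value reported in the paper.

```python
def build_from_snippets(code_snippets, call_llm, score_triplet, min_score=7.0):
    """Create (q, a, p) triplets directly from raw code snippets, then filter."""
    d_u_beta = []
    for code in code_snippets:
        qa = call_llm(
            "Please generate the self-contained question and answer "
            "based on the given code snippet:\n" + code
        )
        question, answer = qa["question"], qa["answer"]
        universal_code = call_llm(
            "Write the pseudocode of the solution for:\n" + question + "\n" + answer
        )
        # Keep only triplets the LLM scorer judges to be of sufficient quality.
        if score_triplet(question, answer, universal_code) >= min_score:
            d_u_beta.append((question, answer, universal_code))
    return d_u_beta
```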
|
| 89 |
+
|
| 90 |
+
Evaluation Task for Universal Code. To test the capability of the LLMs in generating UniCode from questions and translating UniCode into answers, we design a code reconstruction task for evaluation. Given the code snippet $c$ , we require the LLM to generate UniCode $p$ and then translate it into the code $c'$ . The evaluation metric is not the similarity between $c$ and $c'$ but whether the restored code $c'$ can pass the test cases. We expand the HumanEval and MBPP datasets to create our benchmark UNICODER-BENCH comprising 164 HumanEval samples and 500 MBPP test samples.
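A minimal sketch of this code-reconstruction check, assuming hypothetical wrappers `generate_unicode`, `translate_unicode`, and `run_tests` around the code LLM and the benchmark test harness:

```python
def reconstruction_passes(code, test_cases, generate_unicode, translate_unicode, run_tests):
    """Code -> UniCode -> code', judged by test execution rather than text similarity."""
    universal_code = generate_unicode(code)             # summarise the snippet as UniCode
    restored_code = translate_unicode(universal_code)   # translate UniCode back into code
    return run_tests(restored_code, test_cases)         # True iff c' passes all test cases
```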
|
| 91 |
+
|
| 92 |
+

|
| 93 |
+
Figure 4: Overview of UNICODER. (a) The function of the universal code UniCode; (b) The framework of our method UNICODER. With the universal code as the intermediate representation, our proposed framework can support code generation, code translation, and code summarization. In (a), the LLM encodes the code snippets of multilingual programming languages or the problem description questions into UniCode. Then UniCode is translated into the target output, i.e., the executable code of multilingual programming languages with a descriptive code summarization. In (b), we first ask the LLM to generate UniCode with few-shot prompts. In the second stage, the instruction dataset, containing questions, answers, and UniCode, is fed into the code LLM for fine-tuning.
|
| 94 |
+
|
| 95 |
+

|
| 96 |
+
|
| 97 |
+
# 3 UNICODER
|
| 98 |
+
|
| 99 |
+
# 3.1 Model Overview
|
| 100 |
+
|
| 101 |
+
In Figure 4, we first define the concept of the universal code with the essential components and then prompt the LLM to generate UniCode $p$ based on the existing instruction data (questions $q$ and answers $a$ ) and the raw code snippets $c$ . UniCode is regarded as the intermediate representation for different tasks, including code generation, code translation, and code summarization. Our proposed model UNICODER is trained on the instruction dataset $D_{u}$ with the multilingual objectives to fully unleash the potential of UniCode.
|
| 102 |
+
|
| 103 |
+
# 3.2 Code LLM with Universal Code
|
| 104 |
+
|
| 105 |
+
Given the instruction dataset with $K$ multilingual programming languages $D_{u} = \{D_{u}^{L_{k}}\}_{k = 1}^{K}$, the pretrained code LLM $\mathcal{M}$ trained on $D_{u}$ can support Universal-code-of-Thought (UoT), which can be described as:
|
| 106 |
+
|
| 107 |
+
$$
|
| 108 |
+
P(p, a \mid q) = P(p \mid q; \mathcal{M}) \, P(a \mid q, p; \mathcal{M}) \tag{1}
|
| 109 |
+
$$
|
| 110 |
+
|
| 111 |
+
where $q$ (question) and $a$ (answer) are the instruction pair from $D_{u}$ . Given the question $q$ , the code LLM $\mathcal{M}$ first generates UniCode $p$ and then outputs the final answer $a$ , where $p$ provides key algorithm ideas with natural language comments.
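In practice, the factorisation in Equation (1) corresponds to two-stage decoding. The sketch below assumes a hypothetical `generate` wrapper around the fine-tuned model and omits prompt formatting details.

```python
def uot_decode(question, generate):
    """Universal-code-of-Thought decoding under Eq. (1)."""
    universal_code = generate(question)                   # sample p ~ P(p | q; M)
    answer = generate(question + "\n" + universal_code)   # sample a ~ P(a | q, p; M)
    return universal_code, answer
```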
|
| 112 |
+
|
| 113 |
+
# 3.3 Multi-task Supervised Fine-tuning
|
| 114 |
+
|
| 115 |
+
To fully unleash the potential of the UniCode, we design multiple objectives to enhance the understanding and generation capability of code LLM.
|
| 116 |
+
|
| 117 |
+
# Multi-task Fine-tuning.
|
| 118 |
+
|
| 119 |
+
$$
|
| 120 |
+
\mathcal{L}_{all} = \mathcal{L}_{qa} + \mathcal{L}_{qp} + \mathcal{L}_{pa} + \mathcal{L}_{uot} \tag{2}
|
| 121 |
+
$$
|
| 122 |
+
|
| 123 |
+
where $\mathcal{L}_{qa}$ is the question-answer generation objective, $\mathcal{L}_{qp}$ is the question-universal-code generation objective, $\mathcal{L}_{pa}$ is the universal-code-answer translation objective, and $\mathcal{L}_{uot}$ is the Universal-code-of-Thought (UoT) objective.
|
| 124 |
+
|
| 125 |
+
Here, we introduce all four training objectives. For all the following objectives, the multilingual corpora $D_{u} = \{D_{u}^{L_{k}}\}_{k = 1}^{K}$ are given. $\mathcal{M}$ is the code LLM and $K$ is the number of programming languages.
|
| 126 |
+
|
| 127 |
+
Question-Answer Objective. The training objective $\mathcal{L}_{qa}$ of the standard instruction fine-tuning can be described as:
|
| 128 |
+
|
| 129 |
+
$$
|
| 130 |
+
\mathcal{L}_{qa} = -\sum_{k=1}^{K} \mathbb{E}_{q, a \sim D_{u}^{L_{k}}} \left[ \log P(a \mid q; \mathcal{M}) \right] \tag{3}
|
| 131 |
+
$$
|
| 132 |
+
|
| 133 |
+
where $q$ and $a$ are the question and answer pair.
|
| 134 |
+
|
| 135 |
+
Question-Universal-Code Objective. The training objective $\mathcal{L}_{qp}$ of the auxiliary universal code
|
| 136 |
+
|
| 137 |
+
generation task can be described as:
|
| 138 |
+
|
| 139 |
+
$$
|
| 140 |
+
\mathcal{L}_{qp} = -\sum_{k=1}^{K} \mathbb{E}_{q, p \sim D_{u}^{L_{k}}} \left[ \log P(p \mid q; \mathcal{M}) \right] \tag{4}
|
| 141 |
+
$$
|
| 142 |
+
|
| 143 |
+
where $q$ and $p$ are the question and UniCode.
|
| 144 |
+
|
| 145 |
+
Universal-Code-Answer Objective. The training objective $\mathcal{L}_{pa}$ of generating the executable code answer from UniCode can be described as:
|
| 146 |
+
|
| 147 |
+
$$
|
| 148 |
+
\mathcal{L}_{pa} = -\sum_{k=1}^{K} \mathbb{E}_{p, a \sim D_{u}^{L_{k}}} \left[ \log P(a \mid p; \mathcal{M}) \right] \tag{5}
|
| 149 |
+
$$
|
| 150 |
+
|
| 151 |
+
where $p$ and $a$ are UniCode and the answer.
|
| 152 |
+
|
| 153 |
+
Universal-Code-of-Thought Objective. The training objective $\mathcal{L}_{\text {uot }}$ of generating UniCode and then the executable code answer can be described as:
|
| 154 |
+
|
| 155 |
+
$$
|
| 156 |
+
\mathcal{L}_{uot} = -\sum_{k=1}^{K} \mathbb{E}_{q, p, a \sim D_{u}^{L_{k}}} \left[ \log P(p, a \mid q; \mathcal{M}) \right] \tag{6}
|
| 157 |
+
$$
|
| 158 |
+
|
| 159 |
+
where $q$ , $a$ , and $p$ are the question, answer, and UniCode, respectively.
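One way to realise Equations (2)-(6) in a standard SFT pipeline is to expand each $(q, p, a)$ triple into four prompt-target examples and sum their language-modeling losses. The sketch below is an assumed implementation, with `lm_loss` standing for next-token cross-entropy computed on the target span only.

```python
def multitask_examples(q, p, a):
    """Expand one UNICODER-INSTRUCT triple into the four training examples."""
    return [
        (q, a),             # L_qa : question       -> answer            (Eq. 3)
        (q, p),             # L_qp : question       -> universal code    (Eq. 4)
        (p, a),             # L_pa : universal code -> answer            (Eq. 5)
        (q, p + "\n" + a),  # L_uot: question       -> UniCode + answer  (Eq. 6)
    ]

def total_loss(batch, lm_loss):
    """Eq. (2): the overall objective sums the four per-task losses over the batch."""
    return sum(
        lm_loss(prompt, target)
        for q, p, a in batch
        for prompt, target in multitask_examples(q, p, a)
    )
```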
|
| 160 |
+
|
| 161 |
+
# 4 Experimental Setup
|
| 162 |
+
|
| 163 |
+
# 4.1 Instruction Dataset
|
| 164 |
+
|
| 165 |
+
GPT-4 (gpt-4-1106-preview) (OpenAI, 2023) is used as the foundation model to generate the UNICODER-INSTRUCT. We randomly extract code snippets within 1024 tokens from the StarCoder dataset (Li et al., 2023b) and let GPT-4 summarize the code snippets as the universal code. Based on each code snippet and the corresponding universal code, a self-contained coding problem with a correct solution is created.
|
| 166 |
+
|
| 167 |
+
# 4.2 Baselines
|
| 168 |
+
|
| 169 |
+
Proprietary Models. Based on a neural architecture known as generative pre-trained Transformers (GPT) (Vaswani et al., 2017; Radford et al., 2018), GPT-3.5 and GPT-4 are LLMs trained on massive datasets of text, code, math equations, and more. They are also trained to follow instructions (Ouyang et al., 2022), which allows them to generate human-like responses. We use GPT-3.5 Turbo and GPT-4 as the proprietary models because they perform excellently in various code understanding and generation tasks.
|
| 170 |
+
|
| 171 |
+
Open-Source Models. To narrow the gap between open-source and closed-source models, a series of open-source models and instruction datasets have been proposed to improve code LLMs and bootstrap their instruction-following ability. StarCoder (Li et al., 2023b), Code Llama (Rozière et al., 2023), and DeepSeek-Coder (Guo et al., 2024a) with different model sizes are introduced as the base models. OctoCoder (Muennighoff et al., 2023), WizardCoder (Luo et al., 2023), Magicoder (Wei et al., 2023), and WaveCoder (Yu et al., 2023) are further fine-tuned on these base code LLMs.
|
| 172 |
+
|
| 173 |
+
Decontamination. We apply data decontamination before training our UNICODER models: the code snippets drawn from the StarCoder data (Li et al., 2023b) are filtered by removing exact matches against HumanEval (Chen et al., 2021), MBPP (Austin et al., 2021), DS-1000 (Lai et al., 2023), and GSM8K (Cobbe et al., 2021).
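A hedged sketch of exact-match decontamination is given below; the whitespace-stripping normalisation is an assumption for illustration, not necessarily the exact matching rule used for UNICODER.

```python
def decontaminate(train_snippets, benchmark_texts):
    """Drop training snippets that exactly match any benchmark problem or solution."""
    normalise = lambda text: "".join(text.split())  # ignore whitespace differences
    banned = {normalise(text) for text in benchmark_texts}
    return [snippet for snippet in train_snippets if normalise(snippet) not in banned]
```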
|
| 174 |
+
|
| 175 |
+
# 4.3 Evaluation Benchmark
|
| 176 |
+
|
| 177 |
+
HumanEval. The HumanEval test set (Chen et al., 2021) is a hand-crafted collection of 164 Python programming problems for testing the abilities of code generation models. Each problem has roughly 9.6 test cases on average to check whether the generated code works as intended. HumanEval has become one of the most popular benchmarks for measuring how well code-writing models perform, making it a key tool in the field of AI and machine learning for coding.
|
| 178 |
+
|
| 179 |
+
MBPP. The MBPP dataset (Austin et al., 2021), comprising approximately 1,000 Python programming challenges sourced from a crowd of contributors, is tailored for beginners in programming, focusing on core principles and the usage of the standard library. The MBPP test set, comprising 500 problems, is used to evaluate the few-shot inference of the code LLMs.
|
| 180 |
+
|
| 181 |
+
MultiPL-E. The MultiPL-E test set (Cassano et al., 2022) translates the original HumanEval test set into 18 other programming languages, e.g., JavaScript, Java, TypeScript, C++, and Rust. We use MultiPL-E to evaluate the multilingual capabilities of the code LLMs.
|
| 182 |
+
|
| 183 |
+
# 4.4 Evaluation Metrics
|
| 184 |
+
|
| 185 |
+
Pass@k. We adopt the Pass@k metric (Chen et al., 2021) to improve the reliability of our evaluation. For each problem, we generate $n$ code samples and count the number of samples that
|
| 186 |
+
|
| 187 |
+
pass all the test cases, denoted as $c$, to compute Pass@k, thereby enhancing the accuracy and consistency of the performance assessment.
|
| 188 |
+
|
| 189 |
+
$$
|
| 190 |
+
\operatorname{Pass}@k = \mathbb{E}\left[ 1 - \frac{\binom{n-c}{k}}{\binom{n}{k}} \right] \tag{7}
|
| 191 |
+
$$
|
| 192 |
+
|
| 193 |
+
where $n$ is the total number of generated samples for each problem, $c$ is the number of generated code snippets that pass all the test cases, and $k$ is the number of samples considered ($n \geq k$).
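Equation (7) is usually computed with the numerically stable product form from Chen et al. (2021); a small sketch:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimate for one problem: n samples drawn, c of them correct."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    # 1 - C(n-c, k) / C(n, k), expanded as a product to avoid huge binomials.
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))
```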
|
| 194 |
+
|
| 195 |
+
# 4.5 Implementation Details
|
| 196 |
+
|
| 197 |
+
We expand the open-source Evol-Instruct dataset evol-code-alpaca-v1 (Xu et al., 2023), with nearly 110K samples, into the instruction dataset with the universal code. For the code snippets collected from starcoderdata$^{2}$, we choose 5K code snippets for each language (Python, JavaScript, C++, Java, Rust, and Go) to construct the synthetic instruction dataset with universal code. Finally, we obtain the instruction dataset UNICODER-INSTRUCT containing nearly 140K training samples. Code-Llama and DeepSeek-Coder-Base are used as the foundational code LLMs for supervised fine-tuning (SFT). We fine-tune these foundation LLMs on nearly 150K samples generated from evol-code-alpaca-v1 and the StarCoder pre-training data. UNICODER is fine-tuned using the Stanford Alpaca$^{3}$ codebase with 8 NVIDIA A100-80GB GPUs. The learning rate first increases to $8 \times 10^{-5}$ over 50 warmup steps and then follows a cosine decay schedule. We adopt the Adam optimizer (Kingma and Ba, 2015) with a global batch size of 128 samples, truncating sequences to 1536 tokens.
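The stated schedule (linear warmup to $8 \times 10^{-5}$ over 50 steps, then cosine decay) can be sketched as follows; `total_steps` is an assumed training length, not a number reported in the paper.

```python
import math

def learning_rate(step: int, total_steps: int, peak: float = 8e-5, warmup: int = 50) -> float:
    """Linear warmup to the peak learning rate, followed by cosine decay to zero."""
    if step < warmup:
        return peak * (step + 1) / warmup
    progress = (step - warmup) / max(1, total_steps - warmup)
    return 0.5 * peak * (1.0 + math.cos(math.pi * min(1.0, progress)))
```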
|
| 198 |
+
|
| 199 |
+
# 5 Results and Discussion
|
| 200 |
+
|
| 201 |
+
# 5.1 Main Results
|
| 202 |
+
|
| 203 |
+
Python Code Generation. Table 1 shows that UNICODER with UoT significantly beats previous strong open-source baselines, closing the gap with GPT-3.5 and GPT-4. Magicoder (Wei et al., 2023) and WaveCoder (Yu et al., 2023) both demonstrate the effectiveness of instruction datasets built from code snippets. Further, with the help of UniCode, UNICODER outperforms WizardCoder, which has 15B parameters and uses the Evol-Instruct technique.
|
| 204 |
+
|
| 205 |
+
$^{2}$ https://huggingface.co/datasets/bigcode/starcoderdata
|
| 206 |
+
$^{3}$ https://github.com/tatsu-lab/stanford_alpaca
|
| 207 |
+
|
| 208 |
+
Multilingual Code Understanding. Table 2 shows that UNICODER significantly outperforms strong baselines Magicoder and WaveCoder, which both leverage the code snippets to construct the instruction dataset. Given the multilingual correct code snippet, the code LLM is tasked to generate an explanation of the code and then regenerate the code only based on its own explanation. For the different backbones (Code Llama and DeepseekCoder), our method beats most previous methods, especially in other languages, which demonstrates that UNICODER-INSTRUCT can bring the capability of multilingual understanding and generation.
|
| 209 |
+
|
| 210 |
+
# 5.2 Discussion
|
| 211 |
+
|
| 212 |
+
Ablation Study. To verify the efficacy of each component, we conduct the ablation study step by step on HumanEval and MBPP. In Table 3, we observe that removing the multi-task objectives (keeping only the UoT objective, Equation 6) leads to a $-1.6$ performance drop on HumanEval and a $-1.3$ drop on MBPP. Removing UniCode further degrades the performance. The results support the effectiveness of each component of UNICODER.
|
| 213 |
+
|
| 214 |
+
Effect on Universal Code. To discuss the effect of the different formats of the universal code, we use different definitions of universal code for UNICODER. Specifically, we randomly sample 5K samples to generate the instruction dataset with different formats of UniCode.
|
| 215 |
+
|
| 216 |
+
- UniCode 1: It describes the naming conventions, variable declaration, operators, conditional statements, loops, and function structure that pseudocode should have.
|
| 217 |
+
- UniCode 2: It separates the first set of standards and provides code examples for each, instead of applying them all together in the examples.
|
| 218 |
+
- UniCode 3: It describes the code structure, variable rules, control structures, functions, comments, and assignment rules that pseudocode should have.
|
| 219 |
+
- UniCode 4: It is similar to the first standard but specifies type-free names for variables.
|
| 220 |
+
- UniCode 5: It provides an abstract, high-level architectural description, without setting standards for the code itself.
|
| 221 |
+
- UniCode 6: It uses the LaTeX algorithm and algorithmic packages for the description.
|
| 222 |
+
|
| 223 |
+
<table><tr><td>Models</td><td>Base Model</td><td>Params</td><td>Instruction Data</td><td>Model Weight</td><td>HumanEval</td><td>MBPP</td></tr><tr><td colspan="7">Proprietary Models</td></tr><tr><td>GPT-3.5</td><td>-</td><td>-</td><td>-</td><td>-</td><td>72.6</td><td>81.6</td></tr><tr><td>GPT-4</td><td>-</td><td>-</td><td>-</td><td>-</td><td>85.4</td><td>83.0</td></tr><tr><td colspan="7">Open-source Models</td></tr><tr><td>StarCoder (Li et al., 2023b)</td><td>-</td><td>15B</td><td>X</td><td>✓</td><td>33.6</td><td>43.3</td></tr><tr><td>WizardCoder (Luo et al., 2023)</td><td>StarCoder</td><td>15B</td><td>✓</td><td>✓</td><td>57.3</td><td>51.8</td></tr><tr><td>OctoCoder (Muennighoff et al., 2023)</td><td>StarCoder</td><td>15B</td><td>✓</td><td>✓</td><td>46.2</td><td>43.5</td></tr><tr><td>WaveCoder-SC (Muennighoff et al., 2023)</td><td>StarCoder</td><td>15B</td><td>✓</td><td>✓</td><td>50.5</td><td>51.0</td></tr><tr><td>Code-Llama (Rozière et al., 2023)</td><td>-</td><td>7B</td><td>X</td><td>✓</td><td>33.5</td><td>41.4</td></tr><tr><td>Code-Llama-Instruct (Rozière et al., 2023)</td><td>Code Llama</td><td>7B</td><td>✓</td><td>✓</td><td>34.8</td><td>44.4</td></tr><tr><td>WaveCoder-CL (Yu et al., 2023)</td><td>Code Llama</td><td>7B</td><td>✓</td><td>✓</td><td>48.1</td><td>47.2</td></tr><tr><td>Magicoder-CL (Wei et al., 2023)</td><td>Code Llama</td><td>7B</td><td>✓</td><td>✓</td><td>60.4</td><td>64.2</td></tr><tr><td>UNICODER (our method)</td><td>Code Llama</td><td>7B</td><td>✓</td><td>✓</td><td>65.4</td><td>65.2</td></tr><tr><td>DeepseekCoder (Guo et al., 2024a)</td><td>-</td><td>6.7B</td><td>X</td><td>✓</td><td>49.4</td><td>60.6</td></tr><tr><td>WaveCoder-DS (Yu et al., 2023)</td><td>Deepseek-Coder</td><td>6.7B</td><td>✓</td><td>✓</td><td>64.0</td><td>62.8</td></tr><tr><td>UNICODER (our method)</td><td>Deepseek-Coder</td><td>6.7B</td><td>✓</td><td>✓</td><td>70.6</td><td>64.3</td></tr></table>
|
| 224 |
+
|
| 225 |
+
Table 1: Evaluation results of Pass@1 on the HumanEval and MBPP benchmarks. We use self-reported scores whenever available. All methods use greedy decoding, and we use the scores reported in previous work.
|
| 226 |
+
|
| 227 |
+
<table><tr><td rowspan="2">Model</td><td rowspan="2">Params</td><td colspan="7">Programming Language</td></tr><tr><td>Java</td><td>Javascript</td><td>C++</td><td>PHP</td><td>Swift</td><td>Rust</td><td>Avg.</td></tr><tr><td colspan="9">Proprietary models</td></tr><tr><td>GPT-3.5</td><td>-</td><td>69.2</td><td>67.1</td><td>63.4</td><td>60.9</td><td>-</td><td>-</td><td>-</td></tr><tr><td>GPT-4</td><td>-</td><td>81.6</td><td>78.0</td><td>76.4</td><td>77.2</td><td>-</td><td>-</td><td>-</td></tr><tr><td colspan="9">Open-source models</td></tr><tr><td>CodeLlama (Rozière et al., 2023)</td><td>34B</td><td>40.2</td><td>41.7</td><td>41.4</td><td>40.4</td><td>35.3</td><td>38.7</td><td>39.6</td></tr><tr><td>CodeLlama-Python (Rozière et al., 2023)</td><td>34B</td><td>39.5</td><td>44.7</td><td>39.1</td><td>39.8</td><td>34.3</td><td>39.7</td><td>39.5</td></tr><tr><td>CodeLlama-Instruct (Rozière et al., 2023)</td><td>34B</td><td>41.5</td><td>45.9</td><td>41.5</td><td>37.0</td><td>37.6</td><td>39.3</td><td>40.5</td></tr><tr><td>WizardCoder-CL (Luo et al., 2023)</td><td>34B</td><td>44.9</td><td>55.3</td><td>47.2</td><td>47.2</td><td>44.3</td><td>46.2</td><td>47.5</td></tr><tr><td>StarCoderBase (Li et al., 2023b)</td><td>15B</td><td>28.5</td><td>31.7</td><td>30.6</td><td>26.8</td><td>16.7</td><td>24.5</td><td>26.5</td></tr><tr><td>StarCoder (Li et al., 2023b)</td><td>15B</td><td>30.2</td><td>30.8</td><td>31.6</td><td>26.1</td><td>22.7</td><td>21.8</td><td>27.2</td></tr><tr><td>WizardCoder-SC (Luo et al., 2023)</td><td>15B</td><td>35.8</td><td>41.9</td><td>39.0</td><td>39.3</td><td>33.7</td><td>27.1</td><td>36.1</td></tr><tr><td>CodeLlama (Rozière et al., 2023)</td><td>7B</td><td>29.3</td><td>31.7</td><td>27.0</td><td>25.1</td><td>25.6</td><td>25.5</td><td>27.4</td></tr><tr><td>CodeLlama-Python (Rozière et al., 2023)</td><td>7B</td><td>42.4</td><td>51.9</td><td>42.3</td><td>46.5</td><td>29.4</td><td>33.6</td><td>29.7</td></tr><tr><td>UNICODER (Our method)</td><td>7B</td><td>46.4</td><td>50.2</td><td>39.2</td><td>40.4</td><td>41.2</td><td>32.4</td><td>41.6</td></tr></table>
|
| 228 |
+
|
| 229 |
+
Table 2: Evaluation results of Pass@1 (%) performance on the MultiPL-E benchmark. The baseline results are partly from the previous work (Wei et al., 2023).
|
| 230 |
+
|
| 231 |
+
<table><tr><td>ID</td><td>Methods</td><td>HumanEval</td><td>MBPP</td></tr><tr><td>①</td><td>UNICODER</td><td>70.6</td><td>64.3</td></tr><tr><td>②</td><td>① - Multi-tasks Objective</td><td>67.4</td><td>60.2</td></tr><tr><td>③</td><td>② - Universal Code</td><td>66.8</td><td>59.8</td></tr></table>
|
| 232 |
+
|
| 233 |
+
In Table 4, we observe that UniCode $1 \sim$ UniCode 4 achieve better performance. Compared to the universal code formats UniCode 5 and UniCode 6, UniCode $1 \sim$ UniCode 4 have a clear definition and common structure, which brings more support for code generation. Notably, experiment ⑦ performs the best by combining the training data of UniCode $1 \sim 4$. The experimental results show that the concrete
|
| 234 |
+
|
| 235 |
+
Table 3: Ablation study of our proposed method on HumanEval and MBPP. UNICODER is fine-tuned on the UNICODER-INSTRUCT with the multi-task objectives.
|
| 236 |
+
|
| 237 |
+
<table><tr><td>ID</td><td>Methods</td><td>HumanEval</td><td>MBPP</td></tr><tr><td>①</td><td>UniCode 1</td><td>53.2</td><td>51.5</td></tr><tr><td>②</td><td>UniCode 2</td><td>52.8</td><td>51.2</td></tr><tr><td>③</td><td>UniCode 3</td><td>53.5</td><td>50.5</td></tr><tr><td>④</td><td>UniCode 4</td><td>53.8</td><td>49.5</td></tr><tr><td>⑤</td><td>UniCode 5</td><td>49.5</td><td>50.2</td></tr><tr><td>⑥</td><td>UniCode 6</td><td>48.2</td><td>48.4</td></tr><tr><td>⑦</td><td>UniCode 1~4</td><td>55.5</td><td>52.2</td></tr></table>
|
| 238 |
+
|
| 239 |
+
Table 4: Evaluation results of our method with different formats of the universal code.
|
| 240 |
+
|
| 241 |
+
definition of UniCode and its combination can effectively improve the model performance.
|
| 242 |
+
|
| 243 |
+
# 5.3 Code-UniCode-Code
|
| 244 |
+
|
| 245 |
+
To compare the capabilities of different code LLMs, we create a test set by prompting the code
|
| 246 |
+
|
| 247 |
+
<table><tr><td>Method</td><td>Params</td><td>Python</td><td>Other Languages</td></tr><tr><td>Code-Llama-Instruct</td><td>7B</td><td>33.3</td><td>26.2</td></tr><tr><td>Code-Llama-Alpaca</td><td>7B</td><td>44.2</td><td>29.1</td></tr><tr><td>UNICODER</td><td>7B</td><td>45.2</td><td>31.3</td></tr></table>
|
| 248 |
+
|
| 249 |
+
Table 5: Pass@1 scores of our method UNICODER and two Code-Llama baselines for Code-UniCode-Code.
|
| 250 |
+
|
| 251 |
+
LLM to generate UniCode and translate it into the executable code. We check the correctness of each translated code with the test cases, denoted as Pass@1 of the universal code. Code-Llama-7B is fine-tuned on the Code Alpaca dataset and our dataset UNICODER-INSTRUCT separately. The results of fine-tuned Code-Llama models on UNICODER-BENCH are shown in Table 5. Our method UNICODER is more accurate in passing the test cases than the Code-Llama baselines, demonstrating its excellent code understanding and generation abilities.
|
| 252 |
+
|
| 253 |
+
# 6 Related Work
|
| 254 |
+
|
| 255 |
+
Code Understanding and Generation. Code understanding and generation are key tasks that substantially facilitate the project development process, including code generation (Chen et al., 2021; Austin et al., 2021; Zhang et al., 2023), code translation (Szafraniec et al., 2023), automated testing (Deng et al., 2023), bug fixing (Muennighoff et al., 2023), code refinement (Liu et al., 2023c), code question answering (Liu and Wan, 2021), and code summarization (Ahmad et al., 2020). Chai et al. (2023) have undertaken extensive endeavors to bridge natural language and programming languages. Mishra et al. (2023) show that prompting with pseudocode, a less ambiguous prompt style, improves the performance of NLP tasks. Oda et al. (2015) use traditional statistical machine translation to convert code to pseudocode. Jiang et al. (2022) also show that designers and programmers can speed up the prototyping process and ground communication between collaborators via prompt-based prototyping. To verify that the generated code is correct, there are several code synthesis evaluation frameworks, including EvalPlus (Liu et al., 2023b), HumanEval (Chen et al., 2021), HumanEval-X (Zheng et al., 2023), and MBPP (Austin et al., 2021).
|
| 256 |
+
|
| 257 |
+
Large Language Models for Code. Since CodeBERT (Feng et al., 2020) first connected code tasks with pre-trained models, large language models for code have developed rapidly, demonstrating
|
| 258 |
+
|
| 259 |
+
extraordinary performance on almost all code tasks rather than a single task. Prominent large models include Codex (Chen et al., 2021), AlphaCode (Li et al., 2022), SantaCoder (Allal et al., 2023), StarCoder (Li et al., 2023b), WizardCoder (Luo et al., 2023), InCoder (Fried et al., 2022), CodeT5 (Wang et al., 2021), CodeGeeX (Zheng et al., 2023), Code Llama (Rozière et al., 2023), and CodeQwen (Bai et al., 2023). To improve the performance of code generation, researchers have used optimized prompts (Liu et al., 2023a; Reynolds and McDonell, 2021; Zan et al., 2023; Beurer-Kellner et al., 2023), test cases (Chen et al., 2023), and collaborative roles (Dong et al., 2023). There are also related studies on using large language models for other code tasks, such as dynamic planning (Dagan et al., 2023), compiler optimization (Cummins et al., 2023), multilingual prompts (Di et al., 2023), and program of thoughts (PoT) (Chen et al., 2022).
|
| 260 |
+
|
| 261 |
+
Chain-of-Thought Prompting. To unleash the potential of LLMs in addressing complex reasoning tasks, chain-of-thought (CoT) prompting (Wei et al., 2022b; Kojima et al., 2022) extends in-context learning with step-by-step reasoning processes, which handles complex reasoning tasks in the field of the code and mathematics by encouraging them to engage in step-by-step reasoning processes. Following this line of research, X-of-Thought (XoT) reasoning (CoT and its structural variants further) (Chai et al., 2024; Yao et al., 2023; Li et al., 2023a; Lei et al., 2023; Guo et al., 2023; Ji et al., 2024; Guo et al., 2024b) further expands the capabilities and applications of LLMs in complex reasoning and planning scenarios.
|
| 262 |
+
|
| 263 |
+
Intermediate Representation. In the field of natural language processing, there exist many works using intermediate representations (Gan et al., 2021; Yang et al., 2022, 2024, 2019, 2020b,a; Liang et al., 2024), such as in text generation and translation. The universal code is used as the intermediate representation, which typically omits details that are essential for the machine implementation of the algorithm. We follow a coarse-to-fine pattern for code generation and translation, where the universal code first summarizes the algorithm process and then the programming language gives the accurate solution. UniCode provides explicit help for code generation, similar to chain-of-thought prompting in LLMs.
|
| 264 |
+
|
| 265 |
+
# 7 Conclusion
|
| 266 |
+
|
| 267 |
+
In this work, we put forth UNICODER, a state-of-the-art framework for both code translation and code generation. Using the universal code UniCode as the intermediate representation, we effectively bridge different programming languages and facilitate code tasks. In addition, we collect UNICODER-INSTRUCT, a dataset of 140K instruction instances drawn from existing instruction datasets and raw code snippets. After being fine-tuned on UNICODER-INSTRUCT with multi-task learning objectives, our model generates UniCode and translates it into the final answer (executable code). The evaluation results on code translation and generation tasks demonstrate that our method significantly improves generalization ability, showing the efficacy and superiority of UNICODER.
|
| 268 |
+
|
| 269 |
+
# Limitations
|
| 270 |
+
|
| 271 |
+
We acknowledge the following limitations of this study: (1) The evaluation focuses on benchmark datasets (HumanEval, MBPP, and MultiPL-E), so the model's effectiveness in real-world programming scenarios or industry applications is not fully explored. (2) Our method is developed and evaluated primarily on programming language benchmarks. Its effectiveness in other domains or on non-programming tasks is not assessed, which limits the generalizability of our findings.
|
| 272 |
+
|
| 273 |
+
# Acknowledgments
|
| 274 |
+
|
| 275 |
+
This work was supported in part by the National Natural Science Foundation of China (Grant Nos. U1636211, U2333205, 61672081, 62302025, 62276017), the State Grid Co., Ltd. Technology R&D Project (Project Name: Research on Key Technologies of Data Scenario-based Security Governance and Emergency Blocking in Power Monitoring System, Project No.: 5108-202303439A-3-2-ZN), the 2022 CCF-NSFOCUS Kun-Peng Scientific Research Fund, the Opening Project of Shanghai Trusted Industrial Control Platform, and the State Key Laboratory of Complex & Critical Software Environment (Grant No. SKLSDE-2021ZX-18).
|
| 276 |
+
|
| 277 |
+
# References
|
| 278 |
+
|
| 279 |
+
Wasi Uddin Ahmad, Saikat Chakraborty, Baishakhi Ray, and Kai-Wei Chang. 2020. A transformer-based approach for source code summarization. In
|
| 280 |
+
|
| 281 |
+
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 4998-5007. Association for Computational Linguistics.
|
| 282 |
+
Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, et al. 2023. SantaCoder: Don't reach for the stars! arXiv preprint arXiv:2301.03988.
|
| 283 |
+
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. 2021. Program synthesis with large language models. arXiv preprint arXiv:2108.07732.
|
| 284 |
+
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. 2023. Qwen technical report. arXiv preprint arXiv:2309.16609, abs/2309.16609.
|
| 285 |
+
Luca Beurer-Kellner, Marc Fischer, and Martin T. Vechev. 2023. Prompting is programming: A query language for large language models. Proc. ACM Program. Lang., 7(PLDI):1946-1969.
|
| 286 |
+
Federico Cassano, John Gouwar, Daniel Nguyen, Sydney Nguyen, Luna Phipps-Costin, Donald Pinckney, Ming-Ho Yee, Yangtian Zi, Carolyn Jane Anderson, Molly Q Feldman, et al. 2022. MultiPL-E: A scalable and extensible approach to benchmarking neural code generation. arXiv preprint arXiv:2208.08227.
|
| 287 |
+
Linzheng Chai, Jian Yang, Tao Sun, Hongcheng Guo, Jiaheng Liu, Bing Wang, Xinnian Liang, Jiaqi Bai, Tongliang Li, Qiyao Peng, and Zhoujun Li. 2024. xcot: Cross-lingual instruction tuning for cross-lingual chain-of-thought reasoning. arXiv preprint arXiv:2401.07037, abs/2401.07037.
|
| 288 |
+
Yekun Chai, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, and Hua Wu. 2023. Ernie-code: Beyond english-centric cross-lingual pretraining for programming languages. In Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pages 10628-10650. Association for Computational Linguistics.
|
| 289 |
+
Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen. 2023. Codet: Code generation with generated tests. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.
|
| 290 |
+
|
| 291 |
+
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidi Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, abs/2107.03374.
|
| 292 |
+
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. 2022. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588, abs/2211.12588.
|
| 293 |
+
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
|
| 294 |
+
Chris Cummins, Volker Seeker, Dejan Grubisic, Mostafa Elhoushi, Youwei Liang, Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Kim M. Hazelwood, Gabriel Synnaeve, and Hugh Leather. 2023. Large language models for compiler optimization. arXiv preprint arXiv:2309.07062, abs/2309.07062.
|
| 295 |
+
Gautier Dagan, Frank Keller, and Alex Lascarides. 2023. Dynamic planning with a LLM. arXiv preprint arXiv:2308.06391, abs/2308.06391.
|
| 296 |
+
Yinlin Deng, Chunqiu Steven Xia, Chenyuan Yang, Shizhuo Dylan Zhang, Shujing Yang, and Lingming Zhang. 2023. Large language models are edge-case fuzzers: Testing deep learning libraries via fuzzgpt. arXiv preprint arXiv:2304.02014, abs/2304.02014.
|
| 297 |
+
Peng Di, Jianguo Li, Hang Yu, Wei Jiang, Wenting Cai, Yang Cao, Chaoyu Chen, Dajun Chen, Hongwei Chen, Liang Chen, Gang Fan, Jie Gong, Zi Gong, Wen Hu, Tingting Guo, Zhichao Lei, Ting Li, Zheng Li, Ming Liang, Cong Liao, Bingchang Liu, Jiachen Liu, Zhiwei Liu, Shaojun Lu, Min Shen, Guangpei Wang, Huan Wang, Zhi Wang, Zhaogui Xu, Jiawei Yang, Qing Ye, Gehao Zhang, Yu Zhang, Zelin Zhao, Xunjin Zheng, Hailian Zhou, Lifu Zhu, and Xianying Zhu. 2023. Codefuse-13b: A pretrained multilingual code large language model. arXiv preprint arXiv:2310.06266, abs/2310.06266.
|
| 298 |
+
|
| 299 |
+
Yihong Dong, Xue Jiang, Zhi Jin, and Ge Li. 2023. Self-collaboration code generation via chatgpt. arXiv preprint arXiv:2304.07590, abs/2304.07590.
|
| 300 |
+
Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, and Ming Zhou. 2020. Codebert: A pre-trained model for programming and natural languages. In Findings of the Association for Computational Linguistics: EMNLP 2020, Online Event, 16-20 November 2020, volume EMNLP 2020 of Findings of ACL, pages 1536-1547. Association for Computational Linguistics.
|
| 301 |
+
Daniel Fried, Armen Aghajanyan, Jessy Lin, Sida I. Wang, Eric Wallace, Freda Shi, Ruiqi Zhong, Wen tau Yih, Luke Zettlemoyer, and Mike Lewis. 2022. Incoder: A generative model for code infilling and synthesis. arXiv preprint arXiv:2204.05999, abs/2204.05999.
|
| 302 |
+
Shiwei Gan, Yafeng Yin, Zhiwei Jiang, Lei Xie, and Sanglu Lu. 2021. Skeleton-aware neural sign language translation. In MM '21: ACM Multimedia Conference, Virtual Event, China, October 20 - 24, 2021, pages 4353-4361. ACM.
|
| 303 |
+
Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen, Xiao Bi, Y Wu, YK Li, et al. 2024a. Deepseek-coder: When the large language model meets programming-the rise of code intelligence. arXiv preprint arXiv:2401.14196.
|
| 304 |
+
Hongcheng Guo, Jian Yang, Jiaheng Liu, Liquan Yang, Linzheng Chai, Jiaqi Bai, Junran Peng, Xiaorong Hu, Chao Chen, Dongfeng Zhang, Xu Shi, Tieqiao Zheng, Liangfan Zheng, Bo Zhang, Ke Xu, and Zhoujun Li. 2023. OWL: A large language model for IT operations. CoRR, abs/2309.09298.
|
| 305 |
+
Hongcheng Guo, Wei Zhang, Anjie Le, Jian Yang, Jiaheng Liu, Zhoujun Li, Tieqiao Zheng, Shi Xu, Runqiang Zang, Liangfan Zheng, et al. 2024b. Lemur: Log parsing with entropy sampling and chain-of-thought merging. arXiv preprint arXiv:2402.18205.
|
| 306 |
+
Hangyuan Ji, Jian Yang, Linzheng Chai, Chaoren Wei, Liquan Yang, Yunlong Duan, Yunli Wang, Tianzhen Sun, Hongcheng Guo, Tongliang Li, et al. 2024. Sevenllm: Benchmarking, eliciting, and enhancing abilities of large language models in cyber threat intelligence. arXiv preprint arXiv:2405.03446.
|
| 307 |
+
Ellen Jiang, Kristen Olson, Edwin Toh, Alejandra Molina, Aaron Donsbach, Michael Terry, and Carrie J. Cai. 2022. Promptmaker: Prompt-based prototyping with large language models. In CHI '22: CHI Conference on Human Factors in Computing Systems, New Orleans, LA, USA, 29 April 2022 - 5 May 2022, Extended Abstracts, pages 35:1-35:8. ACM.
|
| 308 |
+
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In 3rd International Conference on Learning Representations,
|
| 309 |
+
|
| 310 |
+
ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
|
| 311 |
+
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.
|
| 312 |
+
Yuhang Lai, Chengxi Li, Yiming Wang, Tianyi Zhang, Ruiqi Zhong, Luke Zettlemoyer, Wen-Tau Yih, Daniel Fried, Sida I. Wang, and Tao Yu. 2023. DS1000: A natural and reliable benchmark for data science code generation. In International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 18319-18345. PMLR.
|
| 313 |
+
Bin Lei, Pei-Hung Lin, Chunhua Liao, and Caiwen Ding. 2023. Boosting logical reasoning in large language models through a new framework: The graph of thought. arXiv preprint arXiv:2308.08614, abs/2308.08614.
|
| 314 |
+
Jia Li, Ge Li, Yongmin Li, and Zhi Jin. 2023a. Structured chain-of-thought prompting for code generation. arXiv preprint arXiv:2305.06599.
|
| 315 |
+
Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue Zhuo, Thomas Wang, Olivier Dehaene, Mishig Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh Shliazhko, Nicolas Gontier, Nicholas Meade, Armel Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi, Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov, Zhiruo Wang, Rudra Murthy V, Jason Stillerman, Siva Sankalp Patel, Dmitry Abulkhanov, Marco Zocca, Manan Dey, Zhihan Zhang, Nour MoustafaFahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee, Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra, Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Munoz Ferrandis, Sean Hughes, Thomas Wolf, Arjun Guha, Leandro von Werra, and Harm de Vries. 2023b. StarCoder: May the source be with you! arXiv preprint arXiv:2305.06161, abs/2305.06161.
|
| 316 |
+
Yujia Li, David H. Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, PoSen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando
|
| 317 |
+
|
| 318 |
+
de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. 2022. Competition-level code generation with AlphaCode. arXiv preprint arXiv:2203.07814, abs/2203.07814.
|
| 319 |
+
Yaobo Liang, Quanzhi Zhu, Junhe Zhao, and Nan Duan. 2024. Machine-created universal language for cross-lingual transfer. In Thirty-Eighth AAAI Conference on Artificial Intelligence, AAAI 2024, Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence, IAAI 2024, Fourteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2024, February 20-27, 2024, Vancouver, Canada, pages 18617-18625. AAAI Press.
|
| 320 |
+
Chao Liu, Xuanlin Bao, Hongyu Zhang, Neng Zhang, Haibo Hu, Xiaohong Zhang, and Meng Yan. 2023a. Improving chatgpt prompt for code generation. arXiv preprint arXiv:2305.08360, abs/2305.08360.
|
| 321 |
+
Chenxiao Liu and Xiaojun Wan. 2021. CodeQA: A question answering dataset for source code comprehension. In Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 16-20 November, 2021, pages 2618-2632. Association for Computational Linguistics.
|
| 322 |
+
Jiawei Liu, Chunqiu Steven Xia, Yuyao Wang, and Lingming Zhang. 2023b. Is your code generated by chatgpt really correct? rigorous evaluation of large language models for code generation. arXiv preprint arXiv:2305.01210, abs/2305.01210.
|
| 323 |
+
Yue Liu, Thanh Le-Cong, Ratnadira Widyasari, Chakkrit Tantithamthavorn, Li Li, Xuan-Bach Dinh Le, and David Lo. 2023c. Refining ChatGPT-generated code: Characterizing and mitigating code quality issues. arXiv preprint arXiv:2307.12596, abs/2307.12596.
|
| 324 |
+
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023. WizardCoder: Empowering code large language models with evol-instruct. arXiv preprint arXiv:2306.08568.
|
| 325 |
+
Mayank Mishra, Prince Kumar, Riyadh Bhat, Rudra Murthy V, Danish Contractor, and Srikanth Tamilselvam. 2023. Prompting with pseudo-code instructions. arXiv preprint arXiv:2305.11790, abs/2305.11790.
|
| 326 |
+
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui, Terry Yue Zhuo, Swayam Singh, Xiangru Tang, Leandro von Werra, and Shayne Longpre. 2023. OctoPack: Instruction tuning code large language models. arXiv preprint arXiv:2308.07124, abs/2308.07124.
|
| 327 |
+
Yusuke Oda, Hiroyuki Fudaba, Graham Neubig, Hideaki Hata, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Learning to generate
|
| 328 |
+
|
| 329 |
+
pseudo-code from source code using statistical machine translation (T). In 30th IEEE/ACM International Conference on Automated Software Engineering, ASE 2015, Lincoln, NE, USA, November 9-13, 2015, pages 574-584. IEEE Computer Society.
|
| 330 |
+
OpenAI. 2023. Gpt-4 technical report. arXiv preprint arXiv:2303.08774.
|
| 331 |
+
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.
|
| 332 |
+
Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018. Improving language understanding by generative pre-training. OpenAI blog.
|
| 333 |
+
Laria Reynolds and Kyle McDonell. 2021. Prompt programming for large language models: Beyond the few-shot paradigm. In CHI '21: CHI Conference on Human Factors in Computing Systems, Virtual Event / Yokohama, Japan, May 8-13, 2021, Extended Abstracts, pages 314:1-314:7. ACM.
|
| 334 |
+
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaqing Ellen Tan, Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. 2023. Code Llama: Open foundation models for code. arXiv preprint arXiv:2308.12950.
|
| 335 |
+
Marc Szafraniec, Baptiste Rozière, Hugh Leather, Patrick Labatut, François Charton, and Gabriel Synnaeve. 2023. Code translation with compiler representations. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net.
|
| 336 |
+
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
|
| 337 |
+
Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH Hoi. 2021. CodeT5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. arXiv preprint arXiv:2109.00859.
|
| 338 |
+
Jason Wei, Maarten Bosma, Vincent Y. Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2022a. Finetuned language models are zero-shot learners. In The Tenth
|
| 339 |
+
|
| 340 |
+
International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
|
| 341 |
+
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022b. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.
|
| 342 |
+
Yuxiang Wei, Zhe Wang, Jiawei Liu, Yifeng Ding, and Lingming Zhang. 2023. Magicoder: Source code is all you need. arXiv preprint arXiv:2312.02120, abs/2312.02120.
|
| 343 |
+
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin Jiang. 2023. Wizardlm: Empowering large language models to follow complex instructions. arXiv preprint arXiv:2304.12244.
|
| 344 |
+
Weixiang Yan, Yuchen Tian, Yunzhe Li, Qian Chen, and Wen Wang. 2023. CodeTransOcean: A comprehensive multilingual benchmark for code translation. In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 5067-5089. Association for Computational Linguistics.
|
| 345 |
+
Jian Yang, Hongcheng Guo, Yuwei Yin, Jiaqi Bai, Bing Wang, Jiaheng Liu, Xinnian Liang, Linzheng Chai, Liquan Yang, and Zhoujun Li. 2024. m3p: Towards multimodal multilingual translation with multimodal prompt. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, LREC/COLING 2024, 20-25 May, 2024, Torino, Italy, pages 10858-10871. ELRA and ICCL.
|
| 346 |
+
Jian Yang, Shuming Ma, Dongdong Zhang, Zhoujun Li, and Ming Zhou. 2020a. Improving neural machine translation with soft template prediction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, ACL 2020, Online, July 5-10, 2020, pages 5979-5989. Association for Computational Linguistics.
|
| 347 |
+
Jian Yang, Shuming Ma, Dongdong Zhang, Shuangzhi Wu, Zhoujun Li, and Ming Zhou. 2020b. Alternating language modeling for cross-lingual pre-training. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 9386-9393. AAAI Press.
|
| 348 |
+
Jian Yang, Yuwei Yin, Shuming Ma, Dongdong Zhang, Shuangzhi Wu, Hongcheng Guo, Zhoujun Li, and Furu Wei. 2022. UM4: unified multilingual multiple teacher-student model for zero-resource neural machine translation. In Proceedings of the Thirty-First
|
| 349 |
+
|
| 350 |
+
International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, pages 4454-4460. ijcai.org.
|
| 351 |
+
Ze Yang, Wei Wu, Jian Yang, Can Xu, and Zhoujun Li. 2019. Low-resource response generation with template prior. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3-7, 2019, pages 1886-1897. Association for Computational Linguistics.
|
| 352 |
+
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, abs/2305.10601.
|
| 353 |
+
Zhaojian Yu, Xin Zhang, Ning Shang, Yangyu Huang, Can Xu, Yishujie Zhao, Wenxiang Hu, and Qiufeng Yin. 2023. WaveCoder: Widespread and versatile enhanced instruction tuning with refined data generation. arXiv preprint arXiv:2312.14187, abs/2312.14187.
|
| 354 |
+
Daoguang Zan, Ailun Yu, Bo Shen, Jiaxin Zhang, Taihong Chen, Bing Geng, Bei Chen, Jichuan Ji, Yafen Yao, Yongji Wang, and Qianxiang Wang. 2023. Can programming languages boost each other via instruction tuning? arXiv preprint arXiv:2308.16824, abs/2308.16824.
|
| 355 |
+
Fengji Zhang, Bei Chen, Yue Zhang, Jin Liu, Daoguang Zan, Yi Mao, Jian-Guang Lou, and Weizhu Chen. 2023. RepoCoder: Repository-level code completion through iterative retrieval and generation. arXiv preprint arXiv:2303.12570, abs/2303.12570.
|
| 356 |
+
Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen, Andi Wang, Yang Li, Teng Su, Zhilin Yang, and Jie Tang. 2023. Codegeex: A pre-trained model for code generation with multilingual evaluations on humaneval-x. arXiv preprint arXiv:2303.17568, abs/2303.17568.
|
2024/UniCoder_ Scaling Code Large Language Model via Universal Code/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:e51c7f92cd1cfd0a0e49dbdf0ea5e554aa741ccbed4a2d3720edf03fff7e9988
|
| 3 |
+
size 493027
|
2024/UniCoder_ Scaling Code Large Language Model via Universal Code/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Unified Hallucination Detection for Multimodal Large Language Models/ce735746-6b05-466e-b5fd-fd1af150fdd1_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Unified Hallucination Detection for Multimodal Large Language Models/ce735746-6b05-466e-b5fd-fd1af150fdd1_model.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Unified Hallucination Detection for Multimodal Large Language Models/ce735746-6b05-466e-b5fd-fd1af150fdd1_origin.pdf
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:99e85c068537b167d153e160422f0be947d74a0b48c931e5c7608104ad78723a
|
| 3 |
+
size 6379972
|
2024/Unified Hallucination Detection for Multimodal Large Language Models/full.md
ADDED
|
@@ -0,0 +1,744 @@
|
| 1 |
+
# Unified Hallucination Detection for Multimodal Large Language Models
|
| 2 |
+
|
| 3 |
+
Xiang Chen\*, Chenxi Wang\*, Yida Xue\*, Ningyu Zhang\*, Xiaoyan Yang\*, Qiang Li\*, Yue Shen\*, Lei Liang\*, Jinjie Gu\*, Huajun Chen\*
|
| 4 |
+
|
| 5 |
+
\*College of Computer Science and Technology, Zhejiang University
|
| 6 |
+
|
| 7 |
+
$\spadesuit$ School of Software Technology, Zhejiang University $\diamondsuit$ Ant Group
|
| 8 |
+
|
| 9 |
+
$^{\heartsuit}$ Zhejiang University-Ant Group Joint Laboratory of Knowledge Graph
{xiang_chen, zhangningyu}@zju.edu.cn
|
| 10 |
+
|
| 11 |
+
https://www.zjukg.org/project/EasyDetect/
|
| 12 |
+
|
| 13 |
+
# Abstract
|
| 14 |
+
|
| 15 |
+
Despite significant strides in multimodal tasks, Multimodal Large Language Models (MLLMs) are plagued by the critical issue of hallucination. The reliable detection of such hallucinations in MLLMs has, therefore, become a vital aspect of model evaluation and the safeguarding of practical application deployment. Prior research in this domain has been constrained by a narrow focus on singular tasks, an inadequate range of hallucination categories addressed, and a lack of detailed granularity. In response to these challenges, our work expands the investigative horizons of hallucination detection. We present a novel meta-evaluation benchmark, MHaluBench, meticulously crafted to facilitate the evaluation of advancements in hallucination detection methods. Additionally, we unveil a novel unified multimodal hallucination detection framework, UNIHD, which leverages a suite of auxiliary tools to validate the occurrence of hallucinations robustly. We demonstrate the effectiveness of UNIHD through meticulous evaluation and comprehensive analysis. We also provide strategic insights on the application of specific tools for addressing various categories of hallucinations<sup>1</sup>.
|
| 16 |
+
|
| 17 |
+
# 1 Introduction
|
| 18 |
+
|
| 19 |
+
The recent emergence of MLLMs (Ho et al., 2020; OpenAI, 2023; Durante et al., 2024) that more closely mirror human cognition and learning has unleashed unprecedented possibilities for the future of artificial general intelligence (AGI). Despite MLLMs' impressive abilities, they are susceptible to generating seemingly credible content that contradicts input data or established world knowledge, a phenomenon termed "hallucination" (Liu et al., 2024; Wang et al., 2023a; Huang et al., 2023c;
|
| 20 |
+
|
| 21 |
+

|
| 22 |
+
Figure 1: Unified multimodal hallucination detection aims to identify and detect modality-conflicting hallucinations at various levels such as object, attribute, and scene-text, as well as fact-conflicting hallucinations in both image-to-text and text-to-image generation. Our benchmark emphasizes fine-grained detection, with "S1" representing the segment and "S1.1" and "S1.2" denoting its corresponding claims.
|
| 23 |
+
|
| 24 |
+
Tonmoy et al., 2024; Zhang et al., 2023a). These hallucinations hinder the practical deployment of MLLMs and contribute to the dissemination of misinformation. Consequently, detectors that could detect multimodal hallucinations (Yang et al., 2023) within responses from MLLMs are urgently needed to alert users to potential risks and drive the development of more reliable MLLMs.
|
| 25 |
+
|
| 26 |
+
Although several works have been conducted to detect hallucinations from MLLMs (Zhou et al., 2023; Zhai et al., 2023; Li et al., 2023b; Wang et al., 2023c) or alleviate hallucinations (Xing et al., 2024; Wu et al., 2024), these efforts operate in isolation and have certain limitations when compared with the aspects illustrated in Figure 1: (1) Task Singularity: Current research has primarily concentrated on specific tasks, such as image captioning, while neglecting that text-to-image generation, an important component of AGI, also suffers from
|
| 27 |
+
|
| 28 |
+
hallucinations induced by MLLMs. (2) Limited Hallucination Categories: Prior studies have focused on identifying hallucinations at the object level, yet they fail to consider the prevalence of scene-text or factual inconsistencies that also frequently occur in MLLMs. (3) Incomplete Granularity: It would be more valuable to assess hallucinations at a fine-grained level, examining individual claims within a response, rather than evaluating the entire response holistically. Since these constraints hinder rapid progress in practical hallucination detection, a natural question arises: Can we develop a unified perspective for detecting hallucinations from MLLMs?
|
| 29 |
+
|
| 30 |
+
To further investigate this problem, we broaden the concept of multimodal hallucination within MLLMs into a holistic framework, integrating image-to-text generation, such as Image Captioning (IC) and Visual Question Answering (VQA), as well as Text-to-Image synthesis (T2I), to align with MLLMs' capabilities of performing varied multimodal tasks. We are committed to exploring a broad spectrum of hallucinatory categories and the intricate nuances of claim-level hallucination through a lens that integrates both modality-conflicting and fact-conflicting hallucinations. Based on the outlined perspectives, we have developed the MultiModal Hallucination Detection Benchmark (MHaluBench) to assess the progress of unified multimodal hallucination detectors for MLLMs, embodying the data framework depicted in Figure 1.
|
| 31 |
+
|
| 32 |
+
At its core, leveraging MLLMs' inherent self-detection mechanisms to pinpoint diverse hallucinations encounters significant hurdles. We further develop a tool-augmented framework for unified hallucination detection, named UNIHD, which integrates evidence from multiple auxiliary tools through the following procedure: (1) Essential Claim Extraction involves extracting the core claims within the generated response for image-to-text generation or user queries in text-to-image generation; (2) Autonomous Tool Selection via Query Formulation prompts MLLMs (GPT-4/Gemini) to autonomously generate pertinent questions for each claim. These questions are crafted to determine the specific type of tool required for each claim and to establish the input for the tool's operation; (3) Parallel Tool Execution deploys a suite of specialized tools to operate concurrently, providing evidence from their outputs to reliably validate potential hallucinations; (4) Hallucination Verification with Rationales aggregates the collected evidence to
|
| 33 |
+
|
| 34 |
+
instruct the underlying MLLM to judge whether the claim is hallucinatory, providing rationales for explanation.
|
| 35 |
+
|
| 36 |
+
We have conducted a thorough evaluation of the UNIHD framework with the underlying MLLMs on the MHaluBench benchmark. Our findings underscore the effectiveness of our approach and confirm that multimodal hallucination detection remains a formidable challenge. In a nutshell, we summarize our contributions as follows:
|
| 37 |
+
|
| 38 |
+
- We propose a more unified problem setting for hallucination detection in MLLMs, encompassing a broad spectrum of multimodal tasks and hallucination categories, thus enriching the unified understanding of hallucination in MLLMs.
|
| 39 |
+
- We unveil MHaluBench, a meta-evaluation benchmark that encompasses various hallucination categories and multimodal tasks. This benchmark is equipped with fine-grained analytical features, gauging the progress of hallucination detectors.
|
| 40 |
+
- We introduce UNIHD, a task-agnostic, tool-enhanced framework for the detection of hallucinations in content produced by MLLMs. Our extensive experiments demonstrate the efficacy of this method, underscoring that MHaluBench continues to be a challenging yet vital task.
|
| 41 |
+
|
| 42 |
+

|
| 43 |
+
Figure 2: Unified multimodal hallucination detection.
|
| 44 |
+
|
| 45 |
+
# 2 Preliminaries
|
| 46 |
+
|
| 47 |
+
We explore a unified perspective on hallucination in MLLMs (illustrated in Figure 2) with the aspiration of developing a unified detection framework.
|
| 48 |
+
|
| 49 |
+
Unified View of Multimodal Hallucination Taxonomy. A prerequisite for unified detection is a coherent categorization of the principal types of hallucinations within MLLMs. Our paper briefly examines the following hallucination taxonomy from a unified perspective:
|
| 50 |
+
|
| 51 |
+
- Modality-Conflicting Hallucination. MLLMs sometimes generate outputs that conflict with inputs from other modalities, leading to issues
|
| 52 |
+
|
| 53 |
+
<table><tr><td rowspan="2">Datasets</td><td rowspan="2">Response Generated by</td><td rowspan="2">Purpose</td><td rowspan="2">Granularity</td><td colspan="4">Hallucination Types</td><td rowspan="2">Modality</td><td rowspan="2">Scenario Task</td></tr><tr><td>Object</td><td>Attribute</td><td>Scene Text</td><td>Fact</td></tr><tr><td>FactCC (Kryscinski et al., 2020)</td><td>Synthetic</td><td>Check.</td><td>Sentence</td><td></td><td></td><td></td><td>✓</td><td>Text</td><td>Text2Text</td></tr><tr><td>QAGS (Wang et al., 2020)</td><td>Model</td><td>Check.</td><td>Summary</td><td></td><td></td><td></td><td>✓</td><td>Text</td><td>Text2Text</td></tr><tr><td>HaluEval (Li et al., 2023a)</td><td>ChatGPT</td><td>Det.</td><td>Response</td><td></td><td></td><td></td><td>✓</td><td>Text</td><td>Text2Text</td></tr><tr><td>POPE (Li et al., 2023b)</td><td>-</td><td>Eval.</td><td>Response</td><td>✓</td><td></td><td></td><td></td><td>Multi.</td><td>Image2Text</td></tr><tr><td>HaELM (Wang et al., 2023c)</td><td>-</td><td>Det.</td><td>Response</td><td></td><td></td><td></td><td></td><td>Multi.</td><td>Image2Text</td></tr><tr><td>AMBER (Wang et al., 2023b)</td><td>-</td><td>Eval.</td><td>Response</td><td>✓</td><td>✓</td><td></td><td></td><td>Multi.</td><td>Image2Text</td></tr><tr><td>MHaluBench (Ours)</td><td>MMLMs</td><td>Det.</td><td>Res.,Seg.,Claim</td><td>✓</td><td>✓</td><td>✓</td><td>✓</td><td>Multi.</td><td>Image2Text/Text2Image</td></tr></table>
|
| 54 |
+
|
| 55 |
+
Table 1: A comparison of benchmarks w.r.t existing fact-checking or hallucination evaluation. "Check." indicates verifying factual consistency, "Eval." denotes evaluating hallucinations generated by different LLMs, and its response is based on different LLMs under test, while "Det." embodies the evaluation of a detector's capability in identifying hallucinations.
|
| 56 |
+
|
| 57 |
+
such as incorrect objects, attributes, or scene text. An example in Figure 1 (a) includes an MLLM inaccurately describing an athlete's uniform color, showcasing an attribute-level conflict due to MLLMs' limited ability to achieve fine-grained text-image alignment.
|
| 58 |
+
|
| 59 |
+
- Fact-Conflicting Hallucination. Outputs from MLLMs may contradict established factual knowledge. Image-to-text models can generate narratives that stray from the actual content by incorporating irrelevant facts, while text-to-image models may produce visuals that fail to reflect the factual knowledge contained in text prompts. These discrepancies underline the struggle of MLLMs to maintain factual consistency, representing a significant challenge in the domain.
|
| 60 |
+
|
| 61 |
+
Unified Detection Problem Formulation. Unified detection of multimodal hallucination necessitates checking each image-text pair $a = \{v, x\}$, wherein $v$ denotes either the visual input provided to an MLLM or the visual output synthesized by it. Correspondingly, $x$ signifies either the MLLM's generated textual response based on $v$ or the textual user query for synthesizing $v$. Within this task, each $x$ may contain multiple claims, denoted as $\{c_i\}_{i=1\dots n}$. The objective for hallucination detectors is to assess each claim from $a$ to determine whether it is "hallucinatory" or "non-hallucinatory", providing a rationale for their judgments based on the provided definition of hallucination. Text hallucination detection from LLMs denotes a sub-case in this setting, where $v$ is null.
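A minimal sketch of this formulation as data structures, using assumed field names rather than the benchmark's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Claim:
    text: str                       # a single verifiable claim c_i extracted from x
    label: Optional[str] = None     # "hallucinatory" or "non-hallucinatory"
    rationale: Optional[str] = None

@dataclass
class ImageTextPair:
    image: Optional[str]            # v: input image path/URL, or the synthesized image; None for the text-only sub-case
    text: str                       # x: the generated response, or the user query used to synthesize v
    claims: List[Claim] = field(default_factory=list)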
|
| 62 |
+
|
| 63 |
+
# 3 Construction of MHaluBench
|
| 64 |
+
|
| 65 |
+
To facilitate research in this area, we introduce the meta-evaluation benchmark MHaluBench, which encompasses the content from image-to-text and text-to-image generation, aiming to rigorously assess the advancements in multimodal hallucination detectors. Our benchmark has been
|
| 66 |
+
|
| 67 |
+
meticulously curated to include a balanced distribution of instances across three pivotal tasks, encompassing 200 exemplars for IC, 200 for VQA, and an additional 220 dedicated to text-to-image generation. The comparison of MHaluBench with other benchmarks is detailed in Table 1, and the statistical details are provided in Figure 3 and Figure 4.
|
| 68 |
+
|
| 69 |
+
# 3.1 Hallucinatory Example Collection
|
| 70 |
+
|
| 71 |
+
Image-to-Text Generation. We focus on IC and VQA tasks, drawing samples from the MS-COCO 2014 validation set (Lin et al., 2014) and the TextVQA test set (Singh et al., 2019). We compile generative outputs from mPLUG (Ye et al., 2023), LLaVA (Liu et al., 2023c), and MiniGPT-4 (Zhu et al., 2023) to form the core dataset for MHaluBench. These models are representative of current leading MLLMs, characterized by their diverse content generation capabilities and a notable presence of hallucinations, as depicted in Figure 8.
|
| 72 |
+
|
| 73 |
+
Text-to-Image Generation. We source initial captions from DrawBench (Saharia et al., 2022) and T2I-CompBench (Huang et al., 2023a). These captions are augmented through ChatGPT to include more specific information such as objects, attributes, and factual details, among others. The refined caption guides the DALL-E 2 (Ramesh et al., 2022) and DALL-E 3 model (Betker et al., 2023) in producing visually detailed images.
|
| 74 |
+
|
| 75 |
+
# 3.2 Segment and Claim Extraction
|
| 76 |
+
|
| 77 |
+
Beyond evaluating overall responses, we introduce segmentation at both the segment and claim levels for a multi-granular assessment of hallucinations, enabling more precise feedback to improve model performance (Lightman et al., 2023). We leverage ChatGPT's advanced instruction-following ability to extract detailed segments and related claims. For image-to-text tasks, we split and extract the model's textual output into segments and claims;
|
| 78 |
+
|
| 79 |
+

|
| 80 |
+
Figure 3: Claim-Level data statistics of MHaluBench. The claims are fine-grained atoms extracted from the complete "Query-Response" pairs.
|
| 81 |
+
|
| 82 |
+
for text-to-image cases, we break down user queries into fundamental intent concepts, which are subsequently regarded as claims.
|
| 83 |
+
|
| 84 |
+
# 3.3 Human Annotation and Agreement.
|
| 85 |
+
|
| 86 |
+
Our annotation criteria evaluate whether image-to-text output conflicts with the input image or world knowledge and whether text-to-image visuals conflict with claims or world knowledge. Extracted claims are labeled as hallucinatory or non-hallucinatory, with a segment deemed hallucinatory if it contains any such claim; otherwise, it is labeled non-hallucinatory. An entire response is labeled hallucinatory if it includes even one hallucinatory segment. We allocate the dataset uniformly across three annotators with graduate-level qualifications for independent categorization. Decisions in uncertain cases were initially held by individual annotators and later resolved by majority rule. Inter-annotator reliability, measured by Fleiss's Kappa $(\kappa)$ , shows significant agreement $(\kappa = 0.822)$ over the full annotated dataset, indicating a high level of concordance within the range $0.80 \leq \kappa \leq 1.00$ .
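For illustration, the reported agreement statistic can be reproduced from per-claim annotator labels with Fleiss' kappa, e.g. via statsmodels; the labels below are toy values, not MHaluBench annotations.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows: annotated claims; columns: the three annotators;
# values: 0 = non-hallucinatory, 1 = hallucinatory (toy data).
labels = np.array([
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
    [1, 1, 1],
])

table, _ = aggregate_raters(labels)          # per-item counts for each category
print(fleiss_kappa(table, method="fleiss"))  # agreement across the three annotators
```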
|
| 87 |
+
|
| 88 |
+
# 4 UNIHD: Unified Hallucination Detection Framework for MLLMs
|
| 89 |
+
|
| 90 |
+
We present UNIHD in Figure 5 and describe its components as follows. The specific prompts are listed in Appendix A.
|
| 91 |
+
|
| 92 |
+
# 4.1 Essential Claim Extraction
|
| 93 |
+
|
| 94 |
+
To identify fine-grained hallucinations within the response, claim extraction is a prerequisite. Following the procedure in §3.2, we employ the advanced instruction-following abilities of MLLMs for efficient claim extraction. Specifically, GPT-4V/Gemini is adopted as the base LLM to efficiently derive verifiable claims from the outputs of image-to-text models (extracting each response
|
| 95 |
+
|
| 96 |
+

|
| 97 |
+
Figure 4: Distribution of hallucination categories within hallucination-labeled claims of MHaluBench.
|
| 98 |
+
|
| 99 |
+
into individual claims) and text-to-image models (deconstructing user queries into distinct claims) ${}^{2}$ .
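A minimal sketch of how this claim-extraction step could be wrapped around an MLLM call; the prompt wording and the `call_mllm` helper are assumptions for illustration, not the actual prompts in Appendix A.

```python
import json

CLAIM_EXTRACTION_PROMPT = (
    "You are given a model response (image-to-text) or a user query (text-to-image).\n"
    "Decompose it into a list of short, independently verifiable claims.\n"
    "Return a JSON list of strings.\n\nText:\n{text}"
)

def extract_claims(text: str, call_mllm) -> list[str]:
    """call_mllm is any function mapping a prompt string to the model's text output."""
    raw = call_mllm(CLAIM_EXTRACTION_PROMPT.format(text=text))
    return json.loads(raw)  # e.g. ["The athlete on the right wears a red uniform.", ...]
```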
|
| 100 |
+
|
| 101 |
+
# 4.2 Autonomous Tool Selection Via Query Formulation
|
| 102 |
+
|
| 103 |
+
After extracting essential claims from the input image-text pair $a = \{v, x\}$, the challenge of hallucination detection is to aptly match each claim with appropriate aspect-oriented tools. We approach this issue by assessing whether the underlying MLLMs can generate pertinent queries for a given set of claims $\{c_i\}_{i=1\dots n}$ to provide relevant input to the specific aspect-oriented tool. To facilitate this, we prompt underlying MLLMs like GPT-4V/Gemini to autonomously formulate meaningful queries. Demonstrated in Figure 5, this module yields custom queries for each claim, or "none" when a tool is unnecessary. For example, the framework determines that claim1 calls for the attribute-oriented question "What color is the uniform of the athlete on the right side?" and the object-oriented inquiry "['athlete', 'uniform']", bypassing the need for scene-text and fact-oriented tools.
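Conceptually, the output of this step is a small routing record per claim; the field names below are illustrative assumptions rather than the framework's exact format.

```python
# Illustrative query-formulation result for one claim (field names are assumptions):
query_plan = {
    "claim": "The uniform of the athlete on the right is red.",
    "object_query": ["athlete", "uniform"],   # input for the object-oriented tool
    "attribute_query": "What color is the uniform of the athlete on the right side?",
    "scene_text_query": "none",                # this tool is unnecessary for the claim
    "fact_query": "none",
}
```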
|
| 104 |
+
|
| 105 |
+
# 4.3 Parallel Tool Execution
|
| 106 |
+
|
| 107 |
+
Leveraging queries autonomously generated from various perspectives, we simultaneously deploy these tools in response to the queries, gathering a comprehensive array of insights to underpin the verification of hallucinations. The specific tools employed in our framework are detailed below, selected for their ability to effectively address a wide range of multimodal hallucination scenarios:
|
| 108 |
+
|
| 109 |
+
- Object-Oriented Tool: We employ the open-set object detection model Grounding DINO (Liu et al., 2023d) for capturing visual object information, crucial for detecting object-level hallucinations. For instance, inputting "['athlete', 'uniform']" prompts the model to return two
|
| 110 |
+
|
| 111 |
+

|
| 112 |
+
Figure 5: The specific illustration of UNIHD for unified multimodal hallucination detection.
|
| 113 |
+
|
| 114 |
+
uniform objects and two athlete objects, along with their normalized location coordinates.
|
| 115 |
+
|
| 116 |
+
- Attribute-Oriented Tool: Dealing with attributes such as positions, colors, and actions, we harness underlying MLLMs (such as GPT-4V and Gemini) to answer the specific attribute-level questions. These responses are leveraged for hallucination verification within the same MLLMs, mirroring a self-reflection process akin to Shinn et al. (2023).
|
| 117 |
+
- Scene-Text-Oriented Tool: Should the generated questions for scene text not be exclusively "none", we then invoke MAERec (Jiang et al., 2023) as our scene-text detection tool, which is capable of identifying scene text within images along with their corresponding normalized four-dimensional coordinates.
|
| 118 |
+
- Fact-Oriented Tool: To validate conflicting factual hallucinations, we harness the Serper Google Search API to perform web searches using specific fact-based questions. By extracting and scrutinizing the top results, we obtain a range of snippets from the API's responses for analysis.
|
| 119 |
+
|
| 120 |
+
Moreover, UNIHD is tool-agnostic, facilitating the seamless integration of emerging tools and detection strategies to amass tool knowledge, thereby bolstering the process of hallucination verification.
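One plausible way to realize the parallel execution is to dispatch the per-claim tool queries concurrently and gather their evidence; in this sketch the tool callables (Grounding DINO, attribute QA, MAERec, web search) are placeholders, not actual integrations.

```python
from concurrent.futures import ThreadPoolExecutor

def run_tools_in_parallel(query_plan, tools):
    """query_plan: dict like the routing record above; tools: dict of query field -> callable."""
    jobs = {}
    with ThreadPoolExecutor() as pool:
        for name, query in query_plan.items():
            if name == "claim" or query == "none":
                continue                      # skip fields without a query for this claim
            jobs[name] = pool.submit(tools[name], query)
        return {name: job.result() for name, job in jobs.items()}

# Usage (the tool callables are stand-ins for the real detectors/APIs):
# evidence = run_tools_in_parallel(query_plan, {
#     "object_query": grounding_dino_detect,
#     "attribute_query": attribute_vqa,
#     "scene_text_query": maerec_ocr,
#     "fact_query": web_search,
# })
```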
|
| 121 |
+
|
| 122 |
+
# 4.4 Hallucination Verification with Rationales
|
| 123 |
+
|
| 124 |
+
In the concluding phase of our process, we subject each claim, denoted as $c_{i}$ , to a binary prediction to ascertain its hallucinatory status. Claims are categorized as either HALLUCINATORY or NON-HALLUCINATORY based on the level of evidence support. To accomplish this, we aggregate the collected evidence from tools with the original image and its corresponding claim list into a comprehensive prompt. Subsequently, we instruct our chosen MLLM (GPT-4V or Gemini) to assess each claim's hallucinatory potential. In doing so, the MLLM also generates insightful explanations to elucidate the rationale behind its judgment.
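A minimal sketch of this verification step, assembling the claims and tool evidence into a single prompt and parsing the MLLM's per-claim verdicts; the prompt text and the `call_mllm` helper are again assumptions for illustration.

```python
import json

VERIFY_PROMPT = (
    "Given the image, the claims, and the tool evidence below, label each claim as\n"
    "'hallucinatory' or 'non-hallucinatory' and give a short rationale.\n"
    "Return a JSON list of objects with keys 'claim', 'label', and 'reason'.\n\n"
    "Claims:\n{claims}\n\nEvidence:\n{evidence}"
)

def verify_claims(claims, evidence, call_mllm):
    """call_mllm: any function mapping a (multimodal) prompt to the model's text output."""
    prompt = VERIFY_PROMPT.format(claims=json.dumps(claims, indent=2),
                                  evidence=json.dumps(evidence, indent=2))
    return json.loads(call_mllm(prompt))   # per-claim verdicts with rationales
```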
|
| 125 |
+
|
| 126 |
+
# 5 Experiment
|
| 127 |
+
|
| 128 |
+
# 5.1 Experimental Settings
|
| 129 |
+
|
| 130 |
+
Baselines. We compare UNIHD on MHaluBench with two baselines, Self-Check (2-shot) and Self-Check (0-shot) based on
|
| 131 |
+
|
| 132 |
+
<table><tr><td rowspan="2">Tasks</td><td rowspan="2">LLMs</td><td rowspan="2">Methods</td><td rowspan="2">Levels</td><td colspan="3">Hallucinatory</td><td colspan="3">Non-Hallucinatory</td><td colspan="4">Average</td></tr><tr><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>Acc.</td><td>P</td><td>R</td><td>Mac.F1</td></tr><tr><td rowspan="12">Image-to-Text</td><td rowspan="6">Gemini</td><td rowspan="2">Self-Check (0-shot)</td><td>Claim</td><td>83.17</td><td>42.15</td><td>55.95</td><td>55.64</td><td>89.48</td><td>68.61</td><td>63.34</td><td>69.41</td><td>65.82</td><td>62.28</td></tr><tr><td>Segment</td><td>89.30</td><td>47.71</td><td>62.19</td><td>43.76</td><td>87.68</td><td>58.38</td><td>60.38</td><td>66.53</td><td>67.69</td><td>60.29</td></tr><tr><td rowspan="2">Self-Check (2-shot)</td><td>Claim</td><td>84.24</td><td>66.75</td><td>74.48</td><td>67.35</td><td>84.60</td><td>75.00</td><td>74.74</td><td>75.80</td><td>75.68</td><td>74.74</td></tr><tr><td>Segment</td><td>90.44</td><td>71.08</td><td>79.60</td><td>57.35</td><td>83.80</td><td>68.10</td><td>75.11</td><td>73.89</td><td>77.44</td><td>73.85</td></tr><tr><td rowspan="2">UNIHD</td><td>Claim</td><td>84.44</td><td>72.44</td><td>77.98</td><td>71.08</td><td>83.54</td><td>76.80</td><td>77.41</td><td>77.76</td><td>77.99</td><td>77.39</td></tr><tr><td>Segment</td><td>88.77</td><td>78.76</td><td>83.46</td><td>63.17</td><td>78.52</td><td>70.02</td><td>78.68</td><td>75.97</td><td>78.64</td><td>76.74</td></tr><tr><td rowspan="6">GPT-4v</td><td rowspan="2">Self-Check (0-shot)</td><td>Claim</td><td>79.37</td><td>74.17</td><td>76.68</td><td>70.52</td><td>76.22</td><td>73.26</td><td>75.09</td><td>74.94</td><td>75.19</td><td>74.97</td></tr><tr><td>Segment</td><td>84.78</td><td>80.07</td><td>82.35</td><td>61.64</td><td>69.01</td><td>65.12</td><td>76.56</td><td>73.21</td><td>74.54</td><td>73.73</td></tr><tr><td rowspan="2">Self-Check (2-shot)</td><td>Claim</td><td>82.00</td><td>79.98</td><td>80.98</td><td>76.04</td><td>78.35</td><td>77.18</td><td>79.25</td><td>79.02</td><td>79.16</td><td>79.08</td></tr><tr><td>Segment</td><td>86.54</td><td>85.13</td><td>85.83</td><td>69.05</td><td>71.48</td><td>70.24</td><td>80.80</td><td>77.80</td><td>78.30</td><td>78.04</td></tr><tr><td rowspan="2">UNIHD</td><td>Claim</td><td>82.54</td><td>85.29</td><td>83.89</td><td>81.08</td><td>77.74</td><td>79.38</td><td>81.91</td><td>81.81</td><td>81.52</td><td>81.63</td></tr><tr><td>Segment</td><td>87.03</td><td>91.01</td><td>88.98</td><td>78.52</td><td>70.77</td><td>74.44</td><td>84.60</td><td>82.77</td><td>80.89</td><td>81.71</td></tr><tr><td rowspan="12">Text-to-Image</td><td rowspan="6">Gemini</td><td rowspan="2">Self-Check (0-shot)</td><td>Claim</td><td>73.85</td><td>24.62</td><td>36.92</td><td>55.45</td><td>91.50</td><td>69.06</td><td>58.48</td><td>64.65</td><td>58.06</td><td>52.99</td></tr><tr><td>Segment</td><td>87.27</td><td>30.00</td><td>44.65</td><td>32.53</td><td>88.52</td><td>47.58</td><td>46.15</td><td>59.90</td><td>59.26</td><td>46.11</td></tr><tr><td rowspan="2">Self-Check (2-shot)</td><td>Claim</td><td>85.37</td><td>53.85</td><td>66.04</td><td>66.91</td><td>91.00</td><td>77.12</td><td>72.66</td><td>76.14</td><td>72.42</td><td>71.58</td></tr><tr><td>Segment</td><td>91.67</td><td>61.88</td><td>73.88</td><td>46.02</td><td>85.25</td><td>59.77</td><td>68.33</td><td>68.84</td><td>73.56</td><td>66.83</td></tr><tr><td 
rowspan="2">UNIHD</td><td>Claim</td><td>85.71</td><td>61.54</td><td>71.64</td><td>70.59</td><td>90.00</td><td>79.12</td><td>75.95</td><td>78.15</td><td>75.77</td><td>75.38</td></tr><tr><td>Segment</td><td>93.28</td><td>69.37</td><td>79.57</td><td>51.96</td><td>86.89</td><td>65.03</td><td>74.21</td><td>72.62</td><td>78.13</td><td>72.30</td></tr><tr><td rowspan="6">GPT-4v</td><td rowspan="2">Self-Check (0-shot)</td><td>Claim</td><td>88.55</td><td>59.49</td><td>71.17</td><td>70.08</td><td>92.50</td><td>79.74</td><td>76.20</td><td>79.31</td><td>75.99</td><td>75.45</td></tr><tr><td>Segment</td><td>93.69</td><td>65.00</td><td>76.75</td><td>49.09</td><td>88.52</td><td>63.16</td><td>71.49</td><td>71.39</td><td>76.76</td><td>69.96</td></tr><tr><td rowspan="2">Self-Check (2-shot)</td><td>Claim</td><td>84.39</td><td>74.87</td><td>79.35</td><td>77.93</td><td>86.50</td><td>81.99</td><td>80.76</td><td>81.16</td><td>80.69</td><td>80.67</td></tr><tr><td>Segment</td><td>89.63</td><td>75.62</td><td>82.03</td><td>54.65</td><td>77.05</td><td>63.95</td><td>76.02</td><td>72.14</td><td>76.34</td><td>72.99</td></tr><tr><td rowspan="2">UNIHD</td><td>Claim</td><td>84.92</td><td>86.67</td><td>85.79</td><td>86.73</td><td>85.00</td><td>85.86</td><td>85.82</td><td>85.83</td><td>85.83</td><td>85.82</td></tr><tr><td>Segment</td><td>91.25</td><td>91.25</td><td>91.25</td><td>77.05</td><td>77.05</td><td>77.05</td><td>87.33</td><td>84.15</td><td>84.15</td><td>84.15</td></tr></table>
|
| 133 |
+
|
| 134 |
+
Table 2: Experimental results of UNIHD powered by Gemini and GPT-4V on Image-to-Text and Text-to-Image Generation. The default F1 score is Micro-F1, whereas Mac.F1 represents the Macro-F1 score.
|
| 135 |
+
|
| 136 |
+
CoT (Wei et al., 2022), which assess the capability of the underlying MLLM to identify hallucinations without external knowledge and have shown effectiveness across various other tasks (Chern et al., 2023; Xie et al., 2023). We prompt GPT-4V (gpt-4-vision-preview) and Gemini (Pro Vision) to recognize fine-grained hallucinations and explain the reasoning behind each determination.
|
| 137 |
+
|
| 138 |
+
Evaluation Perspective. We compute the recall, precision, and Micro-F1 metrics individually for both hallucinatory and non-hallucinatory categories. Additionally, we assess the overall performance by measuring the average Macro-F1 scores at the claim and segment levels. We categorize a segment as non-hallucinatory only if all associated claims are classified as non-hallucinatory; it is deemed hallucinatory if any associated claims do not meet this criterion.
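The segment-level labeling rule and the reported metrics can be computed as in the following sketch, assuming scikit-learn is available; it illustrates the evaluation protocol rather than reproducing the benchmark's official scoring script.

```python
from sklearn.metrics import precision_recall_fscore_support, accuracy_score

def segment_label(claim_labels):
    """A segment is hallucinatory if any of its claims is hallucinatory."""
    return "hallucinatory" if "hallucinatory" in claim_labels else "non-hallucinatory"

def report(y_true, y_pred):
    """Per-class P/R/F1 plus accuracy and Macro-F1 over the two classes."""
    classes = ["hallucinatory", "non-hallucinatory"]
    acc = accuracy_score(y_true, y_pred)
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, labels=classes, zero_division=0)
    return {"accuracy": acc,
            "per_class_f1": dict(zip(classes, f1)),
            "macro_f1": f1.mean()}
```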
|
| 139 |
+
|
| 140 |
+
# 5.2 Evaluation Results
|
| 141 |
+
|
| 142 |
+
MHaluBench poses a challenging benchmark for multimodal hallucination detection. The claim-level and segment-level outcomes are presented in Table 2. Even though all hallucinatory instances in MHaluBench are obtained from open-source MLLMs' outputs rather than being generated by GPT-4V/Gemini itself, it is noteworthy that the majority of detectors achieve an overall
|
| 143 |
+
|
| 144 |
+
Macro-F1 score ranging between 70%-80%, exhibiting subpar performance on MHaluBench.
|
| 145 |
+
|
| 146 |
+

|
| 147 |
+
Figure 6: Comparative analysis to examine the augmentation in detection capabilities provided by specific tools on samples labeled with hallucinations. The x-axis labels "O", "A", "S" and "F" refer to object, attribute, scene-text, and fact, respectively.
|
| 148 |
+
|
| 149 |
+
GPT-4V surpasses Gemini as the detector base. GPT-4V-powered detectors consistently outperform Gemini counterparts, achieving higher Macro-F1 scores, especially in the text-to-image generation. For instance, Self-Check (0-shot) using GPT-4V achieves a claim-level Macro-F1 of 72.82, significantly surpassing Gemini's Macro-F1 score of 52.98. However, Gemini-powered detectors exhibit better performance in non-hallucinatory categories
|
| 150 |
+
|
| 151 |
+
Figure 7: Case Study. The upper section depicts two exemplary cases where both UNIHD and Self-Check (2-shot) arrive at correct judgments, with a comparative demonstration of UNIHD providing explanations of superior reasonability. UNIHD (a) reveals a failure case where the tool presents erroneous evidence, leading to an incorrect verification outcome. Conversely, UNIHD (b) highlights a scenario where, despite the tool offering valid and correct evidence, GPT-4V persists in its original stance, resulting in a flawed verification. (Panels: an Image-to-Text example on the soda painted on a building; a comparison of explanation reasonability on an object-count claim about parked bikes; the failure cases UNIHD (a) and UNIHD (b).)
|
| 240 |
+
|
| 241 |
+
for image-to-text tasks, indicating a potential bias towards reduced sensitivity to hallucinations.
|
| 242 |
+
|
| 243 |
+
UNIHD Empowered by GPT-4V: Superior Detection Across the Board. Table 2 demonstrates that UNIHD, leveraging GPT-4V, consistently outperforms the other baseline detectors in both image-to-text and text-to-image tasks. Although Self-Check (2-shot) showcases the robust in-context learning of GPT-4V and Gemini, UNIHD markedly exceeds its performance, emphasizing the benefits of integrating external tools for more robust evidence verification and reliable hallucination detection.
|
| 244 |
+
|
| 245 |
+
# 5.3 Analysis
|
| 246 |
+
|
| 247 |
+
Which Type of Hallucination Can Benefit the Most from Tool Enhancement? Figure 6 shows that UNIHD enhances the detection of scene-text and factual hallucinations over Self-Check (2-shot), suggesting that the inherent limitations of GPT-4V and Gemini make the evidence provided by the tools especially valuable. However, UNIHD exhibits minimal improvement in identifying attribute-level hallucinations, potentially attributed to a
|
| 248 |
+
|
| 249 |
+
lack of specialized tools for direct attribute detection, with self-reflection methods based on GPT-4V/Gemini proving to be relatively weak.
|
| 250 |
+
|
| 251 |
+
Explanation Reasonability of UNIHD. As shown in the upper portion of Figure 7, both the fact-level hallucination "Fanta originated in America during World War." and the object-level hallucination "There are three bikes parked." are accurately identified by Self-Check (2-shot) and UNIHD. Comparative analysis reveals that UNIHD excels in synthesizing evidence to provide a more credible and compelling rationale.
|
| 252 |
+
|
| 253 |
+
Failure Analysis of UNIHD. As shown in the lower part of Figure 7, we present two instances where UNIHD exhibits limitations. The left case demonstrates situations where the tool either generates incorrect evidence or fails to provide useful information, leading to erroneous judgments by the MLLM. On the right, we observe cases where the MLLM maintains its initial bias despite receiving accurate evidence, resulting in incorrect decisions. These scenarios highlight areas for further research
|
| 254 |
+
|
| 255 |
+

|
| 256 |
+
Figure 8: Comparison of claim-level hallucination ratios across MLLMs. We randomly select a set of 20 prompts from MHaluBench for each of the IC, VQA, and T2I tasks. Responses for these prompts are generated by each of the evaluated MLLMs.
|
| 257 |
+
|
| 258 |
+
to enhance tool accuracy and to develop MLLMs dedicated to better hallucination detection.
|
| 259 |
+
|
| 260 |
+
Text-to-Image Hallucination vs. Image-to-Text Hallucination: Which is Easier to Detect? Both the baselines and the GPT-4V-enhanced UNIHD show significantly better performance in identifying hallucinations in text-to-image content than in image-to-text content. This can be traced back to the structured nature of the manually written user queries for text-to-image tasks, which yield more uniform images, while image-to-text generation confronts the complexity of natural images with background noise and MLLM-generated content characterized by greater diversity and fewer constraints. Consequently, it is intuitively easier to detect discrepancies between text and the corresponding images in text-to-image tasks.
|
| 261 |
+
|
| 262 |
+
Explore UNIHD to Evaluate Hallucination of Modern MLLMs. We designate UNIHD powered by GPT-4V as the golden detector to assess the frequency of hallucinations in MLLMs, including GPT-4V and Gemini, among others. The findings illustrated in Figure 8 indicate that (1) GPT-4V exhibits the lowest claim-level hallucination ratio across most tested conditions, and (2) the hallucination-based ranking of these MLLMs is generally in agreement with established leaderboards and human evaluation, demonstrating the potential of UNIHD for evaluating hallucinations.
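The claim-level hallucination ratio behind Figure 8 is simply the fraction of claims judged hallucinatory among all claims a model produced; a minimal sketch, assuming UNIHD verdicts keyed by claim id (names ours):

```python
# Claim-level hallucination ratio for one MLLM, given per-response UNIHD verdicts
# of the form {"claim1": "hallucination", "claim2": "non-hallucination", ...}.
def hallucination_ratio(verdicts_per_response):
    total = hallucinated = 0
    for verdicts in verdicts_per_response:
        for label in verdicts.values():
            total += 1
            hallucinated += int(label == "hallucination")
    return hallucinated / total if total else 0.0
```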
|
| 263 |
+
|
| 264 |
+
# 6 Related Work
|
| 265 |
+
|
| 266 |
+
# 6.1 Hallucinations in MLLM
|
| 267 |
+
|
| 268 |
+
The advent of MLLMs (OpenAI, 2023; Liu et al., 2023c; Ye et al., 2023; Zhu et al., 2023) has highlighted the issue of hallucination (Hu et al., 2024;
|
| 269 |
+
|
| 270 |
+
Zhang et al., 2023b; Huang et al., 2023b; Rawte et al., 2023; Ji et al., 2023), a crucial concern impacting their dependability. Previous research has primarily focused on three areas: evaluating (Li et al., 2023b; Liu et al., 2023a; Jing et al., 2023), detecting (Wang et al., 2023c; Yang et al., 2023; Yin et al., 2023), and mitigating hallucinations (Wan et al., 2024; Liu et al., 2023b; Huang et al., 2023c; Semnani et al., 2023; Zhao et al., 2024; Leng et al., 2023; Wang et al., 2024; Deng et al., 2024). In a complementary effort, HaELM (Wang et al., 2023c) scrutinizes the challenges associated with POPE (Li et al., 2023b) and suggests training a model based on simulated hallucination samples for detecting multimodal hallucinations. Diverging from prior efforts, this paper addresses a broader problem scope for hallucination detection, introducing a unified multimodal hallucination detection framework, UNIHD, along with meta-evaluation benchmarks, MHaluBench.
|
| 271 |
+
|
| 272 |
+
# 6.2 Harnessing Tool Resources for LLMs
|
| 273 |
+
|
| 274 |
+
Addressing the limitations of LLMs (Chen, 2023; Kang et al., 2024) due to their pre-training confinement, researchers have explored augmenting them with resources like knowledge bases, search engines, and external models to expand their functionality. Notably, Schick et al. (2023), Hao et al. (2023), and Qiao et al. (2023) have developed models that leverage external tools to improve performance in downstream tasks. More recently, Shen et al. (2023) and Liang et al. (2023) have unveiled frameworks integrating LLMs with diverse AI models to tackle complex challenges. Building on this, researchers (Peng et al., 2023; Chen et al., 2023) have examined the utilization of external knowledge to mitigate or evaluate hallucinations in LLMs. Adapting these enhancements for MLLMs introduces unique challenges, necessitating the selection of appropriate tools for effective oversight. Our research focuses on automating the selection of functionally diverse tools to enhance multimodal hallucination detection.
|
| 275 |
+
|
| 276 |
+
# 7 Conclusion
|
| 277 |
+
|
| 278 |
+
We introduce a unified problem formulation for multimodal hallucination detection that encompasses a diverse range of multimodal tasks and hallucination types. A fine-grained benchmark dataset, MHaluBench, is also proposed to promote this challenging direction. Alongside this, we
|
| 279 |
+
|
| 280 |
+
present the unified hallucination detection framework, UNIHD, capable of autonomously selecting external tools and capturing pertinent knowledge to support hallucination verification with rationales. Our experimental results indicate that UNIHD achieves better performance across both image-to-text and text-to-image generation tasks, confirming its universality and efficacy.
|
| 281 |
+
|
| 282 |
+
# Limitations
|
| 283 |
+
|
| 284 |
+
This paper focuses on constructing a unified hallucination detection framework for MLLMs, dubbed UNIHD. Despite our best efforts, this paper still has some limitations.
|
| 285 |
+
|
| 286 |
+
The Scope of Multimodal Tasks. This paper primarily addresses the detection of multimodal hallucinations from a unified perspective, with a focus on image-to-text tasks (such as Image Captioning and VQA) and text-to-image generation tasks. Nonetheless, it is important to recognize that our framework does not yet encompass other multimodal tasks, such as video captioning, which are also susceptible to hallucinations. Moving forward, we aim to explore the possibilities of incorporating these additional domains into UNIHD.
|
| 287 |
+
|
| 288 |
+
Limitations of Closed-Source MLLM Pricing and Inference Speed. Our UNIHD is primarily built upon powerful closed-source models as its foundation. However, closed-source models (Liu et al., 2023c; Zhu et al., 2023; Ye et al., 2023; Bai et al., 2023) often come with a cost, which introduces operational expenses. Additionally, UNIHD relies on several external tools to provide evidence for enhanced hallucination verification, resulting in additional inference time. In the future, we will further explore training open-source, dedicated hallucination detection models that make use of these tools, to further improve effectiveness and reduce costs.
|
| 289 |
+
|
| 290 |
+
The Scope of Hallucination Categories. In our commitment to developing a comprehensive hallucination detection framework, referred to as UNIHD, for MLLMs, we have made efforts to incorporate various prevalent hallucination categories within MHaluBench and UNIHD, including object, attribute, scene-text, and factual aspects, among others. However, it is important to acknowledge that there are additional categories of hallucinations that have not been covered in our framework, as discussed in the existing literature (Zhang et al., 2023b; Wang et al., 2023a; Mishra et al., 2024;
|
| 291 |
+
|
| 292 |
+
Huang et al., 2023b; Rawte et al., 2023). Moving forward, our research will expand its scope to adopt a unified approach towards a wider range of hallucination categories, to strengthen the robustness of our detection mechanisms.
|
| 293 |
+
|
| 294 |
+
Preliminary Attempts at Tool Utilization. In our early endeavors, we have configured a dedicated tool for detecting each specific type of hallucination, exemplified by the assignment of the Grounding DINO model as the object detection tool of choice. However, it should be acknowledged that the current selection of tools may not represent the optimum choice. It remains imperative to rigorously explore which SOTA object detection models are best suited for the task of multimodal hallucination detection. This necessitates an extensive evaluation of available models to pinpoint the most effective tool that aligns with the nuances and complexities of detection objectives.
|
| 295 |
+
|
| 296 |
+
# Acknowledgement
|
| 297 |
+
|
| 298 |
+
We are grateful for the API services provided by OpenAI and Google, which enabled us to process data and conduct some of our experiments. Part of the implementation of this work was assisted and inspired by related hallucination toolkits, including FactTool (Chern et al., 2023), Woodpecker (Yin et al., 2023), and others. We follow the same licenses for open-sourcing and thank them for their contributions to the community. This work also benefits from the public projects of mPLUG-Owl<sup>6</sup>, MiniGPT-4<sup>7</sup>, LLaVA<sup>8</sup>, GroundingDINO<sup>9</sup>, and MAERec<sup>10</sup>. This work was supported by the National Natural Science Foundation of China (No. 62206246, No. NSFCU23B2055, No. NSFCU19B2027), the Fundamental Research Funds for the Central Universities (226-2023-00138), Zhejiang Provincial Natural Science Foundation of China (No. LGG22F030011), Yongjiang Talent Introduction Programme (2021A-156-G), and the Information Technology Center and State Key Lab of CAD&CG, Zhejiang University. This work was also supported by Ant Group and the Zhejiang University - Ant Group Joint Laboratory of Knowledge Graph.
|
| 299 |
+
|
| 300 |
+
# References
|
| 301 |
+
|
| 302 |
+
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. 2023. Qwen-vl: A frontier large vision-language model with versatile abilities. CoRR, abs/2308.12966.
|
| 303 |
+
James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, Wesam Manassra, Prafulla Dhariwal, Casey Chu, Yunxin Jiao, and Aditya Ramesh. 2023. Improving image generation with better captions.
|
| 304 |
+
Huajun Chen. 2023. Large knowledge model: Perspectives and challenges. CoRR, abs/2312.02706.
|
| 305 |
+
Xiang Chen, Duanzheng Song, Honghao Gui, Chengxi Wang, Ningyu Zhang, Jiang Yong, Fei Huang, Chengfei Lv, Dan Zhang, and Huajun Chen. 2023. Factchd: Benchmarking fact-conflicting hallucination detection. CoRR, abs/2310.12086.
|
| 306 |
+
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, and Pengfei Liu. 2023. Factool: Factuality detection in generative AI - A tool augmented framework for multi-task and multi-domain scenarios. CoRR, abs/2307.13528.
|
| 307 |
+
Ailin Deng, Zhirui Chen, and Bryan Hooi. 2024. Seeing is believing: Mitigating hallucination in large vision-language models via clip-guided decoding. CoRR, abs/2402.15300.
|
| 308 |
+
Zane Durante, Qiuyuan Huang, Naoki Wake, Ran Gong, Jae Sung Park, Bidipta Sarkar, Rohan Taori, Yusuke Noda, Demetri Terzopoulos, Yejin Choi, Katsushi Ikeuchi, Hoi Vo, Li Fei-Fei, and Jianfeng Gao. 2024. Agent ai: Surveying the horizons of multimodal interaction.
|
| 309 |
+
Shibo Hao, Tianyang Liu, Zhen Wang, and Zhiting Hu. 2023. Toolkengpt: Augmenting frozen language models with massive tools via tool embeddings. NeurIPS 2023.
|
| 310 |
+
Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
|
| 311 |
+
Xuming Hu, Junzhe Chen, Xiaochuan Li, Yufei Guo, Lijie Wen, Philip S. Yu, and Zhijiang Guo. 2024. Do large language models know about facts? ICLR 2024.
|
| 312 |
+
Kaiyi Huang, Kaiyue Sun, Enze Xie, Zhenguo Li, and Xihui Liu. 2023a. T2i-compbench: A comprehensive benchmark for open-world compositional text-to-image generation. CoRR, abs/2307.06350.
|
| 313 |
+
|
| 314 |
+
Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang, Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, and Ting Liu. 2023b. A survey on hallucination in large language models: Principles, taxonomy, challenges, and open questions. CoRR, abs/2311.05232.
|
| 315 |
+
Qidong Huang, Xiaoyi Dong, Pan Zhang, Bin Wang, Conghui He, Jiaqi Wang, Dahua Lin, Weiming Zhang, and Nenghai Yu. 2023c. OPERA: alleviating hallucination in multi-modal large language models via over-trust penalty and retrospection-allocation. CoRR, abs/2311.17911.
|
| 316 |
+
Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. ACM Comput. Surv., 55(12).
|
| 317 |
+
Qing Jiang, Jiapeng Wang, Dezhi Peng, Chongyu Liu, and Lianwen Jin. 2023. Revisiting scene text recognition: A data perspective. In Proceedings of the IEEE/CVF international conference on computer vision.
|
| 318 |
+
Liqiang Jing, Ruosen Li, Yunmo Chen, Mengzhao Jia, and Xinya Du. 2023. FAITHSCORE: evaluating hallucinations in large vision-language models. CoRR, abs/2311.01477.
|
| 319 |
+
Mintong Kang, Nezihe Merve Gürel, Ning Yu, Dawn Song, and Bo Li. 2024. C-RAG: certified generation risks for retrieval-augmented language models. CoRR, abs/2402.03181.
|
| 320 |
+
Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332-9346, Online. Association for Computational Linguistics.
|
| 321 |
+
Sicong Leng, Hang Zhang, Guanzheng Chen, Xin Li, Shijian Lu, Chunyan Miao, and Lidong Bing. 2023. Mitigating object hallucinations in large vision-language models through visual contrastive decoding. CoRR, abs/2311.16922.
|
| 322 |
+
Junyi Li, Xiaoxue Cheng, Xin Zhao, Jian-Yun Nie, and Ji-Rong Wen. 2023a. Halueval: A large-scale hallucination evaluation benchmark for large language models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages 6449-6464. Association for Computational Linguistics.
|
| 323 |
+
Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen. 2023b. Evaluating object hallucination in large vision-language models. EMNLP.
|
| 324 |
+
Yaobo Liang, Chenfei Wu, Ting Song, Wenshan Wu, Yan Xia, Yu Liu, Yang Ou, Shuai Lu, Lei Ji,
|
| 325 |
+
|
| 326 |
+
Shaoguang Mao, Yun Wang, Linjun Shou, Ming Gong, and Nan Duan. 2023. Taskmatrix.ai: Completing tasks by connecting foundation models with millions of apis. CoRR, abs/2303.16434.
|
| 327 |
+
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Let's verify step by step.
|
| 328 |
+
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In ECCV.
|
| 329 |
+
Fuxiao Liu, Tianrui Guan, Zongxia Li, Lichang Chen, Yaser Yacoob, Dinesh Manocha, and Tianyi Zhou. 2023a. Hallusionbench: You see what you think? or you think what you see? an image-context reasoning benchmark challenging for gpt-4v(ison), llava-1.5, and other multi-modality models. CoRR, abs/2310.14566.
|
| 330 |
+
Fuxiao Liu, Kevin Lin, Linjie Li, Jianfeng Wang, Yaser Yacoob, and Lijuan Wang. 2023b. Aligning large multi-modal model with robust instruction tuning. CoRR, abs/2306.14565.
|
| 331 |
+
Hanchao Liu, Wenyuan Xue, Yifei Chen, Dapeng Chen, Xiutian Zhao, Ke Wang, Liping Hou, Rongjun Li, and Wei Peng. 2024. A survey on hallucination in large vision-language models.
|
| 332 |
+
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. 2023c. Visual instruction tuning. CoRR, abs/2304.08485.
|
| 333 |
+
Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, and Lei Zhang. 2023d. Grounding DINO: marrying DINO with grounded pre-training for open-set object detection. CoRR, abs/2303.05499.
|
| 334 |
+
Abhika Mishra, Akari Asai, Vidhisha Balachandran, Yizhong Wang, Graham Neubig, Yulia Tsvetkov, and Hannaneh Hajishirzi. 2024. Fine-grained hallucination detection and editing for language models.
|
| 335 |
+
OpenAI. 2023. Gpt-4 technical report. OpenAI.
|
| 336 |
+
Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars Liden, Zhou Yu, Weizhu Chen, and Jianfeng Gao. 2023. Check your facts and try again: Improving large language models with external knowledge and automated feedback. CoRR, abs/2302.12813.
|
| 337 |
+
Shuofei Qiao, Honghao Gui, Huajun Chen, and Ningyu Zhang. 2023. Making language models better tool learners with execution feedback. CoRR, abs/2305.13068.
|
| 338 |
+
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. 2022. Hierarchical text-conditional image generation with CLIP latents. CoRR, abs/2204.06125.
|
| 339 |
+
|
| 340 |
+
Vipula Rawte, Amit P. Sheth, and Amitava Das. 2023. A survey of hallucination in large foundation models. CoRR, abs/2309.05922.
|
| 341 |
+
Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L. Denton, Seyed Kamyar Seyed Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, Jonathan Ho, David J. Fleet, and Mohammad Norouzi. 2022. Photorealistic text-to-image diffusion models with deep language understanding. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022.
|
| 342 |
+
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. NeurIPS 2023.
|
| 343 |
+
Sina J. Semnani, Violet Z. Yao, Heidi C. Zhang, and Monica S. Lam. 2023. Wikichat: Stopping the hallucination of large language model chatbots by few-shot grounding on wikipedia.
|
| 344 |
+
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. Hugginggpt: Solving AI tasks with chatgpt and its friends in huggingface. NeurIPS 2023.
|
| 345 |
+
Noah Shinn, Federico Cassano, Edward Berman, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2023. Reflexion: Language agents with verbal reinforcement learning.
|
| 346 |
+
Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach. 2019. Towards VQA models that can read. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019, pages 8317-8326. Computer Vision Foundation / IEEE.
|
| 347 |
+
S. M Towhidul Islam Tonmoy, S M Mehedi Zaman, Vinija Jain, Anku Rani, Vipula Rawte, Aman Chadha, and Amitava Das. 2024. A comprehensive survey of hallucination mitigation techniques in large language models.
|
| 348 |
+
Fanqi Wan, Xinting Huang, Leyang Cui, Xiaojun Quan, Wei Bi, and Shuming Shi. 2024. Mitigating hallucinations of large language models via knowledge consistent alignment. CoRR, abs/2401.10768.
|
| 349 |
+
Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the factual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5008-5020, Online. Association for Computational Linguistics.
|
| 350 |
+
Cunxiang Wang, Xiaoze Liu, Yuanhao Yue, Xiangru Tang, Tianhang Zhang, Jiayang Cheng, Yunzhi Yao, Wenyang Gao, Xuming Hu, Zehan Qi, Yidong Wang,
|
| 351 |
+
|
| 352 |
+
Linyi Yang, Jindong Wang, Xing Xie, Zheng Zhang, and Yue Zhang. 2023a. Survey on factuality in large language models: Knowledge, retrieval and domain-specificity. CoRR, abs/2310.07521.
|
| 353 |
+
Junyang Wang, Yuhang Wang, Guohai Xu, Jing Zhang, Yukai Gu, Haitao Jia, Ming Yan, Ji Zhang, and Jitao Sang. 2023b. An llm-free multi-dimensional benchmark for mllms hallucination evaluation. CoRR, abs/2311.07397.
|
| 354 |
+
Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, and Haoyu Tang. 2023c. Evaluation and analysis of hallucination in large vision-language models. CoRR, abs/2308.15126.
|
| 355 |
+
Xintong Wang, Jingheng Pan, Liang Ding, and Chris Biemann. 2024. Mitigating hallucinations in large vision-language models with instruction contrastive decoding. CoRR, abs/2403.18715.
|
| 356 |
+
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le, and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In NeurIPS.
|
| 357 |
+
Junfei Wu, Qiang Liu, Ding Wang, Jinghao Zhang, Shu Wu, Liang Wang, and Tieniu Tan. 2024. Logical closed loop: Uncovering object hallucinations in large vision-language models. CoRR, abs/2402.11622.
|
| 358 |
+
Qiming Xie, Zengzhi Wang, Yi Feng, and Rui Xia. 2023. Ask again, then fail: Large language models' vacillations in judgement. CoRR, abs/2310.02174.
|
| 359 |
+
Shangyu Xing, Fei Zhao, Zhen Wu, Tuo An, Weihao Chen, Chunhui Li, Jianbing Zhang, and Xinyu Dai. 2024. EFUF: efficient fine-grained unlearning framework for mitigating hallucinations in multimodal large language models. CoRR, abs/2402.09801.
|
| 360 |
+
Xianjun Yang, Liangming Pan, Xuandong Zhao, Haifeng Chen, Linda R. Petzold, William Yang Wang, and Wei Cheng. 2023. A survey on detection of llms-generated content. CoRR, abs/2310.15654.
|
| 361 |
+
Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye, Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, Chenliang Li, Yuanhong Xu, Hehong Chen, Junfeng Tian, Qian Qi, Ji Zhang, and Fei Huang. 2023. mplug-owl: Modularization empowers large language models with multimodality. CoRR, abs/2304.14178.
|
| 362 |
+
Shukang Yin, Chaoyou Fu, Sirui Zhao, Tong Xu, Hao Wang, Dianbo Sui, Yunhang Shen, Ke Li, Xing Sun, and Enhong Chen. 2023. Woodpecker: Hallucination correction for multimodal large language models. CoRR, abs/2310.16045.
|
| 363 |
+
|
| 364 |
+
Bohan Zhai, Shijia Yang, Xiangchen Zhao, Chenfeng Xu, Sheng Shen, Dongdi Zhao, Kurt Keutzer, Manling Li, Tan Yan, and Xiangjun Fan. 2023. Halle-switch: Rethinking and controlling object existence hallucinations in large vision language models for detailed caption. CoRR, abs/2310.01779.
|
| 365 |
+
Yue Zhang, Leyang Cui, Wei Bi, and Shuming Shi. 2023a. Alleviating hallucinations of large language models through induced hallucinations. CoRR, abs/2312.15710.
|
| 366 |
+
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, Tingchen Fu, Xinting Huang, Enbo Zhao, Yu Zhang, Yulong Chen, Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, and Shuming Shi. 2023b. Siren's song in the AI ocean: A survey on hallucination in large language models. CoRR, abs/2309.01219.
|
| 367 |
+
Linxi Zhao, Yihe Deng, Weitong Zhang, and Quanquan Gu. 2024. Mitigating object hallucination in large vision-language models via classifier-free guidance. CoRR, abs/2402.08680.
|
| 368 |
+
Yiyang Zhou, Chenhang Cui, Jaehong Yoon, Linjun Zhang, Zhun Deng, Chelsea Finn, Mohit Bansal, and Huaxiu Yao. 2023. Analyzing and mitigating object hallucination in large vision-language models. CoRR, abs/2310.00754.
|
| 369 |
+
Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. Minigpt-4: Enhancing vision-language understanding with advanced large language models. CoRR, abs/2304.10592.
|
| 370 |
+
|
| 371 |
+
# A Prompt Templates
|
| 372 |
+
|
| 373 |
+
Within this section, we outline the prompt templates designed to guide the foundational MLLM in autonomous query formulation (illustrated in Tables 3-6) and in the verification of any hallucinated content (shown in Tables 7-8).
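At a high level, these templates slot into a loop of per-claim query formulation, tool execution to gather evidence, and evidence-grounded verification. The sketch below is a rough skeleton under our own naming; `mllm`, `tools`, and the prompt keys are placeholders for the corresponding calls, not the authors' code.

```python
# Placeholder skeleton of the query-formulation / verification loop that the
# appendix prompts implement; every name here is illustrative.
from dataclasses import dataclass

@dataclass
class Evidence:
    objects: str = "none information"
    attributes: str = "none information"
    scene_text: str = "none information"
    facts: str = "none information"

def detect_hallucinations(image, claims, mllm, tools, prompts):
    # Stage 1: aspect-specific query formulation per claim (Tables 3-6).
    queries = {aspect: mllm.formulate(prompts[aspect], claims)
               for aspect in ("object", "attribute", "scene_text", "fact")}
    # Stage 2: run the queries through the external tools to gather evidence.
    evidence = Evidence(
        objects=tools.detect_objects(image, queries["object"]),
        attributes=tools.answer_attribute_questions(image, queries["attribute"]),
        scene_text=tools.read_scene_text(image),
        facts=tools.search(queries["fact"]),
    )
    # Stage 3: evidence-grounded verification with rationales (Tables 7-8).
    return mllm.verify(prompts["verify"], image, claims, evidence)
```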
|
| 374 |
+
|
| 375 |
+
# SYSTEM:
|
| 376 |
+
|
| 377 |
+
You are a brilliant object extractor.
|
| 378 |
+
|
| 379 |
+
# USER:
|
| 380 |
+
|
| 381 |
+
Given a list of claim, extract the objects from each claim for me.
|
| 382 |
+
|
| 383 |
+
Extract the common objects and summarize them as general categories without repetition, merge essentially similar objects.
|
| 384 |
+
|
| 385 |
+
Avoid extracting hypernyms, keep hyponyms!
|
| 386 |
+
|
| 387 |
+
Avoid extracting abstract or non-specific objects.
|
| 388 |
+
|
| 389 |
+
Extract object in the singular form.
|
| 390 |
+
|
| 391 |
+
Output all the extracted types of items separate each object type with a period.
|
| 392 |
+
|
| 393 |
+
If there is nothing to output, then output a single "none".
|
| 394 |
+
|
| 395 |
+
YOU MUST TO DISREGARD OBJECT WORDS THAT ARE NOT NATURAL OBJECTS, SUCH AS SCENES, AREA, SKY, GROUND, WORDS, ATMOSPHERES, COUNTRIES, NAMES, AND PLACES. IF THERE ARE NO NATURAL objects IN THE SENTENCE, RETURN 'none'.
|
| 396 |
+
|
| 397 |
+
YOU MUST RETURN THE RESULTS IN A DICTIONARY ACCORDING TO THE GIVEN ORDER OF THE LIST OF CLAIMS.
|
| 398 |
+
|
| 399 |
+
You MUST only respond in the format as described below. DO NOT RESPOND WITH ANYTHING ELSE.
|
| 400 |
+
|
| 401 |
+
response format: {{"claim1":"object1 object2 object3","claim2":"none","claim3":"object1 object2", ...}}
|
| 402 |
+
|
| 403 |
+
Here are three examples:
|
| 404 |
+
|
| 405 |
+
claim list:
|
| 406 |
+
|
| 407 |
+
claim1: The image depicts a man laying on the ground.
|
| 408 |
+
|
| 409 |
+
claim2: The man is next to a motorcycle.
|
| 410 |
+
|
| 411 |
+
claim3: The sun is shining upon the ground.
|
| 412 |
+
|
| 413 |
+
claim4: The light is very bright.
|
| 414 |
+
|
| 415 |
+
output:
|
| 416 |
+
|
| 417 |
+
{{"claim1":"man","claim2":"man.motorcycle","claim3":"none","claim4":"none"}}
|
| 418 |
+
|
| 419 |
+
claim list:
|
| 420 |
+
|
| 421 |
+
claim1: The image shows a device.
|
| 422 |
+
|
| 423 |
+
claim2: The device has the words Samsung.
|
| 424 |
+
|
| 425 |
+
claim3: Samsung is a Korean company.
|
| 426 |
+
|
| 427 |
+
output:
|
| 428 |
+
|
| 429 |
+
{{"claim1":"device","claim2":"device","claim3":"none"}}
|
| 430 |
+
|
| 431 |
+
claim list:
|
| 432 |
+
|
| 433 |
+
claim1: A man wears a green shirt.
|
| 434 |
+
|
| 435 |
+
claim2: The man's face is beaming with a smile.
|
| 436 |
+
|
| 437 |
+
claim3: The image shows the man in high spirits.
|
| 438 |
+
|
| 439 |
+
output:
|
| 440 |
+
|
| 441 |
+
{{"claim1":"manshirt","claim2":"man","claim3":"man"}}
|
| 442 |
+
|
| 443 |
+
Now complete your output with following the above rules.
|
| 444 |
+
|
| 445 |
+
claim list:
|
| 446 |
+
|
| 447 |
+
{claims}
|
| 448 |
+
|
| 449 |
+
output:
|
| 450 |
+
|
| 451 |
+
Table 3: Prompt template of query formulation (object-level) for image-to-text generation.
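The `{claims}` slot in this template is filled with the numbered claim list, and the dictionary-shaped reply is parsed back into Python; the double braces suggest Python `str.format`-style filling, which we assume below. A minimal sketch, where `query_mllm` is a placeholder for whichever chat-completion call is used:

```python
# Sketch: fill the object-extraction template (Table 3) and parse the reply.
import ast

def extract_objects(template: str, claims: list[str], query_mllm) -> dict[str, str]:
    claim_block = "\n".join(f"claim{i + 1}: {c}" for i, c in enumerate(claims))
    prompt = template.format(claims=claim_block)   # {{ }} become literal { }
    reply = query_mllm(prompt)                     # expected: '{"claim1": "...", ...}'
    try:
        return ast.literal_eval(reply.strip())
    except (ValueError, SyntaxError):
        # Fall back to "none" for every claim if the reply is malformed.
        return {f"claim{i + 1}": "none" for i in range(len(claims))}
```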
|
| 452 |
+
|
| 453 |
+
# SYSTEM:
|
| 454 |
+
|
| 455 |
+
You are a brilliant question generator.
|
| 456 |
+
|
| 457 |
+
# USER:
|
| 458 |
+
|
| 459 |
+
Given a list of claim and some objects(each object is connected by a period), you're required to generate questions about attributes of the given objects.
|
| 460 |
+
|
| 461 |
+
The generated questions may involve basic attributes such as colors, actions and position mentioned in the claim.
|
| 462 |
+
|
| 463 |
+
Do not ask questions involving object counts or the existence of object. Do not ask questions involving scene text.
|
| 464 |
+
|
| 465 |
+
When asking questions about attributes, try to ask simple questions that only involve one object. Ask questions that can be easily decided visually. Do not ask questions that require complex reasoning.
|
| 466 |
+
|
| 467 |
+
Do not ask semantically similar questions. Do not ask questions only about scenes or places. Do not ask questions about uncertain or conjecture parts of the claim, for example, the parts described with "maybe" or "likely", etc.
|
| 468 |
+
|
| 469 |
+
It is no need to cover all the specified objects. If there is no question to ask, simply output 'none'.
|
| 470 |
+
|
| 471 |
+
YOU MUST RETURN THE RESULTS IN A DICTIONARY ACCORDING TO THE GIVEN ORDER OF THE LIST OF CLAIMS.
|
| 472 |
+
|
| 473 |
+
You MUST only respond in the format as described below. DO NOT RESPOND WITH ANYTHING ELSE.
|
| 474 |
+
|
| 475 |
+
response format: {{"claim1": ["question1", "question2"], "claim2": ["none"], "claim3": ["question1", "question2"], ...}}
|
| 476 |
+
|
| 477 |
+
Here are three examples:
|
| 478 |
+
|
| 479 |
+
objects:
|
| 480 |
+
|
| 481 |
+
dog.cat
|
| 482 |
+
|
| 483 |
+
claim list:
|
| 484 |
+
|
| 485 |
+
claim1: There is one black dog on the left in the image.
|
| 486 |
+
|
| 487 |
+
claim2: There are two white cats on the right in the image.
|
| 488 |
+
|
| 489 |
+
output:
|
| 490 |
+
|
| 491 |
+
{{"claim1":["What color is the dog?", "Is there a dog on the left in the image?", "claim2":["What color are the cat?", "Are there two cats on the right in the image?"]}}}
|
| 492 |
+
|
| 493 |
+
objects:
|
| 494 |
+
|
| 495 |
+
man.baseball cap.wall
|
| 496 |
+
|
| 497 |
+
claim list:
|
| 498 |
+
|
| 499 |
+
claim1: The man is wearing a baseball cap.
|
| 500 |
+
|
| 501 |
+
claim2: The man appears to be smoking.
|
| 502 |
+
|
| 503 |
+
claim3: 'hello world' is written on the white wall.
|
| 504 |
+
|
| 505 |
+
output:
|
| 506 |
+
|
| 507 |
+
{{"claim1":["What is the man wearing?",claim2":["Does the man appear to be smoking?",claim3":[What color is the wall?]}}
|
| 508 |
+
|
| 509 |
+
objects:
|
| 510 |
+
|
| 511 |
+
kitchen.man.apron
|
| 512 |
+
|
| 513 |
+
claim list:
|
| 514 |
+
|
| 515 |
+
claim1: The image depicts a kitchen.
|
| 516 |
+
|
| 517 |
+
claim2: There is a man in a white apron.
|
| 518 |
+
|
| 519 |
+
claim3: The man is standing in the middle of the kitchen.
|
| 520 |
+
|
| 521 |
+
claim4: The overall atmosphere is very pleasant.
|
| 522 |
+
|
| 523 |
+
output:
|
| 524 |
+
|
| 525 |
+
"claim1": ["none"], "claim2": ["What does the man wear?", "What color is the apron?", "claim3": ["Is the man standing in the middle of the kitchen?", "claim4": ["none"]
|
| 526 |
+
|
| 527 |
+
Now complete the following with following the above rules. DO NOT RESPOND WITH ANYTHING ELSE.
|
| 528 |
+
|
| 529 |
+
objects:
|
| 530 |
+
|
| 531 |
+
{objects}
|
| 532 |
+
|
| 533 |
+
claim list:
|
| 534 |
+
|
| 535 |
+
{claims}
|
| 536 |
+
|
| 537 |
+
output:
|
| 538 |
+
|
| 539 |
+
Table 4: Prompt template of query formulation (attribute-level) for image-to-text generation.
|
| 540 |
+
|
| 541 |
+
# SYSTEM:
|
| 542 |
+
|
| 543 |
+
You are a brilliant question generator.
|
| 544 |
+
|
| 545 |
+
# USER:
|
| 546 |
+
|
| 547 |
+
Given a list of claim, you're required to generate questions about scene text to assist users in verifying the accuracy of the claim.
|
| 548 |
+
|
| 549 |
+
If the information mentioned in this claim pertains to scene text, you'll need to generate question about the scene text.
|
| 550 |
+
|
| 551 |
+
If the claim is unrelated to the scene text information in the image, such as: objects, colors, actions, position etc, simply return 'none'.
|
| 552 |
+
|
| 553 |
+
YOU MUST RETURN THE RESULTS IN A DICTIONARY ACCORDING TO THE GIVEN ORDER OF THE LIST OF CLAIMS.
|
| 554 |
+
|
| 555 |
+
You MUST only respond in the format as described below. DO NOT RESPOND WITH ANYTHING ELSE.
|
| 556 |
+
|
| 557 |
+
response format: {{"claim1": ["question1", "question2"], "claim2": ["none"], "claim3": ["question1", "question2"], ...}}
|
| 558 |
+
|
| 559 |
+
Here are three examples:
|
| 560 |
+
|
| 561 |
+
claim list:
|
| 562 |
+
|
| 563 |
+
claim1: There is a black device in the image.
|
| 564 |
+
|
| 565 |
+
claim2: The device is a brand of smartphones produced by Samsung Electronics.
|
| 566 |
+
|
| 567 |
+
output: {{"claim1": ["none"], "claim2": ["What is the brand of the device in the image?"]}}}
|
| 568 |
+
|
| 569 |
+
claim list:
|
| 570 |
+
|
| 571 |
+
claim1: A stop sign is on the left.
|
| 572 |
+
|
| 573 |
+
claim2: The stop sign says stop eating animals.
|
| 574 |
+
|
| 575 |
+
output: {{"claim1":"none"], "claim2":"What does the stop sign say in the image?"}}
|
| 576 |
+
|
| 577 |
+
claim list:
|
| 578 |
+
|
| 579 |
+
claim1: The words 'Hello World' are written on the car.
|
| 580 |
+
|
| 581 |
+
claim2: A man is standing beside the car.
|
| 582 |
+
|
| 583 |
+
output: {{"claim1": ["What are written on the car?", "claim2": ["none"]}
|
| 584 |
+
|
| 585 |
+
Now complete the following with following the above rules. DO NOT RESPOND WITH ANYTHING ELSE.
|
| 586 |
+
|
| 587 |
+
claim list:
|
| 588 |
+
|
| 589 |
+
{claims}
|
| 590 |
+
|
| 591 |
+
output:
|
| 592 |
+
|
| 593 |
+
Table 5: Prompt template of query formulation (scene-text-level) for image-to-text generation.
|
| 594 |
+
|
| 595 |
+
# SYSTEM:
|
| 596 |
+
|
| 597 |
+
You are a brilliant question generator.
|
| 598 |
+
|
| 599 |
+
# USER:
|
| 600 |
+
|
| 601 |
+
Given a list of claim, you're required to generate questions about related to factual visual information.
|
| 602 |
+
|
| 603 |
+
For a claim based on factual knowledge, Your primary task is to generate a Python list of two effective and skeptical search engine questions.
|
| 604 |
+
|
| 605 |
+
These questions should assist users in critically evaluating the factuality of a provided claim using search engines.
|
| 606 |
+
|
| 607 |
+
If a claim is not based on factual knowledge, simply return 'none'.
|
| 608 |
+
|
| 609 |
+
YOU MUST RETURN THE RESULTS IN A DICTIONARY ACCORDING TO THE GIVEN ORDER OF THE LIST OF CLAIMS.
|
| 610 |
+
|
| 611 |
+
You MUST only respond in the format as described below. DO NOT RESPOND WITH ANYTHING ELSE.
|
| 612 |
+
|
| 613 |
+
response format: {{"claim1": ["question1", "question2"], "claim2": ["none"], "claim3": ["question1", "question2"], ...}}
|
| 614 |
+
|
| 615 |
+
Here are three examples:
|
| 616 |
+
|
| 617 |
+
claim list:
|
| 618 |
+
|
| 619 |
+
claim1: The image shows a black phone.
|
| 620 |
+
|
| 621 |
+
claim2: This black phone is manufactured by Huawei.
|
| 622 |
+
|
| 623 |
+
claim3: Huawei is a company located in Shenzhen, China.
|
| 624 |
+
|
| 625 |
+
output:
|
| 626 |
+
|
| 627 |
+
{{"claim1":["none"], "claim2":["none"], "claim3":["Where is Huawei headquartered?", "Huawei company"]}}
|
| 628 |
+
|
| 629 |
+
claim list:
|
| 630 |
+
|
| 631 |
+
claim1: The image shows an app of twitter.
|
| 632 |
+
|
| 633 |
+
claim2: The CEO of twitter is Bill Gates.
|
| 634 |
+
|
| 635 |
+
output: {{"claim1":"none"], "claim2":"Who is the CEO of twitter?", "CEO Twitter"]}
|
| 636 |
+
|
| 637 |
+
claim list:
|
| 638 |
+
|
| 639 |
+
claim1: The man is playing baseball.
|
| 640 |
+
|
| 641 |
+
claim2: The man is wearing a colorful shirt.
|
| 642 |
+
|
| 643 |
+
output: {{"claim1":"none"], "claim2":"none"]}
|
| 644 |
+
|
| 645 |
+
Now complete the following with following the above rules. DO NOT RESPOND WITH ANYTHING ELSE.
|
| 646 |
+
|
| 647 |
+
claim list:
|
| 648 |
+
|
| 649 |
+
{claims}
|
| 650 |
+
|
| 651 |
+
output:
|
| 652 |
+
|
| 653 |
+
Table 6: Prompt template of query formulation (fact-level) for image-to-text generation.
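The fact-level questions produced by this template are meant to be issued to a search engine, with the returned snippets handed to the verifier as "external knowledge". A minimal sketch, where `web_search` is a placeholder for whichever search API is used:

```python
# Sketch: turn Table 6 output into the "external knowledge" block for verification.
def gather_fact_evidence(fact_queries: dict[str, list[str]], web_search) -> str:
    snippets = []
    for claim_id, questions in fact_queries.items():
        for question in questions:
            if question.lower() == "none":
                continue
            for hit in web_search(question)[:2]:   # keep only the top few results
                snippets.append(f"{claim_id}: {hit}")
    return "\n".join(snippets) if snippets else "none information"
```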
|
| 654 |
+
|
| 655 |
+
# SYSTEM:
|
| 656 |
+
|
| 657 |
+
You are a brilliant hallucination judge.
|
| 658 |
+
|
| 659 |
+
# USER:
|
| 660 |
+
|
| 661 |
+
Given a list of claims from Multimodal Large Language Models and an image, you are required to judge whether each claim in the list by the Multimodal Large Language Model model conflicts with the image, following these rules:
|
| 662 |
+
|
| 663 |
+
1. You must carefully judge from four aspects, including the object, attributes, scene text and fact. Here are specific descriptions of the four aspects for you to review:
|
| 664 |
+
"Object" specifically refers to whether the objects in the image exist and if the quantity of objects conflicts with the object information in the claims;
|
| 665 |
+
"Attributes" specifically refer to whether the color, position, action of objects in the image conflict with the attribute information in the claims;
|
| 666 |
+
"Scene Text" specifically refers to whether the textual information in the scene of the image conflicts with the required textual information in the claims.
|
| 667 |
+
"Fact" specifically refers to relevant factual knowledge obtained by querying a search engine. You can verify the factual accuracy of the claims based on the provided external knowledge.
|
| 668 |
+
2. You'll also receive detection results from the expert model. The object detection expert model will provide detected entity names along with their bounding box information in the image. When deriving position relationships between entity instances, try to also use the bounding boxes information, which are represented as [x1, y1, x2, y2] with floating numbers ranging from 0 to 1. These values correspond to the top left x1, top left y1, bottom right x2, and bottom right y2. The scene text expert model will provide detected specific text along with their bounding box information in the image. As long as there is a conflict between a single letter in the scene text and the text information required in the claim, it's considered a hallucination.
|
| 669 |
+
3. You must carefully judge whether the visual information in the image conflicts with each claim. If there is a conflict, the result for that statement is labeled as 'hallucination'; otherwise, it is labeled as 'non-hallucination'.
|
| 670 |
+
4. Finally, YOU MUST RETURN THE JUDGMENT RESULTS IN A DICTIONARY ACCORDING TO THE GIVEN ORDER OF THE LIST OF CLAIMS. You MUST only respond in the format as described below. DO NOT RESPOND WITH ANYTHING ELSE. response format: ["claim1":"hallucination", "reason":"The reason for your judgment.", "claim2":"non-hallucination", "reason":"The reason for your judgment.", "claim3":"hallucination", "reason":"The reason for your judgment.", ...]
|
| 671 |
+
|
| 672 |
+
[Begin of Example] (Image Entered)
|
| 673 |
+
|
| 674 |
+
Here is the object detection expert model's result: people [0.345, 0.424, 0.408, 0.509]; people [0.197, 0.44, 0.28, 0.514] people [0.517, 0.315, 0.561, 0.401]; people [0.441, 0.356, 0.47, 0.405] chair [0.398, 0.595, 0.637, 0.901]; chair [0.621, 0.592, 0.789, 0.889] umbrella [0.501, 0.334, 0.968, 0.88]
|
| 675 |
+
|
| 676 |
+
Here is the attribute detection expert model's result: none information
|
| 677 |
+
|
| 678 |
+
Here is the scene text recognition expert model's result: none information
|
| 679 |
+
|
| 680 |
+
Here is the external knowledge: none information
|
| 681 |
+
|
| 682 |
+
Here is the claim list:
|
| 683 |
+
|
| 684 |
+
claim1: The picture shows five people swimming.
|
| 685 |
+
claim2: On the beach, there is a chair, a umbrella, and a surfboard.
|
| 686 |
+
claim3: The green umbrella is on the right side of the chair.
|
| 687 |
+
|
| 688 |
+
Output: ["claim1":"hallucination","reason":"The object detection expert model identified four people, not five people. Based on the image information, they might be swimming. Therefore, there's a hallucination.", "claim2":"hallucination","reason":"According to the results of the object detection expert model and my judgment, there are two chairs and an umbrella in the picture, but there is no surfboard. Therefore, there's a hallucination.", "claim3":"non-hallucination","reason":"Based on the positional information of the bounding boxes and my judgment, the umbrella is to the right of the chairs. The umbrella is green. Therefore, there's no hallucination." ]
|
| 689 |
+
|
| 690 |
+
[End of Example]
|
| 691 |
+
|
| 692 |
+
<Input>:
|
| 693 |
+
<Output>

Table 7: Prompt template of hallucination verification for image-to-text generation.
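Both verification templates hand the detector output to the model as plain text lines of the form `label [x1, y1, x2, y2]`, with coordinates normalized to [0, 1] (top-left and bottom-right corners). The sketch below serializes such detections and derives a simple left/right relation from box centers; the helper names are ours, and the example boxes are taken from the in-prompt demonstration above.

```python
# Sketch: serialize expert-model detections for the verification prompt and
# derive a coarse spatial relation from normalized [x1, y1, x2, y2] boxes.
def format_detections(detections):
    """detections: list of (label, [x1, y1, x2, y2]) with coordinates in [0, 1]."""
    if not detections:
        return "none information"
    return "\n".join(f"{label} [{', '.join(f'{v:.3f}' for v in box)}]"
                     for label, box in detections)

def is_right_of(box_a, box_b):
    """True if box_a's horizontal center lies to the right of box_b's."""
    def center_x(box):
        return (box[0] + box[2]) / 2
    return center_x(box_a) > center_x(box_b)

# Boxes from the in-context example: the umbrella sits to the right of a chair.
umbrella = [0.501, 0.334, 0.968, 0.88]
chair = [0.398, 0.595, 0.637, 0.901]
print(is_right_of(umbrella, chair))  # True
```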
|
| 694 |
+
|
| 695 |
+
# SYSTEM:
|
| 696 |
+
|
| 697 |
+
You are a brilliant hallucination judge.
|
| 698 |
+
|
| 699 |
+
# USER:
|
| 700 |
+
|
| 701 |
+
Given a list of claims from human prompts, an image generated by the text-to-image model, you are required to judge whether the image conflicts with human-provided prompts, following these rules:
|
| 702 |
+
|
| 703 |
+
1. You must carefully judge from four aspects, including the object, attributes, scene text and fact. Here are specific descriptions of the four aspects for you to review:
|
| 704 |
+
"Object" specifically refers to whether the objects in the image exist and if the quantity of objects conflicts with the object information in the claims;
|
| 705 |
+
"Attributes" specifically refer to whether the color, position, action of objects in the image conflict with the attribute information in the claims;
|
| 706 |
+
"Scene Text" specifically refers to whether the textual information in the scene of the image conflicts with the required textual information in the claims.
|
| 707 |
+
"Fact" specifically refers to relevant factual knowledge obtained by querying a search engine. You can verify the factual accuracy of the claims based on the provided external knowledge.
|
| 708 |
+
2. You'll also receive detection results from the expert model. The object detection expert model will provide detected entity names along with their bounding box information in the image. When deriving position relationships between entity instances, try to also use the bounding boxes information, which are represented as [x1, y1, x2, y2] with floating numbers ranging from 0 to 1. These values correspond to the top left x1, top left y1, bottom right x2, and bottom right y2. The scene text expert model will provide detected specific text along with their bounding box information in the image. As long as there is a conflict between a single letter in the scene text and the text information required in the claim, it's considered a hallucination.
|
| 709 |
+
3. You must carefully judge whether the visual information in the image conflicts with each claim. If there is a conflict, the result for that statement is labeled as 'hallucination'; otherwise, it is labeled as 'non-hallucination'.
|
| 710 |
+
4. Finally, YOU MUST RETURN THE JUDGMENT RESULTS IN A DICTIONARY ACCORDING TO THE GIVEN ORDER OF THE LIST OF CLAIMS. You MUST only respond in the format as described below. DO NOT RESPOND WITH ANYTHING ELSE. response format: ["claim1":"hallucination", "reason":"The reason for your judgment.", "claim2":"non-hallucination", "reason":"The reason for your judgment.", "claim3":"hallucination", "reason":"The reason for your judgment.", ...]
|
| 711 |
+
|
| 712 |
+
[Begin of Example] (Image Entered)
|
| 713 |
+
|
| 714 |
+
Here is the object detection expert model's result:
|
| 715 |
+
|
| 716 |
+
basketball [0.741, 0.179, 0.848, 0.285]
|
| 717 |
+
|
| 718 |
+
boy [0.773, 0.299, 0.98, 0.828]
|
| 719 |
+
|
| 720 |
+
car [0.001, 0.304, 0.992, 0.854]
|
| 721 |
+
|
| 722 |
+
Here is the attribute detection expert model's result: none information
|
| 723 |
+
|
| 724 |
+
Here is the scene text recognition expert model's result:
|
| 725 |
+
|
| 726 |
+
worldd [0.405, 0.504, 0.726, 0.7]
|
| 727 |
+
|
| 728 |
+
Here is the external knowledge: none information
|
| 729 |
+
|
| 730 |
+
Here is the claim list:
|
| 731 |
+
|
| 732 |
+
claim1: The side of the car reads 'Hello World'
|
| 733 |
+
|
| 734 |
+
claim2: A boy is playing a yellow basketball beside a plant.
|
| 735 |
+
|
| 736 |
+
Output: ["claim1":"hallucination", "reason":"The object detection model has identified a car in the image. However, based on the detection results of the scene text expert model and my judgment, the text in the image is 'hello worldl'd not 'hello world'. Therefore, there's a hallucination.", "claim2":"hallucination", "reason":"The object detection model has identified a boy and a basketball in the image. And the boy is visible in the image playing with a yellow basketball. But according to the detection results of the object detection expert model and my judgment, there's no plant. Therefore, there's a hallucination."]
|
| 737 |
+
|
| 738 |
+
[End of Example]
|
| 739 |
+
|
| 740 |
+
<Input>:
|
| 741 |
+
|
| 742 |
+
<Output>:
|
| 743 |
+
|
| 744 |
+
Table 8: Prompt template of hallucination verification for text-to-image generation.
|
2024/Unified Hallucination Detection for Multimodal Large Language Models/images.zip
ADDED
|
@@ -0,0 +1,3 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
version https://git-lfs.github.com/spec/v1
|
| 2 |
+
oid sha256:f7ac2f12a4ec9831d543c8a167b236e1030f90b9a4983a35edb8f6fddff8a2f1
|
| 3 |
+
size 646413
|
2024/Unified Hallucination Detection for Multimodal Large Language Models/layout.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|
2024/Unintended Impacts of LLM Alignment on Global Representation/cd8d179d-1771-4da7-8090-05c4b67830c2_content_list.json
ADDED
|
The diff for this file is too large to render.
See raw diff
|
|
|