TWICE: What Advantages Can Low-Resource Domain-Specific Embedding Model Bring? - A Case Study on Korea Financial Texts Paper • 2502.07131 • Published Feb 2025
HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models Paper • 2309.02706 • Published Sep 6, 2023
KMMLU: Measuring Massive Multitask Language Understanding in Korean Paper • 2402.11548 • Published Feb 18, 2024
Removing Non-Stationary Knowledge From Pre-Trained Language Models for Entity-Level Sentiment Classification in Finance Paper • 2301.03136 • Published Jan 9, 2023
EaSyGuide: ESG Issue Identification Framework leveraging Abilities of Generative Large Language Models Paper • 2306.06662 • Published Jun 11, 2023
LLM-as-a-Judge & Reward Model: What They Can and Cannot Do Paper • 2409.11239 • Published Sep 17, 2024
Understand, Solve and Translate: Bridging the Multilingual Mathematical Reasoning Gap Paper • 2501.02448 • Published Jan 5, 2025