KITAB: Evaluating LLMs on Constraint Satisfaction for Information Retrieval Paper • 2310.15511 • Published Oct 24, 2023 • 4
HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision), LLaVA-1.5, and Other Multi-modality Models Paper • 2310.14566 • Published Oct 23, 2023 • 23
SmartPlay: A Benchmark for LLMs as Intelligent Agents Paper • 2310.01557 • Published Oct 2, 2023 • 12
FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation Paper • 2310.03214 • Published Oct 5, 2023 • 14
CrossCodeEval: A Diverse and Multilingual Benchmark for Cross-File Code Completion Paper • 2310.11248 • Published Oct 17, 2023 • 3
SWE-bench: Can Language Models Resolve Real-World GitHub Issues? Paper • 2310.06770 • Published Oct 10, 2023 • 3
LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding Paper • 2308.14508 • Published Aug 28, 2023 • 2
JudgeLM: Fine-tuned Large Language Models are Scalable Judges Paper • 2310.17631 • Published Oct 26, 2023 • 31
L-Eval: Instituting Standardized Evaluation for Long Context Language Models Paper • 2307.11088 • Published Jul 20, 2023 • 4
Evaluating Instruction-Tuned Large Language Models on Code Comprehension and Generation Paper • 2308.01240 • Published Aug 2, 2023 • 1
AGIBench: A Multi-granularity, Multimodal, Human-referenced, Auto-scoring Benchmark for Large Language Models Paper • 2309.06495 • Published Sep 5, 2023 • 1
RAGAS: Automated Evaluation of Retrieval Augmented Generation Paper • 2309.15217 • Published Sep 26, 2023 • 3
EvalCrafter: Benchmarking and Evaluating Large Video Generation Models Paper • 2310.11440 • Published Oct 17, 2023 • 13
Benchmarking Large Language Models in Retrieval-Augmented Generation Paper • 2309.01431 • Published Sep 4, 2023 • 1
PandaLM: An Automatic Evaluation Benchmark for LLM Instruction Tuning Optimization Paper • 2306.05087 • Published Jun 8, 2023 • 5
INSTRUCTEVAL: Towards Holistic Evaluation of Instruction-Tuned Large Language Models Paper • 2306.04757 • Published Jun 7, 2023 • 4
PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts Paper • 2306.04528 • Published Jun 7, 2023 • 3
PIXIU: A Large Language Model, Instruction Data and Evaluation Benchmark for Finance Paper • 2306.05443 • Published Jun 8, 2023 • 3
ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation Paper • 2308.01861 • Published Aug 3, 2023 • 1
Out of the BLEU: how should we assess quality of the Code Generation models? Paper • 2208.03133 • Published Aug 5, 2022 • 2
CodeApex: A Bilingual Programming Evaluation Benchmark for Large Language Models Paper • 2309.01940 • Published Sep 5, 2023 • 1
COPEN: Probing Conceptual Knowledge in Pre-trained Language Models Paper • 2211.04079 • Published Nov 8, 2022 • 1
Benchmarking Language Models for Code Syntax Understanding Paper • 2210.14473 • Published Oct 26, 2022 • 1
CCT-Code: Cross-Consistency Training for Multilingual Clone Detection and Code Search Paper • 2305.11626 • Published May 19, 2023 • 1
AutoMLBench: A Comprehensive Experimental Evaluation of Automated Machine Learning Frameworks Paper • 2204.08358 • Published Apr 18, 2022 • 1
Continual evaluation for lifelong learning: Identifying the stability gap Paper • 2205.13452 • Published May 26, 2022 • 1
MEGAVERSE: Benchmarking Large Language Models Across Languages, Modalities, Models and Tasks Paper • 2311.07463 • Published Nov 13, 2023 • 13
RaLLe: A Framework for Developing and Evaluating Retrieval-Augmented Large Language Models Paper • 2308.10633 • Published Aug 21, 2023 • 1
ToolTalk: Evaluating Tool-Usage in a Conversational Setting Paper • 2311.10775 • Published Nov 15, 2023 • 7
MetaTool Benchmark for Large Language Models: Deciding Whether to Use Tools and Which to Use Paper • 2310.03128 • Published Oct 4, 2023 • 1
RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit Paper • 2306.05212 • Published Jun 8, 2023 • 1
ML-Bench: Large Language Models Leverage Open-source Libraries for Machine Learning Tasks Paper • 2311.09835 • Published Nov 16, 2023 • 7
CRUXEval: A Benchmark for Code Reasoning, Understanding and Execution Paper • 2401.03065 • Published Jan 5, 2024 • 10
CodeFuse-13B: A Pretrained Multi-lingual Code Large Language Model Paper • 2310.06266 • Published Oct 10, 2023 • 1
ConTextual: Evaluating Context-Sensitive Text-Rich Visual Reasoning in Large Multimodal Models Paper • 2401.13311 • Published Jan 24, 2024 • 8
Text2KGBench: A Benchmark for Ontology-Driven Knowledge Graph Generation from Text Paper • 2308.02357 • Published Aug 4, 2023 • 1
Have LLMs Advanced Enough? A Challenging Problem Solving Benchmark For Large Language Models Paper • 2305.15074 • Published May 24, 2023 • 1
Copilot Evaluation Harness: Evaluating LLM-Guided Software Programming Paper • 2402.14261 • Published Feb 22, 2024 • 10
The FinBen: An Holistic Financial Benchmark for Large Language Models Paper • 2402.12659 • Published Feb 20, 2024 • 13
Multi-Task Inference: Can Large Language Models Follow Multiple Instructions at Once? Paper • 2402.11597 • Published Feb 18, 2024
ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios Paper • 2401.00741 • Published Jan 1, 2024