arxiv:2406.11402

Evaluating Open Language Models Across Task Types, Application Domains, and Reasoning Types: An In-Depth Experimental Analysis

Published on Jun 17 · Submitted by amanchadha on Jun 18

Abstract

The rapid rise of Language Models (LMs) has expanded their use in several applications. Yet, due to constraints of model size, associated cost, or proprietary restrictions, utilizing state-of-the-art (SOTA) Large Language Models (LLMs) is not always feasible. With open, smaller LMs emerging, more applications can leverage their capabilities, but selecting the right LM can be challenging. This work conducts an in-depth experimental analysis of the semantic correctness of outputs of 10 smaller, open LMs across three aspects: task types, application domains, and reasoning types, using diverse prompt styles. We demonstrate that the most effective models and prompt styles vary depending on the specific requirements. Our analysis provides a comparative assessment of LMs and prompt styles using a proposed three-tier schema of aspects for their strategic selection based on use case and other constraints. We also show that if utilized appropriately, these LMs can compete with, and sometimes outperform, SOTA LLMs like DeepSeek-v2, GPT-3.5-Turbo, and GPT-4o.

Community


This paper conducts an in-depth experimental analysis of the semantic correctness of outputs from ten open, small-scale language models (LMs) across various task types, application domains, and reasoning types, providing a framework for selecting the best model and prompt style for specific needs.

  • Three-Tier Schema and Analysis: Proposes a three-tier schema for evaluating LMs' performance across task types, application domains, and reasoning types, offering a structured analysis framework.
  • Experimental Evaluation: Conducts comprehensive experiments with ten open LMs, demonstrating that smaller models (2B–11B parameters) can effectively compete with, and sometimes outperform, larger state-of-the-art models like GPT-3.5-Turbo and GPT-4o when appropriately selected and prompted.
  • Guidance and Insights: Provides practical guidelines for selecting LMs and prompt styles based on specific use cases and constraints, highlighting the trade-offs in performance and robustness to prompt variation (see the sketch after this list).
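
To illustrate how the proposed three-tier schema could inform model and prompt-style selection, here is a minimal Python sketch that groups evaluation results by (task type, application domain, reasoning type) and picks the best-scoring model/prompt combination per cell. The model names, prompt styles, and scores are hypothetical placeholders, not results or code from the paper.

```python
# Hypothetical sketch: organize evaluation results by the three-tier schema
# (task type, application domain, reasoning type) and report the best
# model + prompt style per cell. All names and scores are illustrative.
from collections import defaultdict
from statistics import mean

# Each record: (model, prompt_style, task_type, domain, reasoning_type, correctness)
results = [
    ("open-lm-7b",  "zero-shot",        "summarization",      "healthcare", "deductive", 0.71),
    ("open-lm-7b",  "few-shot",         "summarization",      "healthcare", "deductive", 0.78),
    ("open-lm-2b",  "chain-of-thought", "question-answering", "finance",    "numerical", 0.64),
    ("open-lm-11b", "chain-of-thought", "question-answering", "finance",    "numerical", 0.82),
]

# Collect correctness scores per (schema cell, model, prompt style).
scores = defaultdict(list)
for model, prompt, task, domain, reasoning, score in results:
    scores[(task, domain, reasoning, model, prompt)].append(score)

# For each schema cell, keep the model/prompt combination with the highest mean score.
best = {}
for (task, domain, reasoning, model, prompt), vals in scores.items():
    cell = (task, domain, reasoning)
    avg = mean(vals)
    if cell not in best or avg > best[cell][2]:
        best[cell] = (model, prompt, avg)

for cell, (model, prompt, avg) in best.items():
    print(f"{cell}: {model} + {prompt} prompt (mean correctness {avg:.2f})")
```

In practice, the records would come from running each open LM with each prompt style over benchmarks labeled along the three schema aspects, so the final table directly answers "which model and prompt style should I use for this kind of task, domain, and reasoning requirement?"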
