arxiv:2503.05085

S2S-Arena, Evaluating Speech2Speech Protocols on Instruction Following with Paralinguistic Information

Published on Mar 7 · Submitted by liuxuan320 on Mar 10
Authors:
Fan Bu et al.

Abstract

The rapid development of large language models (LLMs) has brought significant attention to speech models, particularly recent progress in speech2speech protocols supporting speech input and output. However, existing benchmarks adopt automatic text-based evaluators to assess the instruction-following ability of these models and lack consideration of paralinguistic information in both speech understanding and generation. To address these issues, we introduce S2S-Arena, a novel arena-style S2S benchmark that evaluates instruction-following capabilities with paralinguistic information in both speech-in and speech-out across real-world tasks. We design 154 samples that fuse TTS and live recordings across four domains and 21 tasks, and we manually evaluate existing popular speech models in an arena-style manner. The experimental results show that: (1) in addition to the superior performance of GPT-4o, a cascaded ASR, LLM, and TTS pipeline outperforms jointly trained models after text-speech alignment in speech2speech protocols; (2) considering paralinguistic information, the knowledgeability of a speech model mainly depends on its LLM backbone, while its multilingual support is limited by its speech module; (3) excellent speech models can already understand paralinguistic information in speech input, but generating appropriate audio with paralinguistic information remains a challenge.

Community

Paper author and submitter:

In this paper, we propose S2S-Arena, a speech2speech evaluation protocol that benchmarks speech models on instruction-following ability with paralinguistic information. We collect 154 TTS and human-recorded samples from four domains (Education, Social Companionship, Entertainment, and Medical Consultation) to compare existing speech models (GPT-4o-realtime, FunAudioLLM, SpeechGPT, etc.). We also give four findings based on our arena-style comparison. Everyone can try it in our Hugging Face Space.
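Arena-style comparisons like this typically aggregate human pairwise preferences into a model ranking, and Elo-style ratings are a common way to do so. Below is a minimal sketch of that aggregation step in Python, assuming a standard Elo update with illustrative parameters (K-factor, initial rating) and hypothetical votes; it is not the paper's exact procedure.

    # Minimal Elo-style aggregation of arena votes: a sketch, not the
    # paper's exact procedure. K and INIT are assumed values.
    from collections import defaultdict

    K = 32           # assumed update step size
    INIT = 1000.0    # assumed starting rating

    def expected(r_a: float, r_b: float) -> float:
        """Expected win probability of A against B under the Elo model."""
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    def rate(votes):
        """votes: iterable of (model_a, model_b, score), where score is
        1 if A wins, 0 if B wins, and 0.5 for a tie.
        Returns a dict mapping model name -> rating."""
        ratings = defaultdict(lambda: INIT)
        for a, b, score in votes:
            e = expected(ratings[a], ratings[b])
            ratings[a] += K * (score - e)
            ratings[b] += K * (e - score)
        return dict(ratings)

    # Hypothetical pairwise votes between models compared in the paper:
    votes = [("GPT-4o", "SpeechGPT", 1),
             ("GPT-4o", "FunAudioLLM", 0.5),
             ("FunAudioLLM", "SpeechGPT", 1)]
    print(sorted(rate(votes).items(), key=lambda kv: -kv[1]))

With enough votes per model pair, the resulting ratings give the arena leaderboard ordering; the single-pass update shown here is the simplest variant of that idea.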


Models citing this paper: 0

Datasets citing this paper: 1

Spaces citing this paper: 1

Collections including this paper: 0