chukewang committed
Commit 5edbe17 · 1 Parent(s): d7af179

Init: add images via LFS

Files changed (1): README.md +9 -9
README.md CHANGED
@@ -15,30 +15,30 @@ metrics:
  - accuracy
  ---

- # TimeAudio: Bridging Temporal Gaps in Large Audio-Language Models
-
- 🚀🚀 The repo of **TimeAudio**!
-
- Recent Large Audio-Language Models (LALMs) exhibit impressive capabilities in understanding audio content for conversational QA tasks. However, these models struggle to accurately understand timestamps for temporal localization (e.g., Temporal Audio Grounding) and are restricted to short audio perception, leading to constrained capabilities on fine-grained tasks. We identify three key aspects that limit their temporal localization and long audio understanding: (i) timestamp representation, (ii) architecture, and (iii) data. To address this, we introduce TimeAudio, a novel method that empowers LALMs to connect their understanding of audio content with precise temporal perception. Specifically, we incorporate unique temporal markers to improve time-sensitive reasoning and apply an absolute time-aware encoding that explicitly grounds the acoustic features in absolute time. Moreover, to achieve end-to-end long audio understanding, we introduce a segment-level token merging module that substantially reduces audio token redundancy and enhances the efficiency of information extraction. Due to the lack of suitable datasets and evaluation metrics, we consolidate existing audio datasets into a new dataset focused on temporal tasks and establish a series of metrics to evaluate fine-grained performance. Evaluations show strong performance across a variety of fine-grained tasks, such as dense captioning, temporal grounding, and timeline speech summarization, demonstrating TimeAudio's robust temporal localization and reasoning capabilities.

  <div style='display:flex; gap: 0.25rem; '>
  <a href='https://arxiv.org/pdf/.pdf'><img src='https://img.shields.io/badge/paper-PDF-green'></a>
  <a href='https://huggingface.co/lysanderism/TimeAudio'><img src='https://img.shields.io/badge/huggingface-checkpoint-yellow'></a>
  </div>

- ## Structure

  TimeAudio is based on the fundamental architecture of SALMONN. Specifically, TimeAudio consists of four components for processing raw audio: a sliding audio encoder, a window Q-Former, a segment-level token merging module, and an LLM. The sliding audio encoder first divides long audio into shorter segments and combines the BEATs and Whisper encoders to extract features for each segment independently. Then, the window Q-Former projects these encoded audio tokens into the language space and applies a segment-level token merging mechanism based on attention scores to filter out unimportant acoustic information.
  Finally, the audio embeddings and the textual token embeddings of the user prompt are fed into the LLM to generate the response.

- <div align=center><img src="img/overview.png" height="100%" width="75%"/></div>

- ## Demos

  Unlike traditional speech and audio processing tasks such as speech recognition and audio captioning, fine-grained tasks require both semantics and timestamps as output. The figure below shows examples of failure cases by Qwen2-Audio and Qwen2-Audio-R1 on such tasks.

- <div align=center><img src="img/case.png" height="100%" width="75%"/></div>

  ## How to run inference from the CLI

  - accuracy
  ---

+ ## 🚀🚀 TimeAudio: Bridging Temporal Gaps in Large Audio-Language Models

  <div style='display:flex; gap: 0.25rem; '>
  <a href='https://arxiv.org/pdf/.pdf'><img src='https://img.shields.io/badge/paper-PDF-green'></a>
  <a href='https://huggingface.co/lysanderism/TimeAudio'><img src='https://img.shields.io/badge/huggingface-checkpoint-yellow'></a>
  </div>

+ Recent Large Audio-Language Models (LALMs) exhibit impressive capabilities in understanding audio content for conversational QA tasks. However, these models struggle to accurately understand timestamps for temporal localization (e.g., Temporal Audio Grounding) and are restricted to short audio perception, leading to constrained capabilities on fine-grained tasks. We identify three key aspects that limit their temporal localization and long audio understanding: (i) timestamp representation, (ii) architecture, and (iii) data.
+
+ To address this, we introduce TimeAudio, a novel method that empowers LALMs to connect their understanding of audio content with precise temporal perception. Specifically, we incorporate unique temporal markers to improve time-sensitive reasoning and apply an absolute time-aware encoding that explicitly grounds the acoustic features in absolute time. Moreover, to achieve end-to-end long audio understanding, we introduce a segment-level token merging module that substantially reduces audio token redundancy and enhances the efficiency of information extraction. Due to the lack of suitable datasets and evaluation metrics, we consolidate existing audio datasets into a new dataset focused on temporal tasks and establish a series of metrics to evaluate fine-grained performance. Evaluations show strong performance across a variety of fine-grained tasks, such as dense captioning, temporal grounding, and timeline speech summarization, demonstrating TimeAudio's robust temporal localization and reasoning capabilities.
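
The README does not spell out the encoding itself. As a rough illustration only, here is a minimal sketch of one way an absolute time-aware encoding could work: a sinusoidal embedding driven by each frame's timestamp in seconds rather than its frame index, added to the acoustic features. The function name, the 20 ms frame stride, and the 768-dimensional feature size are assumptions for illustration, not details taken from TimeAudio.

```python
import math

import torch

def absolute_time_encoding(num_frames: int, frame_stride_s: float, dim: int) -> torch.Tensor:
    """Sinusoidal encoding of each frame's absolute timestamp in seconds.

    Unlike index-based positional encoding, a frame at t = 12.5 s gets the
    same code regardless of which segment of the long audio it falls in,
    which is what ties the features to absolute time.
    """
    t = torch.arange(num_frames, dtype=torch.float32) * frame_stride_s       # (T,) seconds
    inv_freq = torch.exp(-math.log(10000.0) * torch.arange(0, dim, 2, dtype=torch.float32) / dim)
    angles = t[:, None] * inv_freq[None, :]                                  # (T, dim/2)
    return torch.cat([angles.sin(), angles.cos()], dim=-1)                   # (T, dim)

# Hypothetical usage: ground frame-level acoustic features in absolute time.
features = torch.randn(1, 1500, 768)                # (batch, frames, dim), assumed shapes
features = features + absolute_time_encoding(num_frames=1500, frame_stride_s=0.02, dim=768)
```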
+

+ ## Method

  TimeAudio is based on the fundamental architecture of SALMONN. Specifically, TimeAudio consists of four components for processing raw audio: a sliding audio encoder, a window Q-Former, a segment-level token merging module, and an LLM. The sliding audio encoder first divides long audio into shorter segments and combines the BEATs and Whisper encoders to extract features for each segment independently. Then, the window Q-Former projects these encoded audio tokens into the language space and applies a segment-level token merging mechanism based on attention scores to filter out unimportant acoustic information.
  Finally, the audio embeddings and the textual token embeddings of the user prompt are fed into the LLM to generate the response.
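
The merging rule itself is not given here either; the sketch below shows one plausible attention-score-based scheme, which keeps the highest-scoring tokens of each segment and mean-pools the rest into a single summary token. The keep-and-pool rule, the token counts, and every name are assumptions for illustration, not TimeAudio's actual module.

```python
import torch

def merge_segment_tokens(tokens: torch.Tensor, scores: torch.Tensor, keep: int) -> torch.Tensor:
    """Compress one segment's audio tokens from T down to keep + 1.

    tokens: (T, D) audio tokens for one segment (e.g., Q-Former outputs)
    scores: (T,)   per-token importance (e.g., accumulated attention weights)
    """
    top = scores.topk(keep).indices
    keep_mask = torch.zeros_like(scores, dtype=torch.bool)
    keep_mask[top] = True
    kept = tokens[keep_mask]                               # (keep, D) important tokens, original order
    merged = tokens[~keep_mask].mean(dim=0, keepdim=True)  # (1, D) summary of the unimportant rest
    return torch.cat([kept, merged], dim=0)                # (keep + 1, D)

# Hypothetical usage: 32 tokens per segment compressed to 9 before the LLM.
tokens = torch.randn(32, 768)
scores = torch.rand(32)
compressed = merge_segment_tokens(tokens, scores, keep=8)  # -> shape (9, 768)
```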

+ <div align=center><img src="img/overview.png" height="100%" width="90%"/></div>

+ ## Comparison

  Unlike traditional speech and audio processing tasks such as speech recognition and audio captioning, fine-grained tasks require both semantics and timestamps as output. The figure below shows examples of failure cases by Qwen2-Audio and Qwen2-Audio-R1 on such tasks.

+ <div align=center><img src="img/case.png" height="100%" width="90%"/></div>

  ## How to run inference from the CLI