---
datasets:
- homebrewltd/instruction-speech-whispervq-v2
language:
- en
license: apache-2.0
tags:
- sound language model
---
## Model Details
We have developed and released the [Ichigo-llama3s](https://huggingface.co/collections/homebrew-research/llama3-s-669df2139f0576abc6eb7405) model family, which natively understands both audio and text input.
We extend the semantic tokens experiment, using WhisperVQ as the tokenizer for audio files, by training [homebrewltd/Ichigo-llama3.1-s-base-v0.3](https://huggingface.co/homebrewltd/Ichigo-llama3.1-s-base-v0.3) on nearly 1B tokens from the [Instruction Speech WhisperVQ v3](https://huggingface.co/datasets/homebrewltd/mixed-instruction-speech-whispervq-v3-full) dataset.
This is the model checkpoint from training step 7000. Due to some noise in the training data, it scores artificially high on the Speech Instruction benchmark.
**Model developers** Homebrew Research.
**Input** Text and sound.
**Output** Text.
**Model Architecture** Llama-3.
**Language(s):** English.
## Intended Use
**Intended Use Cases** This family is primarily intended for research applications. This version aims to further improve the LLM's sound-understanding capabilities.
**Out-of-scope** The use of llama3-s in any manner that violates applicable laws or regulations is strictly prohibited.
## How to Get Started with the Model
Try this model using [Google Colab Notebook](https://colab.research.google.com/drive/18IiwN0AzBZaox5o0iidXqWD1xKq11XbZ?usp=sharing).
First, we need to convert the audio file to sound tokens:
```python
import os

import torch
import torchaudio
from huggingface_hub import hf_hub_download
from whisperspeech.vq_stoks import RQBottleneckTransformer

device = "cuda" if torch.cuda.is_available() else "cpu"

# Download the WhisperVQ quantizer checkpoint if it is not already present.
if not os.path.exists("whisper-vq-stoks-medium-en+pl-fixed.model"):
    hf_hub_download(
        repo_id="jan-hq/WhisperVQ",
        filename="whisper-vq-stoks-medium-en+pl-fixed.model",
        local_dir=".",
    )
vq_model = RQBottleneckTransformer.load_model(
    "whisper-vq-stoks-medium-en+pl-fixed.model"
).to(device)
vq_model.ensure_whisper(device)

def audio_to_sound_tokens(audio_path, target_bandwidth=1.5, device=device):
    # Load the audio and resample to 16 kHz, the rate WhisperVQ expects.
    wav, sr = torchaudio.load(audio_path)
    if sr != 16000:
        wav = torchaudio.functional.resample(wav, sr, 16000)
    # Quantize the waveform into discrete codes.
    with torch.no_grad():
        codes = vq_model.encode_audio(wav.to(device))
        codes = codes[0].cpu().tolist()
    # Wrap the codes in the special sound tokens the model was trained on.
    result = ''.join(f'<|sound_{num:04d}|>' for num in codes)
    return f'<|sound_start|>{result}<|sound_end|>'
```
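For example, assuming a local 16 kHz mono recording (the file name below is a placeholder):
```python
# Convert a spoken instruction into the model's sound-token string.
sound_tokens = audio_to_sound_tokens("my_instruction.wav")  # placeholder path
print(sound_tokens[:80])  # e.g. "<|sound_start|><|sound_0042|>..."
```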
Then we can run inference with the model just like any other LLM.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, pipeline

def setup_pipeline(model_path, use_4bit=False, use_8bit=False):
    tokenizer = AutoTokenizer.from_pretrained(model_path)

    # Optionally load the model in 4-bit or 8-bit with bitsandbytes to reduce memory use.
    model_kwargs = {"device_map": "auto"}
    if use_4bit:
        model_kwargs["quantization_config"] = BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_compute_dtype=torch.bfloat16,
            bnb_4bit_use_double_quant=True,
            bnb_4bit_quant_type="nf4",
        )
    elif use_8bit:
        model_kwargs["quantization_config"] = BitsAndBytesConfig(
            load_in_8bit=True,
            bnb_8bit_compute_dtype=torch.bfloat16,
            bnb_8bit_use_double_quant=True,
        )
    else:
        model_kwargs["torch_dtype"] = torch.bfloat16

    model = AutoModelForCausalLM.from_pretrained(model_path, **model_kwargs)
    return pipeline("text-generation", model=model, tokenizer=tokenizer)

def generate_text(pipe, messages, max_new_tokens=64, temperature=0.0, do_sample=False):
    generation_args = {
        "max_new_tokens": max_new_tokens,
        "return_full_text": False,
        "temperature": temperature,
        "do_sample": do_sample,
    }
    output = pipe(messages, **generation_args)
    return output[0]['generated_text']

# Usage
llm_path = "homebrewltd/llama3.1-s-instruct-v0.2"
pipe = setup_pipeline(llm_path, use_8bit=True)
```
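Putting the two steps together, a minimal end-to-end sketch could look like the following. The chat-style message format here is an assumption for illustration; adjust it to whatever prompt template the checkpoint expects.
```python
# Tokenize the audio, then ask the model to answer the spoken instruction.
sound_tokens = audio_to_sound_tokens("my_instruction.wav")  # placeholder path

messages = [
    {"role": "user", "content": sound_tokens},  # assumed prompt format
]

response = generate_text(pipe, messages, max_new_tokens=128)
print(response)
```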
## Training process
**Training Metrics Image**: Below is a snapshot of the training loss curve.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65713d70f56f9538679e5a56/DmZOYY_-NQtNS610HXR8L.png)
**[MMLU](https://huggingface.co/datasets/cais/mmlu)**:
| Model | MMLU Score |
| --- | --- |
| llama3.1-instruct-8b | 69.40 |
| ichigo-llama3.1-s-v0.3: phase 3 | 63.79 |
| ichigo-llama3.1-s-v0.3: phase 2 | **63.08** |
| ichigo-llama3.1-s-base-v0.3 | 42.11 |
| llama3.1-s-instruct-v0.2 | 50.27 |
**[AudioBench](https://arxiv.org/abs/2406.16020) Eval**:
| Model | [Open-hermes Instruction Audio](https://huggingface.co/datasets/AudioLLMs/openhermes_instruction_test) (GPT-4o judge, 0-5) | [Alpaca Instruction Audio](https://huggingface.co/datasets/AudioLLMs/alpaca_audio_test) (GPT-4o judge, 0-5) |
| --- | --- | --- |
| [Llama3.1-s-v2](https://huggingface.co/homebrewltd/llama3-s-instruct-v0.2) | 3.45 | 3.53 |
| [Ichigo-llama3.1-s v0.3-phase2 -cp7000](https://huggingface.co/homebrewltd/Ichigo-llama3.1-s-instruct-v0.3-phase-2) | **3.42** | **3.62** |
| [Ichigo-llama3.1-s v0.3-phase2-cplast](https://huggingface.co/jan-hq/llama3-s-instruct-v0.3-checkpoint-last) | 3.31 | 3.6 |
| [Ichigo-llama3.1-s v0.3-phase3](https://huggingface.co/homebrewltd/Ichigo-llama3.1-s-instruct-v0.3-phase-3) | 3.64 | 3.68 |
| [Qwen2-audio-7B](https://huggingface.co/Qwen/Qwen2-Audio-7B) | 2.63 | 2.24 |
### Hardware
**GPU Configuration**: Cluster of 8x NVIDIA H100-SXM-80GB.
**GPU Usage**:
- **Continual Training**: 12 hours.
### Training Arguments
We utilize the [torchtune](https://github.com/pytorch/torchtune) library for its up-to-date FSDP2 training implementation.
| Parameter | Instruction Fine-tuning |
|----------------------------|-------------------------|
| **Epoch** | 1 |
| **Global batch size** | 256 |
| **Learning Rate** | 7e-5 |
| **Learning Scheduler** | Cosine with warmup |
| **Optimizer** | Adam torch fused |
| **Warmup Ratio** | 0.01 |
| **Weight Decay** | 0.005 |
| **Max Sequence Length** | 4096 |
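For reference, here is a rough plain-PyTorch sketch of the optimizer and learning-rate schedule described in the table above. The actual run uses the torchtune recipe; the model and `total_steps` below are placeholders.
```python
import math
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(8, 8).to(device)  # placeholder for the fine-tuned Llama-3.1 model

total_steps = 1000                              # placeholder; depends on dataset size at global batch size 256
warmup_steps = max(1, int(0.01 * total_steps))  # warmup ratio 0.01

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=7e-5,                    # learning rate from the table
    weight_decay=0.005,         # weight decay from the table
    fused=(device == "cuda"),   # "Adam torch fused" requires CUDA tensors
)

def lr_lambda(step):
    # Linear warmup followed by cosine decay to zero.
    if step < warmup_steps:
        return step / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```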
## Examples
1. Good example:
<details>
<summary>Click to toggle Example 1</summary>
```
```
</details>
<details>
<summary>Click to toggle Example 2</summary>
```
```
</details>
2. Misunderstanding example:
<details>
<summary>Click to toggle Example 3</summary>
```
```
</details>
3. Off-track example:
<details>
<summary>Click to toggle Example 4</summary>
```
```
</details>
## Citation Information
**BibTeX:**
```
@article{llama3-s-2024,
  title={Llama3-S: Sound Instruction Language Model},
  author={Homebrew Research},
  year={2024},
  month={August},
  url={https://huggingface.co/homebrewltd/llama3.1-s-2024-08-20}
}
```
## Acknowledgement
- **[WhisperSpeech](https://github.com/collabora/WhisperSpeech)**
- **[Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)**