---
license: apache-2.0
datasets:
- prithivMLmods/Song-Catalogue-Long-Thought
language:
- en
base_model:
- meta-llama/Llama-3.2-3B-Instruct
pipeline_tag: text-generation
library_name: transformers
tags:
- safetensors
- Llama3.2
- 3B
- Extended-Stream
- text-generation-inference
- Instruct
---
### **Llama-Song-Stream-3B-Instruct Model Card**

**Llama-Song-Stream-3B-Instruct** is a fine-tuned language model specializing in music-related text generation, such as song lyrics and compositions. Built on the **meta-llama/Llama-3.2-3B-Instruct** base, it was trained on a custom dataset of song lyrics and music compositions to produce context-aware, creative, and stylized musical output.

#### **Model Files**
| **File Name**                  | **Size**   | **Description**                                 |
|---------------------------------|------------|-------------------------------------------------|
| `.gitattributes`                | 1.57 kB    | LFS tracking file to manage large model files.  |
| `README.md`                     | 282 Bytes  | Documentation with model details and usage.    |
| `config.json`                   | 1.03 kB   | Model configuration settings.                   |
| `generation_config.json`        | 248 Bytes  | Generation parameters like max sequence length. |
| `pytorch_model-00001-of-00002.bin` | 4.97 GB  | Primary weights (part 1 of 2).                |
| `pytorch_model-00002-of-00002.bin` | 1.46 GB  | Primary weights (part 2 of 2).                |
| `pytorch_model.bin.index.json`  | 21.2 kB   | Index file mapping the checkpoint layers.     |
| `special_tokens_map.json`       | 477 Bytes  | Defines special tokens for tokenization.      |
| `tokenizer.json`                | 17.2 MB    | Tokenizer data for text generation.           |
| `tokenizer_config.json`         | 57.4 kB   | Configuration settings for tokenization.      |

### **Key Features**

1. **Song Generation:**  
   - Generates full song lyrics based on user input, maintaining rhyme, meter, and thematic consistency.

2. **Music Context Understanding:**  
   - Trained on lyrics and song patterns to mimic and generate song-like content.

3. **Fine-tuned Creativity:**  
   - Fine-tuned using *Song-Catalogue-Long-Thought* for coherent lyric generation over extended prompts.

4. **Interactive Text Generation:**  
   - Designed for use cases like generating lyrical ideas, creating drafts for songwriters, or exploring themes musically.

---
### **Training Details**

- **Base Model:** [meta-llama/Llama-3.2-3B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)  
- **Finetuning Dataset:** [prithivMLmods/Song-Catalogue-Long-Thought](https://huggingface.co/datasets/prithivMLmods/Song-Catalogue-Long-Thought)  
  - This dataset comprises 57.7k examples of lyrical patterns, song fragments, and themes.

---
### **Applications**

1. **Songwriting AI Tools:**  
   - Generate lyrics for genres like pop, rock, rap, classical, and others.

2. **Creative Writing Assistance:**  
   - Assist songwriters by suggesting lyric variations and song drafts.

3. **Storytelling via Music:**  
   - Create song narratives using custom themes and moods.

4. **Entertainment AI Integration:**  
   - Build virtual musicians or interactive lyric-based content generators.

---

### **Example Usage**

#### **Setup**
First, load the Llama-Song-Stream model:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "prithivMLmods/Llama-Song-Stream-3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
```

---

#### **Generate Lyrics Example**
```python
prompt = "Write a song about freedom and the open sky"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=100,   # number of tokens generated beyond the prompt
    do_sample=True,       # required for temperature to take effect
    temperature=0.7,
    num_return_sequences=1,
)

generated_lyrics = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_lyrics)
```
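The `temperature` argument controls how sharply the model's next-token distribution is peaked before sampling. As a minimal, model-free illustration (pure Python, not part of the model's API), dividing the logits by a temperature below 1.0 gives high-scoring tokens even more probability mass:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by the temperature before the softmax:
    # temperature < 1.0 sharpens the distribution, > 1.0 flattens it.
    scaled = [logit / temperature for logit in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, 1.0))  # baseline distribution
print(softmax_with_temperature(logits, 0.7))  # top token gets more mass
```

Lower temperatures yield more predictable lyrics; higher values trade coherence for variety.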

---

### **Deployment Notes**

1. **Serverless vs. Dedicated Endpoints:**  
   The model does not currently have enough usage to qualify for a serverless Inference API endpoint on Hugging Face. Options include:
   - **Dedicated inference endpoints** for faster responses.
   - **Custom integrations via Hugging Face inference tools.**

2. **Resource Requirements:**  
   Ensure sufficient GPU memory for the checkpoint shards (roughly 6.4 GB of weights in half precision), plus headroom for activations and the KV cache during generation.
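
   As a rough back-of-the-envelope check (a sketch; the ~3.2B parameter count is an assumption inferred from the two checkpoint shards listed above, which total about 6.4 GB), weight memory can be estimated as parameters × bytes per parameter:

   ```python
   def estimate_weight_memory_gb(n_params: float, bytes_per_param: int) -> float:
       # Weights only; activations and the KV cache need additional headroom.
       return n_params * bytes_per_param / 1e9

   N_PARAMS = 3.2e9  # approximate parameter count (assumption)

   print(f"fp16/bf16: ~{estimate_weight_memory_gb(N_PARAMS, 2):.1f} GB")
   print(f"fp32:      ~{estimate_weight_memory_gb(N_PARAMS, 4):.1f} GB")
   ```

   In practice, passing `torch_dtype=torch.bfloat16` (or `torch.float16`) and `device_map="auto"` to `from_pretrained` keeps the footprint near the half-precision estimate.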

---