---
|
language: |
|
- en |
|
license: cc |
|
library_name: adapter-transformers |
|
tags: |
|
- music |
|
- art |
|
datasets: |
|
- SpartanCinder/artist-lyrics-dataset |
|
- SpartanCinder/song-lyrics-artist-classifier |
|
metrics: |
|
- accuracy |
|
--- |
|
# GPT2 Pretrained Lyric Generation Model |
|
|
|
This repository contains a GPT2 model fine-tuned for lyric generation, trained with the Hugging Face Transformers library.
|
|
|
## Model Details |
|
|
|
- **Model architecture:** GPT2 |
|
- **Training data:** The datasets were created using the Genius API and are linked in this card's metadata above.
|
- **Training duration:** [Mention how long the model was trained] |
|
|
|
## Usage |
|
|
|
The model generates lyrics from a text prompt.
It uses nucleus (top-p) sampling with a probability threshold of 0.9,
which yields more diverse and less repetitive text than greedy decoding.
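To make the mechanism concrete, here is a minimal, self-contained sketch of top-p filtering over a toy next-token distribution. This is an illustration only — `transformers` implements this internally when `do_sample=True` and `top_p` is set — and the example vocabulary and probabilities are invented:

```python
import random

def top_p_filter(probs, p=0.9):
    """Keep the smallest set of tokens whose cumulative probability reaches p,
    then renormalize the surviving mass to sum to 1."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for token, prob in ranked:
        kept.append((token, prob))
        total += prob
        if total >= p:
            break
    return {token: prob / total for token, prob in kept}

def sample_top_p(probs, p=0.9, rng=random):
    """Sample one token from the nucleus (the filtered distribution)."""
    filtered = top_p_filter(probs, p)
    tokens = list(filtered)
    weights = [filtered[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Toy distribution: the low-probability tail ("xylophone") falls outside
# the 0.9 nucleus and can never be sampled.
probs = {"the": 0.5, "a": 0.3, "night": 0.15, "xylophone": 0.05}
print(top_p_filter(probs, p=0.9))
```

Because the nucleus adapts to the shape of the distribution, confident predictions sample from few tokens while uncertain ones keep many — this is what makes top-p sampling less repetitive than a fixed top-k cutoff.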
|
|
|
Here is a basic usage example: |
|
|
|
```python |
|
from transformers import GPT2LMHeadModel, GPT2Tokenizer |
|
|
|
tokenizer = GPT2Tokenizer.from_pretrained("SpartanCinder/GPT2-pretrained-lyric-generation") |
|
model = GPT2LMHeadModel.from_pretrained("SpartanCinder/GPT2-pretrained-lyric-generation") |
|
|
|
input_ids = tokenizer.encode("Once upon a time", return_tensors='pt') |
|
output = model.generate(
    input_ids,
    max_length=100,
    num_return_sequences=5,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode every returned sequence, not just the first.
for i, sequence in enumerate(output):
    print(f"--- Lyric {i + 1} ---")
    print(tokenizer.decode(sequence, skip_special_tokens=True))
```