
Dataset Card for "huggingartists/headie-one"

Dataset Summary

A lyrics dataset parsed from Genius, designed for generating lyrics with HuggingArtists. A model fine-tuned on this dataset is available in the HuggingArtists collection on the Hugging Face Hub.

Supported Tasks and Leaderboards

More Information Needed



How to use

How to load this dataset directly with the datasets library:

from datasets import load_dataset

dataset = load_dataset("huggingartists/headie-one")

Dataset Structure

An example of 'train' looks as follows.

This example was too long and was cropped:

    "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."

Data Fields

The data fields are the same among all splits.

  • text: a string feature.
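Because the schema has a single string field, each row is simply a dictionary with one 'text' key. A minimal sketch of what downstream code can rely on (the lyric string below is a hypothetical stand-in, not taken from the dataset):

```python
# Each row of the dataset is a dict with a single string field 'text'.
# The value shown here is a hypothetical placeholder for a real lyric.
example = {"text": "Look, I was gonna go easy on you\nNot to hurt your feelings"}

# Downstream code can rely on exactly one field, of type str.
assert set(example.keys()) == {"text"}
assert isinstance(example["text"], str)

# Lyrics keep their line breaks, so splitting on '\n' recovers the lines.
lines = example["text"].split("\n")
print(len(lines))  # → 2
```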

Data Splits

train  validation  test
  224           -     -

The 'train' split can easily be divided into 'train', 'validation' and 'test' with a few lines of code:

from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/headie-one")

train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Cut the list of texts at the 90% and 97% marks,
# leaving the final 3% for the test split.
texts = datasets['train']['text']
train, validation, test = np.split(
    np.array(texts),
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))],
)

datasets = DatasetDict({
    'train': Dataset.from_dict({'text': list(train)}),
    'validation': Dataset.from_dict({'text': list(validation)}),
    'test': Dataset.from_dict({'text': list(test)}),
})
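To sanity-check the resulting split sizes without downloading anything, the same np.split arithmetic can be run on a stand-in list of 224 items (the size of this dataset's train split); the strings are placeholders:

```python
import numpy as np

# Stand-in for datasets['train']['text']: 224 placeholder strings.
texts = [f"song {i}" for i in range(224)]

train_percentage = 0.9
validation_percentage = 0.07

# np.split cuts at the 90% and 97% marks, leaving 3% for test.
train, validation, test = np.split(
    np.array(texts),
    [int(len(texts) * train_percentage),
     int(len(texts) * (train_percentage + validation_percentage))],
)

print(len(train), len(validation), len(test))  # → 201 16 7
```

Note that int() truncates, so the three parts always sum to the original 224 rows.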

Dataset Creation

Curation Rationale

More Information Needed

Source Data

Initial Data Collection and Normalization

More Information Needed

Who are the source language producers?

More Information Needed


Annotation process

More Information Needed

Who are the annotators?

More Information Needed

Personal and Sensitive Information

More Information Needed

Considerations for Using the Data

Social Impact of Dataset

More Information Needed

Discussion of Biases

More Information Needed

Other Known Limitations

More Information Needed

Additional Information

Dataset Curators

More Information Needed

Licensing Information

More Information Needed

Citation Information

@InProceedings{huggingartists,
    author={Aleksey Korshuk}
    year=2021
}


Built by Aleksey Korshuk




For more details, visit the project repository.
