
Dataset Card for "huggingartists/headie-one"
Dataset Summary
This dataset contains lyrics parsed from Genius and is designed for generating lyrics with HuggingArtists. The corresponding model is available here.
Supported Tasks and Leaderboards
Languages
en
How to use
You can load this dataset directly with the datasets library:
from datasets import load_dataset
dataset = load_dataset("huggingartists/headie-one")
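Loading should yield a DatasetDict with a single train split; a minimal sketch of inspecting it (the row count comes from the Data Splits table below):

from datasets import load_dataset

dataset = load_dataset("huggingartists/headie-one")

# Expect a DatasetDict with one "train" split and a single "text" column.
print(dataset)
print(dataset["train"].num_rows)  # 224 rows per the Data Splits table below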
Dataset Structure
An example of 'train' looks as follows.
This example was too long and was cropped:
{
"text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
}
Data Fields
The data fields are the same among all splits.
text: a string feature.
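A minimal sketch of reading the text field from a single record (index 0 is arbitrary):

from datasets import load_dataset

dataset = load_dataset("huggingartists/headie-one")

# Each record is a dict with a single "text" key holding the full lyrics string.
sample = dataset["train"][0]
print(sample["text"][:200])  # first 200 characters of the lyrics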
Data Splits
| train | validation | test |
|-------|------------|------|
| 224   | -          | -    |
The train split can be divided into train, validation, and test subsets with a few lines of code:
from datasets import load_dataset, Dataset, DatasetDict
import numpy as np

datasets = load_dataset("huggingartists/headie-one")

train_percentage = 0.9
validation_percentage = 0.07
test_percentage = 0.03

# Split the list of lyrics at the 90% and 97% marks to get
# train/validation/test portions of roughly 90%/7%/3%.
texts = datasets['train']['text']
train, validation, test = np.split(
    texts,
    [
        int(len(texts) * train_percentage),
        int(len(texts) * (train_percentage + validation_percentage)),
    ],
)

datasets = DatasetDict(
    {
        'train': Dataset.from_dict({'text': list(train)}),
        'validation': Dataset.from_dict({'text': list(validation)}),
        'test': Dataset.from_dict({'text': list(test)}),
    }
)
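Alternatively, a minimal sketch using the datasets library's built-in train_test_split helper; the test_size values and seed here are illustrative assumptions that mirror the 90/7/3 split above:

from datasets import load_dataset, DatasetDict

ds = load_dataset("huggingartists/headie-one")["train"]

# Carve off 10% as a held-out pool, then split that pool so that
# validation is about 7% and test about 3% of the original data.
split = ds.train_test_split(test_size=0.1, seed=42)
holdout = split["test"].train_test_split(test_size=0.3, seed=42)

datasets = DatasetDict(
    {
        "train": split["train"],
        "validation": holdout["train"],
        "test": holdout["test"],
    }
)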
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
Discussion of Biases
Other Known Limitations
Additional Information
Dataset Curators
Licensing Information
Citation Information
@InProceedings{huggingartists,
    author = {Aleksey Korshuk},
    year   = {2022}
}
About
Built by Aleksey Korshuk
For more details, visit the project repository.