---
language:
- en
pretty_name: sf-nexus-extracted-features
---
# Dataset Card for SF Nexus Extracted Features

## Dataset Description
- Homepage: https://sfnexus.io/
- Repository: https://github.com/SF-Nexus/extracted-features/tree/main
- Paper:
- Leaderboard:
- Point of Contact: Alex Wermer-Colan
### Dataset Summary
The SF Nexus Extracted Features dataset contains text and metadata from 403 mid-twentieth-century science fiction books, originally digitized from Temple University Libraries' Paskow Science Fiction Collection. After digitization, the books were cleaned using ABBYY FineReader. Because this is a collection of copyrighted fiction, the books have been disaggregated. To improve the performance of topic modeling and other NLP tasks, each book has also been split into chapters and then into chunks of approximately 1,000 words. Each row of this dataset contains one "chunk" of text as well as metadata about that text's title, author, and publication.
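For readers curious what this chunking step looks like in practice, a minimal sketch is below. The function and file name are illustrative assumptions, not the project's actual preprocessing pipeline.

```python
# Illustrative sketch only: split a chapter's text into chunks of roughly
# 1,000 words, approximating the disaggregation described above.
# (Hypothetical helper; the dataset's real preprocessing may differ.)
def chunk_text(text: str, chunk_size: int = 1000) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), chunk_size)]

chapter = open("chapter_01.txt").read()  # hypothetical chapter file
chunks = chunk_text(chapter)
print(len(chunks), "chunks;", len(chunks[0].split()), "words in the first chunk")
```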
### About the SF Nexus Corpus
The Paskow Science Fiction Collection primarily contains post-WWII materials, especially mass-market works of the New Wave era (often dated to 1964-1980). The digitized texts have also been ingested into HathiTrust's repository for preservation and data curation; they are now viewable on HathiTrust's Temple page for non-consumptive research. For more information on the project to digitize and curate a corpus of "New Wave" science fiction, see Alex Wermer-Colan's post on the Temple University Scholars Studio blog, "Building a New Wave Science Fiction Corpus."
### Languages
English
## Dataset Structure
This dataset contains disaggregated "chunks" of text from mid-twentieth century science fiction books and associated metadata. For example:
```python
{'ID': 7299,
 'Title': 'MILLENNIUM',
 'Author': 'VARLEY',
 'Pub Year': '1983',
 'Chapter': 'None',
 'Chunk': '105',
 'Text': '. . . . . . / 1958 1976 249 A Ambler, And As Baker Ben Berkley Bogart Bova Bova, By Casablanca DEMON Eric Eugene, Eugene, Goes HOTLINE, Herman Humphrey Hupfeld It John John MILLENNIUM MacQuitty, Millennium, Mister Night OF OPHIUCHI One Oregon Oregon...” Organisation, PERSISTENCE Rank Remember Roy Sam THE THE TITAN, The The Time Titanic, VISION, Varley Varley WIZARD, William a about acknowledgement: also an and and and asked author be began bestselling by by by by by by completed continued course, directed do excellent film final had in in in is is is is, name nothing novel novel novel, octopus of of of of of of pet play produced published rain-shrouded s screenplay sinking song soon that the the the the the the the this time title title to to to to travel trilogy was was with with with with written written ’ “ ”',
 'Clean Text': 'a ambler and as baker ben berkley bogart bova bova by casablanca demon eric eugene eugene goes hotline herman humphrey hupfeld it john john millennium macquitty millennium mister night of ophiuchi one oregon oregon organisation persistence rank remember roy sam the the titan the the time titanic vision varley varley wizard william a about acknowledgement also an and and and asked author be began bestselling by by by by by by completed continued course directed do excellent film final had in in in is is is is name nothing novel novel novel octopus of of of of of of pet play produced published rain shrouded s screenplay sinking song soon that the the the the the the the this time title title to to to to travel trilogy was was with with with with written written',
 'Chunk Word Count': '948',
}
```
### Data Fields
- ID: int A unique id for the text
- Title: str The title of the book from which the text has been extracted
- Author: str The author of the book from which the text has been extracted
- Pub Year: str The date on which the book was published (first printing)
- Chapter: int The chapter in the book from which the text has been extracted
- Chunk: int Number indicating which "chunk" of text has been extracted (chunks are numbered per book; each book was split by chapter and then into n chunks of approx. 1000 words)
- Text: str The chunk of text extracted from the book
- Clean Text: str The chunk of text extracted from the book with lowercasing performed and punctuation, numbers and extra spaces removed
- Chunk Word Count: int The number of words the chunk of text contains
To Be Added:
- summary: str A brief summary of the book, if extracted from library records
- pub_date: int The date on which the book was published (first printing)
- pub_city: str The city in which the book was published (first printing)
- lcgft_category: str Information from the Library of Congress Genre/Form Terms for Library and Archival Materials, if known
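To make the difference between the Text and Clean Text fields concrete, the snippet below approximates the normalization described above (lowercasing; removing punctuation, numbers, and extra spaces). The dataset's actual cleaning script may differ in detail.

```python
import re

# Approximation of the "Clean Text" normalization described above:
# lowercase, drop punctuation and digits, collapse extra whitespace.
# The dataset's actual cleaning code may differ in detail.
def clean_text(text: str) -> str:
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)      # remove punctuation and numbers
    return re.sub(r"\s+", " ", text).strip()   # collapse runs of whitespace

print(clean_text("MILLENNIUM, by John Varley (1983), 249 pages."))
# millennium by john varley pages
```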
## Loading the Dataset
Use the following code to load the dataset in a Python environment (note: this does not work while the repo is set to private):

```python
from datasets import load_dataset

# If the dataset is gated/private, make sure you have run huggingface-cli login
dataset = load_dataset("SF-Corpus/extracted_features")
```
Or just clone the dataset repo:

```bash
git lfs install
git clone https://huggingface.co/datasets/SF-Corpus/extracted_features

# If you want to clone without large files (just their pointers),
# prepend the clone command with the GIT_LFS_SKIP_SMUDGE environment variable:
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/SF-Corpus/extracted_features
```
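Once loaded, the chunks can be inspected and filtered like any other datasets object. A quick sanity check might look like the following (assuming the dataset exposes a default train split):

```python
from datasets import load_dataset

# Assumes a "train" split is available
dataset = load_dataset("SF-Corpus/extracted_features", split="train")

print(dataset.column_names)                                  # field names documented above
print(dataset[0]["Title"], dataset[0]["Chunk Word Count"])

# Example: keep only chunks from a single author
varley = dataset.filter(lambda row: row["Author"] == "VARLEY")
print(len(varley), "chunks by Varley")
```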
## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

[More Information Needed]

### Citation Information

[More Information Needed]

### Contributions

[More Information Needed]