---
language:
- en
pretty_name: sf-nexus-extracted-features
---
# Dataset Card for SF Nexus Extracted Features
## Dataset Description
- **Homepage:** https://sfnexus.io/
- **Repository:** https://github.com/SF-Nexus/extracted-features/tree/main
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** Alex Wermer-Colan
### Dataset Summary
The SF Nexus Extracted Features dataset contains text and metadata from 403 mid-twentieth-century science fiction books, originally digitized from Temple University Libraries' Paskow Science Fiction Collection.
After digitization, the books were cleaned using ABBYY FineReader.
Because this is a collection of copyrighted fiction, the books have been disaggregated.
To improve the performance of topic modeling and other NLP tasks, each book has also been split into chapters and then into chunks of approximately 1,000 words (see the sketch below).
Each row of this dataset contains one "chunk" of text as well as metadata about that text's title, author, and publication.
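The exact preprocessing pipeline is not included in this card; the following is a minimal sketch of how a chapter might be split into roughly 1,000-word chunks. The function name `chunk_text` and the chunk size parameter are illustrative assumptions, not the project's actual code.
```python
# Illustrative sketch only: split a chapter's text into ~1,000-word chunks,
# mirroring the chunking described above. Not the project's actual pipeline.
def chunk_text(text: str, chunk_size: int = 1000) -> list[str]:
    words = text.split()
    return [
        " ".join(words[i:i + chunk_size])
        for i in range(0, len(words), chunk_size)
    ]

chapter = "Call me Ishmael. " * 500  # stand-in chapter text
chunks = chunk_text(chapter)
print(len(chunks), len(chunks[0].split()))  # number of chunks, words in first chunk
```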
### About the SF Nexus Corpus
The Paskow Science Fiction Collection consists primarily of post-WWII materials, especially mass-market works of the New Wave era (often dated to 1964-1980).
The digitized texts have also been ingested into HathiTrust's repository for preservation and data curation; they are now viewable on HathiTrust's [Temple page](https://babel.hathitrust.org/cgi/ls?field1=ocr;q1=%2A;a=srchls;facet=htsource%3A%22Temple%20University%22;pn=4) for non-consumptive research.
For more information on the project to digitize and curate a corpus of "New Wave" science fiction, see Alex Wermer-Colan's post on the Temple University Scholars Studio blog, ["Building a New Wave Science Fiction Corpus"](https://sites.temple.edu/tudsc/2017/12/20/building-new-wave-science-fiction-corpus/).
### Languages
English
## Dataset Structure
This dataset contains disaggregated "chunks" of text from mid-twentieth century science fiction books and associated metadata. For example:
```
{'ID': 7299,
'Title': 'MILLENNIUM',
'Author': 'VARLEY',
'Pub Year': '1983',
'Chapter': 'None',
'Chunk': '105',
'Text': '. . . . . . / 1958 1976 249 A Ambler, And As Baker Ben Berkley Bogart Bova Bova, By Casablanca DEMON Eric Eugene, Eugene, Goes HOTLINE, Herman Humphrey Hupfeld It John John MILLENNIUM MacQuitty, Millennium, Mister Night OF OPHIUCHI One Oregon Oregon...” Organisation, PERSISTENCE Rank Remember Roy Sam THE THE TITAN, The The Time Titanic, VISION, Varley Varley WIZARD, William a about acknowledgement: also an and and and asked author be began bestselling by by by by by by completed continued course, directed do excellent film final had in in in is is is is, name nothing novel novel novel, octopus of of of of of of pet play produced published rain-shrouded s screenplay sinking song soon that the the the the the the the this time title title to to to to travel trilogy was was with with with with written written ’ “ ”',
'Clean Text': 'a ambler and as baker ben berkley bogart bova bova by casablanca demon eric eugene eugene goes hotline herman humphrey hupfeld it john john millennium macquitty millennium mister night of ophiuchi one oregon oregon organisation persistence rank remember roy sam the the titan the the time titanic vision varley varley wizard william a about acknowledgement also an and and and asked author be began bestselling by by by by by by completed continued course directed do excellent film final had in in in is is is is name nothing novel novel novel octopus of of of of of of pet play produced published rain shrouded s screenplay sinking song soon that the the the the the the the this time title title to to to to travel trilogy was was with with with with written written',
'Chunk Word Count': '948',
}
```
### Data Fields
- **ID: int** A unique ID for the text chunk
- **Title: str** The title of the book from which the text has been extracted
- **Author: str** The author of the book from which the text has been extracted
- **Pub Year: str** The year in which the book was published (first printing)
- **Chapter: int** The chapter in the book from which the text has been extracted
- **Chunk: int** Number indicating which "chunk" of text has been extracted (chunks are numbered per book; each book was split by chapter and then into n chunks of approx. 1000 words)
- **Text: str** The chunk of text extracted from the book
- **Clean Text: str** The chunk of text extracted from the book, lowercased and with punctuation, numbers, and extra whitespace removed (see the sketch below for one way to approximate this normalization)
- **Chunk Word Count: int** The number of words the chunk of text contains
To Be Added:
- **summary: str** A brief summary of the book, if extracted from library records
- **pub_date: int** The date on which the book was published (first printing)
- **pub_city: str** The city in which the book was published (first printing)
- **lcgft_category: str** Information from the Library of Congress Genre/Form Terms for Library and Archival Materials, if known
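The exact cleaning routine used to produce the `Clean Text` field is not documented in this card. The following is a rough sketch of a comparable normalization (lowercasing, stripping punctuation and digits, collapsing whitespace), offered as an assumption rather than the project's actual code.
```python
import re

def clean_text(text: str) -> str:
    """Approximate the 'Clean Text' normalization described above:
    lowercase, drop punctuation and digits, and collapse whitespace."""
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)      # remove punctuation, digits, other symbols
    return re.sub(r"\s+", " ", text).strip()   # collapse extra whitespace

print(clean_text("MILLENNIUM, by John Varley (1983), chapter 1..."))
# millennium by john varley chapter
```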
### Loading the Dataset
Use the following code to load the dataset in a Python environment (note: this will not work while the repository is set to private):
```python
from datasets import load_dataset
# If the dataset is gated/private, make sure you have run huggingface-cli login
dataset = load_dataset("SF-Corpus/extracted_features")
```
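Once loaded, the chunks can be inspected and filtered like any other `datasets` object. Below is a minimal sketch; the `"train"` split name and the column values used for filtering are assumptions about how the data is organized, not guarantees.
```python
from datasets import load_dataset

dataset = load_dataset("SF-Corpus/extracted_features")

# Assumes a single "train" split; adjust if the repo exposes different splits.
train = dataset["train"]
print(train.column_names)      # e.g. ID, Title, Author, Pub Year, Chapter, Chunk, ...
print(train[0]["Title"])

# Keep only chunks attributed to one author (column name and value are assumptions).
varley = train.filter(lambda row: row["Author"] == "VARLEY")
print(len(varley))
```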
Or clone the dataset repository directly:
```bash
git lfs install
git clone https://huggingface.co/datasets/SF-Corpus/extracted_features

# To clone without downloading the large files (pointers only),
# prepend the GIT_LFS_SKIP_SMUDGE environment variable:
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/SF-Corpus/extracted_features
```
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] |