Vector store of embeddings for books
- "1984" by George Orwell
- "The Almanac of Naval Ravikant" by Eric Jorgenson
This is a FAISS vector store created with Instructor embeddings using LangChain. Use it for similarity search, question answering, or anything else that leverages embeddings!
Creating these embeddings can take a while, so here's a convenient, downloadable one.
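To see what "similarity search" over an embedding index means, here is a toy, dependency-free sketch. The passages and 3-dimensional vectors below are made up for illustration; the real store uses FAISS over Instructor embeddings, which rank passages the same way but at much higher dimension.

```python
import math

# Toy "index": each passage stored with a hypothetical embedding vector.
# Real embeddings come from a model; these 3-d vectors are made up.
index = [
    ("Big Brother is watching you.", [0.9, 0.1, 0.0]),
    ("War is peace. Freedom is slavery.", [0.7, 0.3, 0.1]),
    ("Seek wealth, not money or status.", [0.0, 0.2, 0.9]),
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def similarity_search(query_vec, k=2):
    # Rank every stored passage by similarity to the query vector,
    # return the k closest — exactly what a vector store does at scale.
    scored = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in scored[:k]]

# A query embedding that happens to sit near the "1984" passages.
print(similarity_search([0.8, 0.2, 0.0], k=2))
```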
How to use
- Specify the book from one of the following:
  - "1984"
  - "The Almanac of Naval Ravikant"
- Download the data
- Load it to use with LangChain
```shell
pip install -qqq langchain InstructorEmbedding sentence_transformers faiss-cpu huggingface_hub
```

```python
import os

from huggingface_hub import snapshot_download
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores.faiss import FAISS

# Download the vector store for the book you want
BOOK = "1984"
cache_dir = f"{BOOK}_cache"

snapshot_download(
    repo_id="calmgoose/book-embeddings",
    repo_type="dataset",
    revision="main",
    allow_patterns=f"books/{BOOK}/*",  # to download only the one book
    cache_dir=cache_dir,
)

# Get the path to the vector store folder you just downloaded:
# walk through `cache_dir` recursively until we find the book's folder
target_dir = BOOK
target_path = None
for root, dirs, files in os.walk(cache_dir):
    if target_dir in dirs:
        target_path = os.path.join(root, target_dir)
        break

# Load embeddings — this is what was used to create the embeddings for the book
embeddings = HuggingFaceInstructEmbeddings(
    embed_instruction="Represent the book passage for retrieval: ",
    query_instruction="Represent the question for retrieving supporting texts from the book passage: ",
)

# Load the vector store to use with LangChain
docsearch = FAISS.load_local(folder_path=target_path, embeddings=embeddings)

# Similarity search
question = "Who is big brother?"
search = docsearch.similarity_search(question, k=4)

for item in search:
    print(item.page_content)
    print(f"From page: {item.metadata['page']}")
    print("---")
```
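The `os.walk` lookup step above can be checked in isolation. This standalone, standard-library-only sketch builds a nested cache layout (the folder names are made up for the demo, mimicking the nesting `snapshot_download` produces) and finds the book folder inside it:

```python
import os
import tempfile

def find_dir(cache_dir, target_dir):
    """Walk `cache_dir` recursively; return the first folder named `target_dir`."""
    for root, dirs, files in os.walk(cache_dir):
        if target_dir in dirs:
            return os.path.join(root, target_dir)
    return None  # not found

# Simulate a nested snapshot layout (hypothetical names, for the demo only)
with tempfile.TemporaryDirectory() as cache_dir:
    book_dir = os.path.join(cache_dir, "snapshots", "abc123", "books", "1984")
    os.makedirs(book_dir)
    found = find_dir(cache_dir, "1984")
    print(found)               # ends with .../books/1984
    print(found == book_dir)   # True
```

Returning `None` when the folder is missing (instead of leaving the variable unset) makes the failure mode explicit before you hand the path to `FAISS.load_local`.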