---
license: apache-2.0
task_categories:
- question-answering
- text-retrieval
language:
- en
tags:
- vector search
- semantic search
- retrieval augmented generation
size_categories:
- 1K<n<10K
---
## Overview
This dataset consists of Airbnb listings with property descriptions, reviews, and other metadata.
We also provide embeddings of the property descriptions (generated with OpenAI's **text-embedding-3-small** model) so you can use this dataset for building Search and RAG applications.
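To search against these embeddings, queries need to be embedded with the same model. Below is a minimal sketch using the `openai` Python client; it assumes an `OPENAI_API_KEY` environment variable is set, and the query string is purely illustrative:

```python
from openai import OpenAI

# Sketch: embed a search query with the same model used for the dataset's embeddings.
# Assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

response = client.embeddings.create(
    model="text-embedding-3-small",
    input="A quiet two-bedroom apartment near public transit",
)
query_embedding = response.data[0].embedding  # 1536-dimensional vector by default
```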
## Dataset Structure
Here is a full list of fields contained in the dataset. Some noteworthy fields have been highlighted:
- _id: Unique identifier for the listing
- listing_url: URL for the listing on AirBnB
- **name**: Title or name of the listing
- **summary**: Short overview of listing
- **space**: Short description of the space, amenities etc.
- **description**: Full listing description
- neighborhood_overview: Description of surrounding area
- notes: Special instructions or notes
- transit: Nearby public transportation options
- access: How to access the property (door codes, etc.)
- interaction: Host's preferred interaction medium
- house_rules: Rules guests must follow
- **property_type**: Type of property
- room_type: Listing's room category
- bed_type: Type of bed provided
- minimum_nights: Minimum stay required
- maximum_nights: Maximum stay allowed
- cancellation_policy: Terms for cancelling booking
- first_review: Date of first review
- last_review: Date of latest review
- **accommodates**: Number of guests accommodated
- **bedrooms**: Number of bedrooms available
- **beds**: Number of beds available
- number_of_reviews: Total reviews received
- bathrooms: Number of bathrooms available
- **amenities**: List of amenities offered
- **price**: Nightly price for listing
- security_deposit: Required security deposit amount
- cleaning_fee: Additional cleaning fee charged
- extra_people: Fee for additional guests
- guests_included: Number of guests included in the base price
- **images**: Links to listing images
- host: Information about the host
- **address**: Physical address of listing
- **availability**: Availability dates for listing
- **review_scores**: Aggregate review scores
- reviews: Individual guest reviews
- weekly_price: Discounted price for week
- monthly_price: Discounted price for month
- reviews_per_month: Average monthly review count
- **space_embeddings**: Embeddings of the property description in the **space** field
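
To get a feel for these fields, you can load the dataset with the Hugging Face `datasets` library and inspect a record. A minimal sketch (the fields printed are just examples):

```python
from datasets import load_dataset

# Sketch: load the dataset and inspect a single listing's fields.
dataset = load_dataset("MongoDB/airbnb_embeddings", split="train")

listing = dataset[0]
print(listing["name"], listing["property_type"], listing["accommodates"])
print(len(listing["space_embeddings"]))  # embedding dimensionality
```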
## Usage
This dataset can be useful for:
- Building Hybrid Search applications. Use the embeddings provided for vector search and the metadata fields for pre-filtering and/or full-text search, as shown in the sketch after this list.
- Building Multimodal Search applications. Some listings have images associated with them. Use a model like [CLIP](https://huggingface.co/openai/clip-vit-base-patch32) to generate image and text embeddings.
- Building RAG applications.
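
As an example of the hybrid pattern above, here is a minimal sketch of a pre-filtered vector search using MongoDB Atlas's `$vectorSearch` aggregation stage. It assumes the data has been ingested into a pymongo `collection` as described below, that a vector index named `vector_index` (an illustrative name) exists on `space_embeddings` with `accommodates` indexed as a filter field (see the index sketch at the end of this README), and that `query_embedding` is an embedding of the user's query generated with **text-embedding-3-small**:

```python
# Sketch: vector search with a metadata pre-filter.
# Assumes `collection` is a pymongo collection holding this dataset and
# `query_embedding` is a list[float] produced by text-embedding-3-small.
pipeline = [
    {
        "$vectorSearch": {
            "index": "vector_index",  # hypothetical index name
            "path": "space_embeddings",
            "queryVector": query_embedding,
            "numCandidates": 100,  # candidates considered before final ranking
            "limit": 5,
            "filter": {"accommodates": {"$gte": 4}},  # metadata pre-filter
        }
    },
    {
        "$project": {
            "name": 1,
            "space": 1,
            "score": {"$meta": "vectorSearchScore"},
        }
    },
]

for doc in collection.aggregate(pipeline):
    print(doc["name"], doc["score"])
```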
## Ingest Data
To experiment with this dataset using MongoDB Atlas, first [create a MongoDB Atlas account](https://www.mongodb.com/cloud/atlas/register?utm_campaign=devrel&utm_source=community&utm_medium=organic_social&utm_content=Hugging%20Face%20Dataset&utm_term=apoorva.joshi).
You can then use the following script to load this dataset into your MongoDB Atlas cluster:
```python
import os

from bson import json_util
from datasets import load_dataset
from pymongo import MongoClient

# MongoDB Atlas URI and client setup
uri = os.environ.get("MONGODB_ATLAS_URI")
client = MongoClient(uri)

db_name = "your_database_name"  # Change this to your actual database name
collection_name = "airbnb_embeddings"  # Change this to your actual collection name
collection = client[db_name][collection_name]

# Load the "airbnb_embeddings" dataset from Hugging Face
dataset = load_dataset("MongoDB/airbnb_embeddings")

# Iterate through the dataset and insert documents in batches of 1000
insert_data = []
for item in dataset["train"]:
    # Round-trip through Extended JSON to convert the item to MongoDB document format
    doc_item = json_util.loads(json_util.dumps(item))
    insert_data.append(doc_item)

    # Insert in batches of 1000 documents
    if len(insert_data) == 1000:
        collection.insert_many(insert_data)
        print("1000 records ingested")
        insert_data = []

# Insert any remaining documents
if insert_data:
    collection.insert_many(insert_data)

print("Data ingested")
```
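
Before you can run vector search queries against the ingested data, you need an Atlas Vector Search index on the `space_embeddings` field. Below is a minimal sketch of creating one programmatically; it assumes a recent PyMongo (4.6+, for `create_search_index` with the `vectorSearch` type), reuses `collection` from the script above, and uses an illustrative index name of `vector_index` with 1536 dimensions (the default output size of **text-embedding-3-small**). The same index can also be defined through the Atlas UI.

```python
from pymongo.operations import SearchIndexModel

# Sketch: define an Atlas Vector Search index on the embeddings field.
# The index name ("vector_index") and the filter field are illustrative choices.
index_model = SearchIndexModel(
    definition={
        "fields": [
            {
                "type": "vector",
                "path": "space_embeddings",
                "numDimensions": 1536,  # text-embedding-3-small's default dimensionality
                "similarity": "cosine",
            },
            # Indexing a metadata field as "filter" enables pre-filtering in $vectorSearch
            {"type": "filter", "path": "accommodates"},
        ]
    },
    name="vector_index",
    type="vectorSearch",
)

collection.create_search_index(model=index_model)
```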