---
license: apache-2.0
task_categories:
  - question-answering
  - text-retrieval
  - text-to-image
language:
  - en
tags:
  - vector search
  - multimodal
  - retrieval augmented generation
size_categories:
  - 1K<n<10K
---

## Overview

This dataset consists of Airbnb listings, including property descriptions, reviews, and other metadata.

It also contains text embeddings of the property descriptions as well as image embeddings of the listing images. The text embeddings were created using OpenAI's text-embedding-3-small model, and the image embeddings using OpenAI's clip-vit-base-patch32 model, available on Hugging Face.

The text embeddings have 1536 dimensions, while the image embeddings have 512 dimensions.
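
To query against these vectors, your query embeddings must come from the same models. Below is a minimal sketch of how that might look, assuming the `openai` and `transformers` packages are installed and an `OPENAI_API_KEY` is set in the environment; the example query string is purely illustrative:

```python
import torch
from openai import OpenAI
from transformers import CLIPModel, CLIPProcessor

query = "A cozy two-bedroom apartment near the beach"  # hypothetical query

# Text query -> 1536-dim vector, comparable with the `text_embeddings` field
openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = openai_client.embeddings.create(model="text-embedding-3-small", input=query)
text_query_vector = response.data[0].embedding  # list of 1536 floats

# Text query -> 512-dim CLIP vector, comparable with the `image_embeddings` field
clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
inputs = clip_processor(text=[query], return_tensors="pt", padding=True)
with torch.no_grad():
    image_query_vector = clip_model.get_text_features(**inputs)[0].tolist()  # 512 floats
```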

## Dataset Structure

Here is a full list of the fields contained in the dataset:

- `_id`: Unique identifier for the listing
- `listing_url`: URL of the listing on Airbnb
- `name`: Title or name of the listing
- `summary`: Short overview of the listing
- `space`: Short description of the space, amenities, etc.
- `description`: Full listing description
- `neighborhood_overview`: Description of the surrounding area
- `notes`: Special instructions or notes
- `transit`: Nearby public transportation options
- `access`: How to access the property (door codes, etc.)
- `interaction`: Host's preferred interaction medium
- `house_rules`: Rules guests must follow
- `property_type`: Type of property
- `room_type`: Listing's room category
- `bed_type`: Type of bed provided
- `minimum_nights`: Minimum stay required
- `maximum_nights`: Maximum stay allowed
- `cancellation_policy`: Terms for cancelling a booking
- `first_review`: Date of the first review
- `last_review`: Date of the latest review
- `accommodates`: Number of guests accommodated
- `bedrooms`: Number of bedrooms available
- `beds`: Number of beds available
- `number_of_reviews`: Total reviews received
- `bathrooms`: Number of bathrooms available
- `amenities`: List of amenities offered
- `price`: Nightly price for the listing
- `security_deposit`: Required security deposit amount
- `cleaning_fee`: Additional cleaning fee charged
- `extra_people`: Fee for additional guests
- `guests_included`: Number of guests included in the base price
- `images`: Links to listing images
- `host`: Information about the host
- `address`: Physical address of the listing
- `availability`: Availability dates for the listing
- `review_scores`: Aggregate review scores
- `reviews`: Individual guest reviews
- `weekly_price`: Discounted weekly price
- `monthly_price`: Discounted monthly price
- `text_embeddings`: 1536-dimensional embedding of the property description in the `space` field
- `image_embeddings`: 512-dimensional embedding of the `picture_url` in the `images` field
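
As a quick sanity check, the dataset and these fields can be inspected directly with the `datasets` library (field names and dimensions as listed above):

```python
from datasets import load_dataset

dataset = load_dataset("MongoDB/airbnb_embeddings", split="train")
print(dataset)  # prints the features and the number of rows

example = dataset[0]
print(example["name"])
print(len(example["text_embeddings"]))   # expected: 1536
print(len(example["image_embeddings"]))  # expected: 512
```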

## Usage

This dataset can be useful for:

- Building multimodal search applications: embed text queries using the CLIP model and retrieve relevant images using the image embeddings provided (see the sketch after this list).
- Building hybrid search applications: use the embeddings provided for vector search and the metadata fields for pre-filtering and/or full-text search (a query sketch follows the ingestion script below).
- Building retrieval-augmented generation (RAG) applications.
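
As a concrete example of the multimodal flow, here is a minimal in-memory sketch that ranks listings against a text query using the stored image embeddings. It assumes every record has an image embedding and uses brute-force cosine similarity; a production setup would use a vector index instead (see the Atlas example in the next section):

```python
import numpy as np
import torch
from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

dataset = load_dataset("MongoDB/airbnb_embeddings", split="train")

clip_model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Embed the text query into CLIP's shared text/image space
inputs = clip_processor(text=["modern loft with city views"], return_tensors="pt", padding=True)
with torch.no_grad():
    query_vector = clip_model.get_text_features(**inputs)[0].numpy()

# Rank listings by cosine similarity between the query and the stored image embeddings
image_vectors = np.array(dataset["image_embeddings"], dtype=np.float32)
scores = image_vectors @ query_vector / (
    np.linalg.norm(image_vectors, axis=1) * np.linalg.norm(query_vector)
)
for idx in np.argsort(scores)[::-1][:5]:
    print(dataset[int(idx)]["name"], float(scores[idx]))
```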

## Ingest Data

To experiment with this dataset using MongoDB Atlas, first create a MongoDB Atlas account.

You can then use the following script to load this dataset into your MongoDB Atlas cluster:

```python
import os

from bson import json_util
from datasets import load_dataset
from pymongo import MongoClient

# MongoDB Atlas URI and client setup
uri = os.environ.get('MONGODB_ATLAS_URI')
client = MongoClient(uri)

# Change to the appropriate database and collection names
db_name = 'your_database_name'  # Change this to your actual database name
collection_name = 'airbnb_embeddings'  # Change this to your actual collection name

collection = client[db_name][collection_name]

# Load the "airbnb_embeddings" dataset from Hugging Face
dataset = load_dataset("MongoDB/airbnb_embeddings")

insert_data = []

# Iterate through the dataset and prepare the documents for insertion
# Documents are inserted into the database in batches of 1000
for item in dataset['train']:
    # Convert the dataset item to MongoDB document format
    doc_item = json_util.loads(json_util.dumps(item))
    insert_data.append(doc_item)

    # Insert in batches of 1000 documents
    if len(insert_data) == 1000:
        collection.insert_many(insert_data)
        print("1000 records ingested")
        insert_data = []

# Insert any remaining documents
if len(insert_data) > 0:
    collection.insert_many(insert_data)
    print("{} records ingested".format(len(insert_data)))

print("All records ingested successfully!")
```