---
license: cc0-1.0
task_categories:
  - sentence-similarity
language:
  - en
pretty_name: Movie descriptors for Semantic Search
size_categories:
  - 10K<n<100K
tags:
  - movies
  - embeddings
  - semantic search
  - films
  - hpi
  - workshop
---

# Dataset Card

This dataset is a subset of Kaggle's The Movie Dataset that contains only the name, release year and overview of every film in the original dataset for which that information is complete. It is intended as a toy dataset for learning about embeddings in a workshop run by the AI Service Center Berlin-Brandenburg at the Hasso Plattner Institute.

A smaller version of this dataset is available here.

## Dataset Details

### Dataset Description

The dataset has 44,435 rows and 3 columns:

- 'name': the title of the movie
- 'release_year': the year of release
- 'overview': a brief description of the movie, used for advertisement

- **Curated by:** Mario Tormo Romero
- **Language(s) (NLP):** English
- **License:** CC0 1.0
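As a quick sketch of the schema described above (the rows below are made up for illustration, not actual entries from the dataset):

```python
import pandas as pd

# Toy rows mimicking the dataset's three columns (values are illustrative only)
df = pd.DataFrame(
    {
        "name": ["Example Movie A", "Example Movie B"],
        "release_year": [1995, 2001],
        "overview": [
            "A short promotional blurb for the first film.",
            "A short promotional blurb for the second film.",
        ],
    }
)

print(list(df.columns))  # ['name', 'release_year', 'overview']
```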

### Dataset Sources

This dataset is a subset of Kaggle's The Movie Dataset. We used only the movies_metadata.csv file, extracted some features (see Dataset Description) and dropped the rows that were not complete.

The original dataset has a CC0 1.0 license, which we have kept.

## Uses

This is a toy dataset created for pedagogical purposes. It is used in the Working with embeddings workshop created and organized by the AI Service Center Berlin-Brandenburg at the Hasso Plattner Institute.
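At its core, semantic search over the 'overview' column amounts to comparing embedding vectors by cosine similarity. A self-contained sketch with made-up 4-dimensional vectors follows; in a real setting the overviews would be embedded with a sentence-embedding model, and the values here are purely illustrative:

```python
import numpy as np

# Toy "embeddings" for three overviews and one query (illustrative values only)
overview_vecs = np.array([
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 0.8, 0.2, 0.0],
    [0.1, 0.0, 0.9, 0.3],
])
query_vec = np.array([1.0, 0.0, 0.1, 0.0])

def normalize(x):
    # L2-normalize along the last axis so dot products become cosine similarities
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Cosine similarity between the query and every overview
sims = normalize(overview_vecs) @ normalize(query_vec)
best = int(np.argmax(sims))
print(best)  # -> 0: the first toy vector points closest to the query
```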

## Dataset Creation

### Curation Rationale

With this dataset we want to provide a fast way of obtaining the data required for our workshops, without having to download huge datasets that contain far more information than needed.

### Source Data

Our source is Kaggle's The Movie Dataset, so the information comes from the MovieLens Dataset. The dataset consists of movies released on or before July 2017.

#### Data Collection and Processing

The data was downloaded from Kaggle as a zip file. The file movies_metadata.csv was then extracted.

The data was processed with the following code:

```python
import pandas as pd

# load the csv file
df = pd.read_csv("movies_metadata.csv", low_memory=False)

# select the required columns, drop rows with missing values and
# reset the index
df = df.loc[:, ['title', 'release_date', 'overview']]
df = df.dropna(axis=0).reset_index(drop=True)

# make a new column with the release year
df.loc[:, 'release_year'] = pd.to_datetime(df.release_date).dt.year

# rename 'title' to 'name' to match the column list in the Dataset
# Description, and select the columns in the desired order
df = df.rename(columns={'title': 'name'})
df = df.loc[:, ['name', 'release_year', 'overview']]

# save the data to parquet
df.to_parquet('descriptors_data.parquet')
```
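The cleaning steps above can be sanity-checked on a toy frame (the rows below are made up and mirror only the relevant columns of movies_metadata.csv):

```python
import pandas as pd

# Toy metadata with one incomplete row
raw = pd.DataFrame({
    "title": ["Movie A", "Movie B", "Movie C"],
    "release_date": ["1995-10-30", None, "2001-06-15"],
    "overview": ["First plot.", "Second plot.", "Third plot."],
})

# Drop incomplete rows and derive the release year, as in the processing code
clean = raw.dropna(axis=0).reset_index(drop=True)
clean["release_year"] = pd.to_datetime(clean["release_date"]).dt.year

print(len(clean))                      # 2: the row with a missing release_date was dropped
print(clean["release_year"].tolist())  # [1995, 2001]
```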

#### Who are the source data producers?

This dataset is an ensemble of data collected by Rounak Banik from TMDB and GroupLens. In particular, the movies metadata has been collected from the TMDB Open API, but the source dataset is not endorsed or certified by TMDb.