---
license: cc0-1.0
task_categories:
- sentence-similarity
language:
- en
pretty_name: "Movie descriptors for Semantic Search"
size_categories:
- 10K<n<100K
---

# Dataset Card for "Movie descriptors for Semantic Search"

This dataset is derived from Kaggle's The Movies Dataset: we took the `movies_metadata.csv` file, extracted some features (see Dataset Description), and dropped the rows that were not complete. The original dataset has a CC0-1.0 license, which we have maintained.

## Uses

This is a toy dataset created for pedagogical purposes. It is used in the **Working with embeddings** workshop created and organized by the [AI Service Center Berlin-Brandenburg](https://hpi.de/kisz/) at the [Hasso Plattner Institute](https://hpi.de/). A minimal example of loading and embedding the data is sketched at the end of this card.

## Dataset Creation

### Curation Rationale

With this dataset we want to provide a fast way of obtaining the data required for our workshops, without having to download much larger datasets that contain far more information than we need.

### Source Data

Our source is Kaggle's The Movies Dataset, so the information comes from the MovieLens Dataset. The dataset consists of movies released on or before July 2017.

#### Data Collection and Processing

The data was downloaded from [Kaggle](https://www.kaggle.com/datasets/rounakbanik/the-movies-dataset) as a zip file, and the file `movies_metadata.csv` was extracted from it. The data was then processed with the following code:

```python
import pandas as pd

# load the csv file
df = pd.read_csv("movies_metadata.csv", low_memory=False)

# select the required columns, drop rows with missing values and
# reset the index
df = df.loc[:, ['title', 'release_date', 'overview']]
df = df.dropna(axis=0).reset_index(drop=True)

# make a new column with the release year
df.loc[:, 'release_year'] = pd.to_datetime(df.release_date).dt.year

# select the columns in the desired order
df = df.loc[:, ['title', 'release_year', 'overview']]

# save the data to parquet
df.to_parquet('descriptors_data.parquet')
```

#### Who are the source data producers?

This dataset is an ensemble of data collected by [Rounak Banik](https://www.kaggle.com/rounakbanik) from TMDB and GroupLens. In particular, the movie metadata was collected from the TMDB Open API; note that the source dataset is neither endorsed nor certified by TMDb.
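
## Example Usage

The dataset is intended for sentence-similarity and semantic-search exercises. Below is a minimal sketch of how the resulting `descriptors_data.parquet` file could be loaded and searched with sentence embeddings; the `sentence-transformers` model name and the example query are illustrative assumptions, not part of the workshop material.

```python
# Minimal semantic-search sketch over the movie overviews.
# Assumptions: descriptors_data.parquet is in the working directory, the
# sentence-transformers package is installed, and the model choice is illustrative.
import pandas as pd
from sentence_transformers import SentenceTransformer, util

df = pd.read_parquet("descriptors_data.parquet")

model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence-embedding model works

# Embed every overview once, then embed a query and rank by cosine similarity.
corpus_embeddings = model.encode(df["overview"].tolist(), convert_to_tensor=True)
query_embedding = model.encode("a heist that goes terribly wrong", convert_to_tensor=True)

hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=5)[0]
for hit in hits:
    row = df.iloc[hit["corpus_id"]]
    print(f'{row.title} ({row.release_year}): score={hit["score"]:.3f}')
```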