---
license: cc-by-sa-4.0
language:
- en
pretty_name: Enriched Movie Dataset with Multimodal Embeddings
tags:
- recommender-systems
- multimodal
- embeddings
- movies
size_categories:
- 10K<n<100K
---

# Enriched Movie Dataset with Multimodal Embeddings

## Dataset Description

This dataset provides rich metadata for over 44,000 movies, with a primary focus on providing a pre-computed, high-quality **multimodal content embedding** for each film.

It was created by fusing two popular Kaggle datasets: "The Movies Dataset" and the "IMDB Multimodal Vision & NLP Genre Classification" dataset. It has been further enriched with parsed text features and a unique 512-dimensional vector representation for each movie.

These embeddings were generated by a deep learning model that fuses text (plot, genres, cast, crew) and image (poster) data. The model was trained using a multi-task triplet loss framework to capture genre, director, and actor similarity simultaneously, making the embeddings robust and suitable for a wide range of recommendation and content-analysis tasks.

## Supported Tasks

This dataset is primarily designed for **recommender systems** research and development. The pre-computed embeddings can be used to quickly build and prototype:

* Content-based filtering models
* Collaborative filtering models (by joining with user ratings)
* Hybrid recommendation models

## Dataset Structure

The dataset is provided as a single Parquet file.

### Data Fields

The dataset contains numerous columns, but the key fields are:

* `tmdb_id`: A unique integer identifier for each movie from The Movie Database (TMDB).
* `title`: The title of the movie (string).
* `plot_description`: A text summary of the movie's plot (string).
* `genres`: A list of dictionaries containing the genre names and IDs (list of dicts).
* `directors`: A list of director names (list of strings).
* `main_actors`: A list of the primary actors (list of strings).
* `poster_byte`: The raw byte data for the movie poster image (bytes). This is only available for ~5,000 movies.
* `content_embedding`: **(Primary Feature)** A 512-element list of floats representing the multimodal embedding for the movie (list of floats).

### Data Splits

The dataset is not pre-split and is provided as a single file.
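
### Example: Content-Based Retrieval

As a quick illustration of how the `content_embedding` field can drive content-based filtering, here is a minimal sketch. The Parquet file name (`movies.parquet`) and the example title are placeholders rather than part of the dataset specification; adjust them to the actual file in this repository.

```python
# Minimal content-based retrieval sketch (file name is an assumed placeholder).
import numpy as np
import pandas as pd

df = pd.read_parquet("movies.parquet", columns=["tmdb_id", "title", "content_embedding"])

# Stack the per-movie embeddings into an (n_movies, 512) matrix and L2-normalise the rows.
emb = np.vstack(df["content_embedding"].to_numpy())
emb = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12)

def most_similar(title: str, k: int = 10) -> pd.DataFrame:
    """Return the k movies whose embeddings are closest (cosine similarity) to `title`."""
    idx = int(np.flatnonzero(df["title"].to_numpy() == title)[0])  # first exact title match
    scores = emb @ emb[idx]                # cosine similarity (rows are unit-norm)
    top = np.argsort(-scores)[1 : k + 1]   # skip the query movie itself
    return df.iloc[top][["tmdb_id", "title"]].assign(similarity=scores[top])

print(most_similar("Toy Story"))  # example title, assuming it is present in the data
```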
## Dataset Creation

### Curation Rationale

This dataset was created to bridge the gap between raw movie metadata and modern embedding-based recommendation techniques. By providing high-quality, pre-computed embeddings that capture multimodal information, it allows researchers and developers to rapidly prototype and build sophisticated recommendation systems without extensive feature engineering or model training from scratch.

### Source Data

This dataset is a derivative work created by fusing and enriching the following two public datasets:

* [The Movies Dataset](https://www.kaggle.com/datasets/rounakbanik/the-movies-dataset) by Rounak Banik.
* [IMDB Multimodal Vision & NLP Genre Classification](https://www.kaggle.com/datasets/zulkarnainsaurav/imdb-multimodal-vision-and-nlp-genre-classification) by Zulkarnain Saurav.

### Embedding Generation Process

The primary `content_embedding` column was generated through a multi-step process:

1. **Initial Feature Extraction:** For each movie, initial embeddings were generated from different modalities using powerful pre-trained models.
   * **Text Embeddings:** Plot descriptions, taglines, and cast/crew information were passed through a `sentence-transformers/all-MiniLM-L6-v2` model.
   * **Image Embeddings:** Movie posters were passed through the image encoder of the `openai/clip-vit-base-patch32` model. For movies without a poster, a zero vector was used as a neutral placeholder.
2. **Fusion Model Training:** These separate, high-dimensional vectors were concatenated and fed into a custom fusion model (a multi-layer perceptron). This fusion model was then trained using a multi-task triplet loss objective based on genre, director, and actor similarity.
3. **Final Embedding Generation:** The `content_embedding` in this dataset is the final 512-dimensional output of this trained fusion model, representing a rich, learned combination of all input modalities.

An illustrative sketch of this pipeline is provided at the end of this card.

## Citation Information

If you use this dataset in your research, please cite it as follows:

```bibtex
@misc{jibhkate2025enriched,
  author       = {Ujwal Jibhkate},
  title        = {Enriched Movie Dataset with Multimodal Embeddings},
  year         = {2025},
  publisher    = {Hugging Face},
  journal      = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/datasets/ujwal-jibhkate/enriched-movie-dataset-with-multimodal-embeddings}},
}
```
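
## Embedding Pipeline Sketch (Illustrative)

For reference, the sketch below illustrates the feature-extraction and fusion steps described in the Embedding Generation Process. The model identifiers (`sentence-transformers/all-MiniLM-L6-v2`, `openai/clip-vit-base-patch32`) come from that section; the `extract_features` helper, the fusion MLP's layer sizes, and its untrained weights are assumptions for illustration only and do not reproduce the trained model that produced `content_embedding`, nor its multi-task triplet-loss training.

```python
# Illustrative sketch of the extraction + fusion *inference* path; not the original training code.
import io

import numpy as np
import torch
from PIL import Image
from sentence_transformers import SentenceTransformer
from transformers import CLIPModel, CLIPProcessor

text_encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")    # 384-d text vectors
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")                 # 512-d image vectors
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def extract_features(text: str, poster_byte: bytes | None) -> np.ndarray:
    """Concatenate a MiniLM text embedding with a CLIP image embedding (zeros when no poster)."""
    text_vec = text_encoder.encode(text)                                  # shape (384,)
    if poster_byte:
        image = Image.open(io.BytesIO(poster_byte)).convert("RGB")
        inputs = clip_processor(images=image, return_tensors="pt")
        with torch.no_grad():
            img_vec = clip.get_image_features(**inputs)[0].numpy()        # shape (512,)
    else:
        img_vec = np.zeros(512, dtype=np.float32)                         # neutral placeholder
    return np.concatenate([text_vec, img_vec])                            # shape (896,)

# Hypothetical fusion MLP mapping the concatenated features to a 512-d content embedding.
# The real model was trained with a multi-task triplet loss; these weights are random.
fusion = torch.nn.Sequential(
    torch.nn.Linear(384 + 512, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 512),
)

features = extract_features("A cowboy doll is threatened by a new spaceman toy.", None)
with torch.no_grad():
    embedding = fusion(torch.from_numpy(features).float())                # shape (512,)
```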