---
pretty_name: LMD Deduplication Supplements
dataset_name: lmd-dedup-supplements
language:
- en
license: cc-by-4.0
size_categories:
- 100K<n<1M
description: >-
  Pre-computed embeddings (CAugBERT, CLaMP-1024) for duplicate detection on the
  Lakh MIDI Dataset (LMD-clean and LMD-full). Each folder contains
  `embeddings.pt` and `refs.txt` aligned by row.
citation: |
  @inproceedings{lmd_dedup_2025,
    author    = {Eunjin Choi and Hyerin Kim and Jiwoo Ryu and Juhan Nam and Dasaem Jeong},
    title     = {On the De-duplication of the Lakh {MIDI} Dataset},
    booktitle = {Proceedings of the International Society for Music Information Retrieval Conference (ISMIR)},
    year      = {2025}
  }
---

# LMD Deduplication Supplements

This repository provides pre-computed embedding files extracted from the **Lakh MIDI Dataset (LMD-clean and LMD-full)** using the CAugBERT and CLaMP-1024 models.

These embeddings were used in our paper:
**"On the De-duplication of the Lakh MIDI Dataset" (ISMIR 2025)**
[[Paper]](https://ismir2025program.ismir.net/poster_188.html) | [[GitHub Code]](https://github.com/jech2/LMD_Deduplication)

---

## Contents

Each folder includes:

- `embeddings.pt`: Torch tensor of embeddings (shape: *N × D*)
- `refs.txt`: List of MIDI filenames, one per line, corresponding to each embedding row

---

## Usage

```python
import torch

# Directory holding one set of released embeddings
embeddings_dir = './lmd_full_to_lmd_full_0.95__caugbert_embedding.npy/'

# Load embeddings (N x D tensor)
emb = torch.load(embeddings_dir + "embeddings.pt", map_location="cpu")

# Load references (MIDI filenames), one per embedding row
with open(embeddings_dir + "refs.txt") as f:
    refs = [line.strip() for line in f]

print(emb.shape, len(refs))
# Example: torch.Size([168662, 512]) 168662
```

For details on how these embeddings are used in duplicate detection and evaluation, please refer to the `Evaluate` and `De-duplication` sections of the main repository.

## Note

Due to storage limits, we provide only the CAugBERT and CLaMP-1024 embeddings here. If you need embeddings from other models (e.g., CLaMP-512, MusicBERT), please contact Eunjin Choi (jech@kaist.ac.kr).
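As a rough illustration of how such embeddings can be screened for near-duplicates, the sketch below computes pairwise cosine similarity and thresholds it. This is not the paper's actual pipeline: the `0.95` threshold is an assumption taken from the folder name above, and random tensors stand in for the real `embeddings.pt`.

```python
import torch

# Stand-in embeddings with the same layout as the released files
# (N rows x D dims); in practice, load the real tensor with
# torch.load("embeddings.pt", map_location="cpu").
emb = torch.randn(1000, 512)

# L2-normalise rows so that a dot product equals cosine similarity.
emb = torch.nn.functional.normalize(emb, dim=1)

# Full pairwise similarity matrix (N x N). For the real ~168k-row
# tensors, compute this in chunks to bound memory use.
sim = emb @ emb.T

# Keep only the upper triangle so each pair is counted once and
# self-similarity (the diagonal of ones) is excluded.
threshold = 0.95  # assumed from the folder name; see the paper for the values actually used
ii, jj = torch.where(torch.triu(sim, diagonal=1) > threshold)
dup_pairs = list(zip(ii.tolist(), jj.tolist()))

print(f"{len(dup_pairs)} candidate duplicate pairs")
```

Indices in `dup_pairs` can be mapped back to MIDI filenames via the corresponding rows of `refs.txt`.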