---
license: apache-2.0
language:
  - en
  - zh
pretty_name: AL-GR Item Embeddings
tags:
  - multimodal
  - embedding
  - computer-vision
  - recommendation
  - e-commerce
dataset_info:
  - config_name: default
    splits:
      - name: train
        num_examples: 507000000
---

AL-GR/Item-EMB: Multi-modal Item Embeddings

Dataset Summary

This repository, AL-GR/Item-EMB, is a companion dataset to the main AL-GR generative recommendation dataset. It contains the 512-dimensional multi-modal embeddings for over 500 million items that appear in the AL-GR sequences.

Each item is represented by a unique ID (base62_string) and its corresponding vector embedding. To ensure compatibility with text-based formats like CSV, the float32 vectors have been encoded into a Base64 string.
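The encoding is a straightforward pipeline: the raw float32 bytes of each vector are passed through standard Base64. A minimal round-trip sketch, assuming the dataset uses NumPy's native float32 byte layout (the random vector here is illustrative, not taken from the dataset):

```python
import base64
import numpy as np

# Illustrative 512-dim vector; real vectors come from the dataset's `feature` field.
vec = np.random.rand(512).astype(np.float32)

# Encode: raw float32 bytes -> Base64 text (assumed to match the dataset's scheme).
encoded = base64.b64encode(vec.tobytes()).decode("ascii")

# Decode: Base64 text -> float32 vector.
decoded = np.frombuffer(base64.b64decode(encoded), dtype=np.float32)

assert decoded.shape == (512,)
assert np.array_equal(vec, decoded)  # the round trip is lossless
```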

This dataset allows users to:

  • Initialize item embedding layers in traditional or multi-modal recommendation models.
  • Analyze the semantic space of items (e.g., through clustering or visualization).
  • Link the abstract semantic IDs from the AL-GR dataset to their rich, underlying vector representations.
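As a sketch of the semantic-analysis use case: once vectors are decoded, nearest-neighbour lookup reduces to cosine similarity. The embeddings below are random stand-ins for real item vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
items = rng.normal(size=(1000, 512)).astype(np.float32)  # stand-in embeddings
query = items[0]                                          # query with item 0 itself

# L2-normalize so that a plain dot product equals cosine similarity.
items_n = items / np.linalg.norm(items, axis=1, keepdims=True)
query_n = query / np.linalg.norm(query)

scores = items_n @ query_n
top5 = np.argsort(-scores)[:5]  # indices of the 5 most similar items

print(top5[0])  # item 0 ranks first against itself, with similarity ~1.0
```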

How to Use

The core task when using this dataset is to decode the feature string back into a NumPy vector. Below is a complete example of how to load the data and perform the decoding.

import base64
import numpy as np
from datasets import load_dataset

def decode_embedding(base64_string: str) -> np.ndarray:
    """Decodes a Base64 string into a 512-dimensional numpy vector."""
    # Decode from Base64, interpret as a buffer of float32, and reshape.
    return np.frombuffer(
        base64.b64decode(base64_string),
        dtype=np.float32
    ).reshape(-1)

# 1. Load the dataset from the Hugging Face Hub
dataset = load_dataset("AL-GR/Item-EMB")

# 2. Get a sample from the dataset
sample = dataset['train'][0]
item_id = sample['base62_string']
encoded_feature = sample['feature']

print(f"Item ID: {item_id}")
print(f"Encoded Feature (first 50 chars): {encoded_feature[:50]}...")

# 3. Decode the feature string into a vector
embedding_vector = decode_embedding(encoded_feature)

# 4. Verify the result
print(f"Decoded Vector Shape: {embedding_vector.shape}")
print(f"Decoded Vector Dtype: {embedding_vector.dtype}")
print(f"First 5 elements of the vector: {embedding_vector[:5]}")

# Expected output:
# Item ID: OvgEI
# Encoded Feature (first 50 chars): BHP0ugrXIz3gLZC8bjQAVwnjCD3g1t27FCLgvF66yT14C6S9Aw...
# Decoded Vector Shape: (512,)
# Decoded Vector Dtype: float32
# First 5 elements of the vector: [ ...numpy array values... ]
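For training, you will usually decode many rows at once into a single matrix rather than one vector at a time. A sketch with `np.stack`, using synthetic records in place of real dataset rows (the same pattern works inside a batched `datasets.map` call):

```python
import base64
import numpy as np

def decode_embedding(base64_string: str) -> np.ndarray:
    """Decode a Base64 string into a float32 vector."""
    return np.frombuffer(base64.b64decode(base64_string), dtype=np.float32)

# Synthetic stand-ins for dataset rows; real rows carry the same two fields.
rng = np.random.default_rng(42)
rows = [
    {
        "base62_string": f"item{i}",  # hypothetical IDs, for illustration only
        "feature": base64.b64encode(
            rng.normal(size=512).astype(np.float32).tobytes()
        ).decode("ascii"),
    }
    for i in range(8)
]

# Decode the batch into one (N, 512) float32 matrix.
batch = np.stack([decode_embedding(r["feature"]) for r in rows])

print(batch.shape)  # (8, 512)
print(batch.dtype)  # float32
```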

Dataset Structure

Data Fields

  • base62_string (string): A unique identifier for the item. This ID corresponds to the semantic item IDs used in the AL-GR generative recommendation dataset.
  • feature (string): The Base64 encoded string representation of the item's 512-dimensional multi-modal embedding.
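A quick sanity check on the feature field: 512 float32 values occupy 2048 bytes, and Base64 expands 2048 bytes to exactly 2732 characters (including one `=` pad), so malformed rows can be flagged by string length alone:

```python
import base64
import numpy as np

vec = np.zeros(512, dtype=np.float32)
encoded = base64.b64encode(vec.tobytes()).decode("ascii")

assert len(vec.tobytes()) == 2048  # 512 * 4 bytes per float32
assert len(encoded) == 2732        # Base64: 4 * ceil(2048 / 3)
assert encoded.endswith("=")       # one pad char, since 2048 % 3 == 2
```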

Data Splits

| Split | Number of Samples |
| ----- | ----------------- |
| train | ~507,000,000      |

Citation

If you use this dataset in your research, please cite:

@misc{fu2025forgeformingsemanticidentifiers,
      title={FORGE: Forming Semantic Identifiers for Generative Retrieval in Industrial Datasets}, 
      author={Kairui Fu and Tao Zhang and Shuwen Xiao and Ziyang Wang and Xinming Zhang and Chenchi Zhang and Yuliang Yan and Junjun Zheng and Yu Li and Zhihong Chen and Jian Wu and Xiangheng Kong and Shengyu Zhang and Kun Kuang and Yuning Jiang and Bo Zheng},
      year={2025},
      eprint={2509.20904},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2509.20904}, 
}

License

This dataset is licensed under the Apache License 2.0.