---
license: apache-2.0
language:
  - en
  - zh
pretty_name: "AL-GR Item Embeddings"
tags:
  - multimodal
  - embedding
  - computer-vision
  - recommendation
  - e-commerce
dataset_info:
  - config_name: default
    splits:
      - name: train
        num_examples: 507000000
---


# AL-GR/Item-EMB: Multi-modal Item Embeddings

## Dataset Summary

This repository, `AL-GR/Item-EMB`, is a companion dataset to the main `AL-GR` generative recommendation dataset. It contains the **512-dimensional multi-modal embeddings** for over 500 million items that appear in the `AL-GR` sequences.

Each item is represented by a unique ID (`base62_string`) and its corresponding vector embedding. To ensure compatibility with text-based formats like CSV, the `float32` vectors have been encoded into a **Base64 string**.

This dataset allows users to:
- Initialize item embedding layers in traditional or multi-modal recommendation models.
- Analyze the semantic space of items (e.g., through clustering or visualization).
- Link the abstract semantic IDs from the `AL-GR` dataset to their rich, underlying vector representations.
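As a minimal sketch of the first use case, decoded vectors can be stacked into a dense matrix suitable for initializing an embedding layer. The example below uses synthetic rows in the dataset's `(base62_string, feature)` format rather than real data:

```python
import base64
import numpy as np

def decode_embedding(b64: str) -> np.ndarray:
    """Decode a Base64 feature string into a float32 vector."""
    return np.frombuffer(base64.b64decode(b64), dtype=np.float32)

# Synthetic rows standing in for real (base62_string, feature) pairs.
rng = np.random.default_rng(0)
rows = [
    ("itemA", base64.b64encode(rng.standard_normal(512).astype(np.float32).tobytes()).decode()),
    ("itemB", base64.b64encode(rng.standard_normal(512).astype(np.float32).tobytes()).decode()),
]

# Build an ID -> row-index map and a dense matrix for embedding-layer init.
id_to_idx = {item_id: i for i, (item_id, _) in enumerate(rows)}
emb_matrix = np.stack([decode_embedding(feat) for _, feat in rows])

print(emb_matrix.shape)       # (2, 512)
print(id_to_idx["itemB"])     # 1
```

The resulting `emb_matrix` can be copied into the item-embedding weight of any framework's embedding layer, with `id_to_idx` mapping `base62_string` IDs to rows.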

## How to Use

The core task when using this dataset is to decode the `feature` string back into a NumPy vector. Below is a complete example of how to load the data and perform the decoding.

```python
import base64
import numpy as np
from datasets import load_dataset

def decode_embedding(base64_string: str) -> np.ndarray:
    """Decodes a Base64 string into a 512-dimensional numpy vector."""
    # Decode from Base64 and interpret the bytes as a float32 buffer.
    return np.frombuffer(base64.b64decode(base64_string), dtype=np.float32)

# 1. Load the dataset from the Hugging Face Hub
dataset = load_dataset("AL-GR/Item-EMB")

# 2. Get a sample from the dataset
sample = dataset['train'][0]
item_id = sample['base62_string']
encoded_feature = sample['feature']

print(f"Item ID: {item_id}")
print(f"Encoded Feature (first 50 chars): {encoded_feature[:50]}...")

# 3. Decode the feature string into a vector
embedding_vector = decode_embedding(encoded_feature)

# 4. Verify the result
print(f"Decoded Vector Shape: {embedding_vector.shape}")
print(f"Decoded Vector Dtype: {embedding_vector.dtype}")
print(f"First 5 elements of the vector: {embedding_vector[:5]}")

# Expected output:
# Item ID: OvgEI
# Encoded Feature (first 50 chars): BHP0ugrXIz3gLZC8bjQAVwnjCD3g1t27FCLgvF66yT14C6S9Aw...
# Decoded Vector Shape: (512,)
# Decoded Vector Dtype: float32
# First 5 elements of the vector: [ ...numpy array values... ]
```
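Because every record decodes to the same fixed length, many `feature` strings can also be decoded in one shot by concatenating the raw bytes and reshaping. This is a sketch (verified here with synthetic vectors, not real dataset rows):

```python
import base64
import numpy as np

def decode_batch(b64_list):
    """Decode a list of Base64 feature strings into an (N, 512) float32 matrix."""
    raw = b"".join(base64.b64decode(s) for s in b64_list)
    return np.frombuffer(raw, dtype=np.float32).reshape(len(b64_list), -1)

# Synthetic features: encode three random vectors, then decode them as a batch.
rng = np.random.default_rng(1)
vecs = rng.standard_normal((3, 512)).astype(np.float32)
encoded = [base64.b64encode(v.tobytes()).decode() for v in vecs]

decoded = decode_batch(encoded)
print(decoded.shape)                 # (3, 512)
print(np.array_equal(decoded, vecs))  # True (the round trip is lossless)
```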

## Dataset Structure

### Data Fields

- `base62_string` (string): A unique identifier for the item. This ID corresponds to the semantic item IDs used in the `AL-GR` generative recommendation dataset.
- `feature` (string): The **Base64 encoded** string representation of the item's 512-dimensional multi-modal embedding.
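For reference, the `feature` field can be reproduced from a raw vector by the inverse operation. A sketch of the assumed encoding convention (float32 bytes, Base64 with padding):

```python
import base64
import numpy as np

def encode_embedding(vec: np.ndarray) -> str:
    """Encode a float32 vector as a Base64 `feature` string (inverse of decoding)."""
    return base64.b64encode(vec.astype(np.float32).tobytes()).decode("ascii")

vec = np.arange(512, dtype=np.float32)
feature = encode_embedding(vec)

# 512 float32 values = 2048 bytes -> ceil(2048 / 3) * 4 = 2732 Base64 characters.
print(len(feature))  # 2732

# The round trip recovers the original vector exactly (Base64 is lossless).
roundtrip = np.frombuffer(base64.b64decode(feature), dtype=np.float32)
print(np.array_equal(roundtrip, vec))  # True
```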

### Data Splits

| Split      | Number of Samples      |
|------------|------------------------|
| `train`    | ~507,000,000           |


## Citation

If you use this dataset in your research, please cite:

```bibtex
@misc{fu2025forgeformingsemanticidentifiers,
      title={FORGE: Forming Semantic Identifiers for Generative Retrieval in Industrial Datasets}, 
      author={Kairui Fu and Tao Zhang and Shuwen Xiao and Ziyang Wang and Xinming Zhang and Chenchi Zhang and Yuliang Yan and Junjun Zheng and Yu Li and Zhihong Chen and Jian Wu and Xiangheng Kong and Shengyu Zhang and Kun Kuang and Yuning Jiang and Bo Zheng},
      year={2025},
      eprint={2509.20904},
      archivePrefix={arXiv},
      primaryClass={cs.IR},
      url={https://arxiv.org/abs/2509.20904}, 
}
```

## License

This dataset is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).