Ethan7 committed · commit 0bdd573 (verified) · 1 parent: 9c12d93

Update README.md

Files changed (1): README.md (+94 −92)
---
license: mit
---
# Large-Scale Search Recommendation Dataset for Temporal Distribution Shift 🔥🔥🔥

## Background
Temporal Distribution Shift (TDS) in real-world recommender systems refers to the phenomenon where the data distribution changes over time, driven by internal interventions (e.g., promotions, product launches) and external shocks (e.g., seasonality, media events), degrading model generalization when models are trained under the IID assumption.

In our work, we propose ELBO_TDS [[paper](https://arxiv.org/abs/2511.21032) | [code](https://github.com/FuCongResearchSquad/ELBO4TDS)], a lightweight and theoretically grounded framework for addressing TDS in real-world recommender systems.

To investigate the TDS problem at industrial scale, we release a production-level search recommendation dataset spanning 13 consecutive days and covering heterogeneous feature types (categorical, numerical, and sequential) with multiple implicit labels.

## Dataset Use Cases
- Temporal Distribution Shift (TDS): day-wise shards with a strict temporal validation/testing protocol
- General large-scale recommendation tasks: supports CTR/CVR prediction, multi-task learning, and list-wise ranking

## Dataset Summary
- Domain: user interaction logs from a search scenario
- Time span: 13 consecutive days
- Scale: ~50M samples/day, ~650M samples in total
- Unit: session-level records, each containing a list of item interactions
- Intended uses: CTR prediction and ranking models

## Dataset Structure
- Storage format: Parquet
- Rows and sessions:
  - Row: a user(request)-item pair with its corresponding implicit labels (Click, Add-to-Cart, Purchase)
  - Session: each user request consists of 12 rows stored consecutively in the Parquet file. User(request) features are identical within a session; items are randomly sampled so that every session has the same length of 12
- Features:
  - 101 user feature fields (all features are converted to discrete IDs)
  - 105 item feature fields (all features are converted to discrete IDs)
- Feature field types:
  - Categorical (e.g., item_id, user_id): the original categorical values are encrypted, and the semantics of each field are not revealed
  - Numerical (e.g., click_count, order_count): numerical features are converted to categorical features by bucketization; the bucket ID is given, indexed from 1, and a larger bucket ID indicates a larger underlying value; the semantics of each field are not revealed
  - Sequential (e.g., a user's last interacted items): the elements of each sequence are categorical, encrypted, and in reverse chronological order; the semantics of each field are not revealed
- Labels:
  - Click: implicit label, positive rate ~14.4%; `1` denotes positive, `0` denotes negative
  - Add-to-Cart: implicit label, positive rate ~1.4%; `1` denotes positive, `0` denotes negative
  - Purchase: implicit label, positive rate ~0.4%; `1` denotes positive, `0` denotes negative
- Data type: each feature is stored as an array of `int64`; for categorical and numerical features, the array length is 1
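
Given this layout, sessions can be recovered by grouping every 12 consecutive rows. A minimal sketch with a synthetic frame (the column names `f_u_c_1`, `f_i_c_1`, and `click` are illustrative here; check the actual Parquet schema for the real label column names):

```python
import numpy as np
import pandas as pd

SESSION_LEN = 12  # each session occupies 12 consecutive rows

# Synthetic stand-in for one day's shard: 3 sessions of 12 rows each.
n_sessions = 3
df = pd.DataFrame({
    "f_u_c_1": np.repeat(np.arange(n_sessions), SESSION_LEN),  # user feature: constant per session
    "f_i_c_1": np.arange(n_sessions * SESSION_LEN),            # item feature: varies per row
    "click": np.resize([0, 1], n_sessions * SESSION_LEN),      # illustrative implicit label
})

# Recover sessions from row position: rows of one session are stored consecutively.
df["session_id"] = np.arange(len(df)) // SESSION_LEN
sessions = df.groupby("session_id")

# Sanity checks implied by the layout above.
assert all(len(g) == SESSION_LEN for _, g in sessions)
assert all(g["f_u_c_1"].nunique() == 1 for _, g in sessions)  # user features identical per session
```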

Table 1: Summary of all feature fields. In a field name such as `f_u_c_1`, `u` denotes a user-side field and `i` (as in `f_i_c_1`) an item-side field; `c` denotes a categorical field, `n` (as in `f_u_n_1`) a numerical field, and `s` (as in `f_u_s_1`) a sequential field; the trailing number is the field index.

| Feature Type | Field Type  | Field Name |
|--------------|-------------|------------|
| User Feature | Categorical | `f_u_c_1`, `f_u_c_2`, `f_u_c_3`, `f_u_c_4`, `f_u_c_5`, `f_u_c_6`, `f_u_c_7` |
|              | Numerical   | `f_u_n_1`, `f_u_n_2`, `f_u_n_3`, `f_u_n_4`, `f_u_n_5`, `f_u_n_6`, `f_u_n_7`, `f_u_n_8`, `f_u_n_9`, `f_u_n_10`, `f_u_n_11`, `f_u_n_12`, `f_u_n_13`, `f_u_n_14`, `f_u_n_15`, `f_u_n_16`, `f_u_n_17`, `f_u_n_18`, `f_u_n_19`, `f_u_n_20`, `f_u_n_21`, `f_u_n_22`, `f_u_n_23`, `f_u_n_24`, `f_u_n_25`, `f_u_n_26`, `f_u_n_27`, `f_u_n_28`, `f_u_n_29`, `f_u_n_30`, `f_u_n_31`, `f_u_n_32`, `f_u_n_33`, `f_u_n_34`, `f_u_n_35`, `f_u_n_36`, `f_u_n_37`, `f_u_n_38`, `f_u_n_39`, `f_u_n_40`, `f_u_n_41`, `f_u_n_42`, `f_u_n_43`, `f_u_n_44`, `f_u_n_45`, `f_u_n_46`, `f_u_n_47`, `f_u_n_48`, `f_u_n_49`, `f_u_n_50`, `f_u_n_51`, `f_u_n_52`, `f_u_n_53`, `f_u_n_54`, `f_u_n_55`, `f_u_n_56`, `f_u_n_57`, `f_u_n_58`, `f_u_n_59` |
|              | Sequential  | `f_u_s_1`, `f_u_s_2`, `f_u_s_3`, `f_u_s_4`, `f_u_s_5`, `f_u_s_6`, `f_u_s_7`, `f_u_s_8`, `f_u_s_9`, `f_u_s_10`, `f_u_s_11`, `f_u_s_12`, `f_u_s_13`, `f_u_s_14`, `f_u_s_15`, `f_u_s_16`, `f_u_s_17`, `f_u_s_18`, `f_u_s_19`, `f_u_s_20`, `f_u_s_21`, `f_u_s_22`, `f_u_s_23`, `f_u_s_24`, `f_u_s_25`, `f_u_s_26`, `f_u_s_27`, `f_u_s_28`, `f_u_s_29`, `f_u_s_30`, `f_u_s_31`, `f_u_s_32`, `f_u_s_33`, `f_u_s_34`, `f_u_s_35` |
| Item Feature | Categorical | `f_i_c_1`, `f_i_c_2`, `f_i_c_3`, `f_i_c_4`, `f_i_c_5`, `f_i_c_6`, `f_i_c_7`, `f_i_c_8`, `f_i_c_9`, `f_i_c_10`, `f_i_c_11` |
|              | Numerical   | `f_i_n_1`, `f_i_n_2`, `f_i_n_3`, `f_i_n_4`, `f_i_n_5`, `f_i_n_6`, `f_i_n_7`, `f_i_n_8`, `f_i_n_9`, `f_i_n_10`, `f_i_n_11`, `f_i_n_12`, `f_i_n_13`, `f_i_n_14`, `f_i_n_15`, `f_i_n_16`, `f_i_n_17`, `f_i_n_18`, `f_i_n_19`, `f_i_n_20`, `f_i_n_21`, `f_i_n_22`, `f_i_n_23`, `f_i_n_24`, `f_i_n_25`, `f_i_n_26`, `f_i_n_27`, `f_i_n_28`, `f_i_n_29`, `f_i_n_30`, `f_i_n_31`, `f_i_n_32`, `f_i_n_33`, `f_i_n_34`, `f_i_n_35`, `f_i_n_36`, `f_i_n_37`, `f_i_n_38`, `f_i_n_39`, `f_i_n_40`, `f_i_n_41`, `f_i_n_42`, `f_i_n_43`, `f_i_n_44`, `f_i_n_45`, `f_i_n_46`, `f_i_n_47`, `f_i_n_48`, `f_i_n_49`, `f_i_n_50`, `f_i_n_51`, `f_i_n_52`, `f_i_n_53`, `f_i_n_54`, `f_i_n_55`, `f_i_n_56`, `f_i_n_57`, `f_i_n_58`, `f_i_n_59`, `f_i_n_60`, `f_i_n_61`, `f_i_n_62`, `f_i_n_63`, `f_i_n_64`, `f_i_n_65`, `f_i_n_66`, `f_i_n_67`, `f_i_n_68`, `f_i_n_69`, `f_i_n_70`, `f_i_n_71`, `f_i_n_72`, `f_i_n_73`, `f_i_n_74`, `f_i_n_75`, `f_i_n_76`, `f_i_n_77`, `f_i_n_78`, `f_i_n_79`, `f_i_n_80`, `f_i_n_81`, `f_i_n_82`, `f_i_n_83`, `f_i_n_84`, `f_i_n_85`, `f_i_n_86`, `f_i_n_87`, `f_i_n_88`, `f_i_n_89`, `f_i_n_90`, `f_i_n_91`, `f_i_n_92`, `f_i_n_93` |
|              | Sequential  | `f_i_s_1` |
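
Because the naming is regular, the full field lists can be generated programmatically rather than typed out. A small sketch that reconstructs the schema above and checks the stated field counts (101 user fields, 105 item fields):

```python
# Field counts per (side, kind), as listed in Table 1.
SCHEMA = {
    ("u", "c"): 7,   # user categorical
    ("u", "n"): 59,  # user numerical
    ("u", "s"): 35,  # user sequential
    ("i", "c"): 11,  # item categorical
    ("i", "n"): 93,  # item numerical
    ("i", "s"): 1,   # item sequential
}

def field_names(side, kind):
    """Generate names like f_u_n_1 ... f_u_n_59 for one row of Table 1."""
    return [f"f_{side}_{kind}_{i}" for i in range(1, SCHEMA[(side, kind)] + 1)]

user_fields = field_names("u", "c") + field_names("u", "n") + field_names("u", "s")
item_fields = field_names("i", "c") + field_names("i", "n") + field_names("i", "s")

assert len(user_fields) == 101  # matches "101 user feature fields"
assert len(item_fields) == 105  # matches "105 item feature fields"
```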


## Data Splits
- By day: 13 shards, one per day
- Recommended protocol:
  - Temporal: train = first N−2 days; validation = day N−1; test = day N
  - Shuffled splits: only if temporal leakage is acceptable for your use case
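
The temporal protocol can be sketched as follows (the per-day `data/day{d}/` directory layout is assumed here to mirror the loading examples; adjust to the repository's actual paths):

```python
N_DAYS = 13  # one Parquet shard per day

# Strict temporal protocol: train on the first N-2 days,
# validate on day N-1, test on day N.
train_days = list(range(1, N_DAYS - 1))  # days 1..11
valid_days = [N_DAYS - 1]                # day 12
test_days = [N_DAYS]                     # day 13

# Hypothetical per-day directory layout.
train_paths = [f"data/day{d}/" for d in train_days]

assert max(train_days) < valid_days[0] < test_days[0]  # no temporal leakage
```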

## Data Loading
The data is stored in Parquet format and can be read with libraries such as `pandas` and `pyarrow`, for example:

```python
import pandas as pd

df = pd.read_parquet('data/day1/', engine='pyarrow')
```

```python
import pyarrow.dataset as ds

dataset = ds.dataset('data/day1/', format="parquet")
```


## Citation

If you use this dataset, please cite:

```bibtex
@misc{zhu2025probabilisticframeworktemporaldistribution,
  title={A Probabilistic Framework for Temporal Distribution Generalization in Industry-Scale Recommender Systems},
  author={Yuxuan Zhu and Cong Fu and Yabo Ni and Anxiang Zeng and Yuan Fang},
  year={2025},
  eprint={2511.21032},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2511.21032},
}
```