super-rikka nielsr HF Staff committed on
Commit 1458a5d · verified · 1 Parent(s): f8480af

Improve dataset card: Add description, links, task categories, tags, and sample usage (#2)

Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +105 -3
README.md CHANGED
@@ -1,3 +1,105 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ task_categories:
+ - image-text-to-text
+ language:
+ - en
+ tags:
+ - gui-automation
+ - mobile-agent
+ - multimodal
+ - android
+ - benchmark
+ ---
+
+ # AndroidDaily Dataset
+
+ This repository hosts the **AndroidDaily** dataset, a benchmark grounded in real-world mobile usage patterns, introduced in the paper [Step-GUI Technical Report](https://huggingface.co/papers/2512.15431).
+
+ The AndroidDaily benchmark comprises 3146 static actions and 235 end-to-end tasks across high-frequency daily scenarios. It is specifically designed to assess whether GUI agents can handle authentic everyday usage, providing a robust evaluation of GUI automation capabilities.
+
+ * **Project Page:** https://opengelab.github.io/
+ * **Code:** https://github.com/stepfun-ai/gelab-zero
+
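+ To inspect the benchmark files locally, you can fetch them with the Hugging Face CLI. The snippet below is a minimal sketch: the repository id `stepfun-ai/AndroidDaily` is an assumed placeholder and should be replaced with this repository's actual id.
+
+ ```bash
+ # Minimal sketch for fetching the dataset files locally.
+ # NOTE: the repo id below is an assumed placeholder; replace it with this repository's actual id.
+ pip install huggingface_hub  # provides the `hf` CLI, if not installed yet
+ hf download --repo-type dataset stepfun-ai/AndroidDaily --local-dir AndroidDaily
+ ls AndroidDaily  # list the downloaded files
+ ```
+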
+ ## Sample Usage
+
+ The AndroidDaily dataset is primarily used for training and evaluating GUI agents, particularly the GELab-Zero agent. The following steps, adapted from the [GELab-Zero GitHub repository](https://github.com/stepfun-ai/gelab-zero), demonstrate how to set up the environment and run a single task.
+
+ ### 1. LLM Inference Environment Setup (Ollama)
+
+ First, install Ollama (refer to https://ollama.com/ for full instructions). Then, download and import the `gelab-zero-4b-preview` model:
+
+ ```bash
+ # If the Hugging Face CLI is not installed yet, install it first
+ pip install huggingface_hub
+
+ # Download the gelab-zero-4b-preview model weights from Hugging Face
+ hf download --no-force-download stepfun-ai/GELab-Zero-4B-preview --local-dir gelab-zero-4b-preview
+
+ # Import the model into Ollama
+ cd gelab-zero-4b-preview
+ ollama create gelab-zero-4b-preview -f Modelfile
+ ```
+
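+ As an optional sanity check (not part of the upstream instructions), you can confirm the model was imported correctly and responds to a one-off prompt:
+
+ ```bash
+ # Optional check: the imported model should appear in Ollama's local model list
+ ollama list
+
+ # One-off prompt to verify the model loads and generates output
+ # (the prompt text is purely illustrative)
+ ollama run gelab-zero-4b-preview "Describe the steps to open the Settings app."
+ ```
+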
+ ### 2. Android Device Execution Environment Setup
+
+ Ensure your Android device has Developer Mode and USB Debugging enabled. Install ADB tools and connect your device. You can verify the connection:
+
+ ```bash
+ adb devices
+ ```
+ Expected output:
+ ```bash
+ List of devices attached
+ AN2CVB4C28000731 device
+ ```
+
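+ If the device shows up as `unauthorized` or does not appear at all, restarting the ADB server and re-accepting the USB debugging prompt on the phone usually resolves it (a general ADB troubleshooting step, not specific to GELab-Zero):
+
+ ```bash
+ # Restart the ADB server, then re-check the connection;
+ # accept the USB debugging authorization dialog on the device if prompted
+ adb kill-server
+ adb start-server
+ adb devices
+ ```
+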
+ ### 3. GELab-Zero Agent Runtime Environment Setup and Task Execution
+
+ Clone the GELab-Zero repository, install dependencies, and run a single task:
+
+ ```bash
+ # Clone the repository
+ git clone https://github.com/stepfun-ai/gelab-zero
+ cd gelab-zero
+
+ # Install dependencies
+ pip install -r requirements.txt
+
+ # Run inference on a single task
+ python examples/run_single_task.py
+ ```
+
+ This will execute an example task, saving trajectories to the `running_log/server_log/os-copilot-local-eval-logs/` directory.
+
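+ Once the run completes, the saved trajectory can be inspected directly from that directory (a generic sketch; the exact file layout of the logs may differ):
+
+ ```bash
+ # List the most recent run artifacts (screenshots, action traces, logs)
+ ls -lt running_log/server_log/os-copilot-local-eval-logs/ | head
+ ```
+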
+ ## Citation
+
+ If you find the AndroidDaily dataset or GELab-Zero useful for your research, please consider citing our work:
+
+ ```bibtex
+ @misc{yan2025stepguitechnicalreport,
+ title={Step-GUI Technical Report},
+ author={Haolong Yan and Jia Wang and Xin Huang and Yeqing Shen and Ziyang Meng and Zhimin Fan and Kaijun Tan and Jin Gao and Lieyu Shi and Mi Yang and Shiliang Yang and Zhirui Wang and Brian Li and Kang An and Chenyang Li and Lei Lei and Mengmeng Duan and Danxun Liang and Guodong Liu and Hang Cheng and Hao Wu and Jie Dong and Junhao Huang and Mei Chen and Renjie Yu and Shunshan Li and Xu Zhou and Yiting Dai and Yineng Deng and Yingdan Liang and Zelin Chen and Wen Sun and Chengxu Yan and Chunqin Xu and Dong Li and Fengqiong Xiao and Guanghao Fan and Guopeng Li and Guozhen Peng and Hongbing Li and Hang Li and Hongming Chen and Jingjing Xie and Jianyong Li and Jingyang Zhang and Jiaju Ren and Jiayu Yuan and Jianpeng Yin and Kai Cao and Liang Zhao and Liguo Tan and Liying Shi and Mengqiang Ren and Min Xu and Manjiao Liu and Mao Luo and Mingxin Wan and Na Wang and Nan Wu and Ning Wang and Peiyao Ma and Qingzhou Zhang and Qiao Wang and Qinlin Zeng and Qiong Gao and Qiongyao Li and Shangwu Zhong and Shuli Gao and Shaofan Liu and Shisi Gao and Shuang Luo and Xingbin Liu and Xiaojia Liu and Xiaojie Hou and Xin Liu and Xuanti Feng and Xuedan Cai and Xuan Wen and Xianwei Zhu and Xin Liang and Xin Liu and Xin Zhou and Yingxiu Zhao and Yukang Shi and Yunfang Xu and Yuqing Zeng and Yixun Zhang and Zejia Weng and Zhonghao Yan and Zhiguo Huang and Zhuoyu Wang and Zheng Ge and Jing Li and Yibo Zhu and Binxing Jiao and Xiangyu Zhang and Daxin Jiang},
+ year={2025},
+ eprint={2512.15431},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV},
+ url={https://arxiv.org/abs/2512.15431},
+ }
+
+ @software{gelab_zero_2025,
+ title={GELab-Zero: An Advanced Mobile Agent Inference System},
+ author={GELab Team},
+ year={2025},
+ url={https://github.com/stepfun-ai/gelab-zero}
+ }
+
+ @misc{gelab_engine,
+ title={GUI Exploration Lab: Enhancing Screen Navigation in Agents via Multi-Turn Reinforcement Learning},
+ author={Haolong Yan and Yeqing Shen and Xin Huang and Jia Wang and Kaijun Tan and Zhixuan Liang and Hongxin Li and Zheng Ge and Osamu Yoshie and Si Li and Xiangyu Zhang and Daxin Jiang},
+ year={2025},
+ eprint={2512.02423},
+ archivePrefix={arXiv},
+ primaryClass={cs.CV},
+ url={https://arxiv.org/abs/2512.02423},
+ }
+ ```