UniVLA
This is the official checkpoint of our RSS 2025 work, Learning to Act Anywhere with Task-centric Latent Actions.
This is the UniVLA model pre-trained on our full data collection (OpenX + Ego4D). For fine-tuning on simulation benchmarks or your own dataset, please visit our official repo.
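As a minimal sketch (not the official usage), assuming this checkpoint is hosted on the Hugging Face Hub, the weights can be fetched with huggingface_hub before following the fine-tuning recipes in the official repo; the repo id below is a placeholder:

from huggingface_hub import snapshot_download

# Download the full pre-trained UniVLA checkpoint to a local cache directory.
# NOTE: "username/univla-pretrained" is a hypothetical repo id -- replace it
# with the id shown at the top of this model page.
local_dir = snapshot_download(repo_id="username/univla-pretrained")
print(f"Checkpoint files downloaded to: {local_dir}")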
If you find our code or models useful in your work, please cite our paper:
@article{bu2025univla,
  title={{UniVLA}: Learning to Act Anywhere with Task-centric Latent Actions},
  author={Bu, Qingwen and Yang, Yanting and Cai, Jisong and Gao, Shenyuan and Ren, Guanghui and Yao, Maoqing and Luo, Ping and Li, Hongyang},
  journal={arXiv preprint arXiv:2505.06111},
  year={2025}
}