A Research Lab for the Future of Robotics
RIL-LAB was founded by Prof. Qiang Nie and is part of the Robotics and Autonomous Systems Thrust in the Systems Hub at HKUST(GZ). We tightly connect intelligence, learning, understanding, and manipulation to build robots that can continuously learn and collaborate in real environments.
Our Mission
We aim to build a research platform with both academic depth and strong system-level implementation, emphasizing the co-development of algorithms, systems, hardware, and real-world applications. We encourage independent thinking, long-term vision, and hands-on execution.
Research Directions
Our research spans key components of robotic intelligence, from perception and decision-making to control, learning, and manipulation, forming a cross-disciplinary matrix for next-generation autonomous robots.
Robot Design and Control
We explore structural innovation, dynamics modeling, and advanced control algorithms to build smarter, more agile, and more reliable robotic platforms.
Multimodal Computer Vision
We study unified understanding and generation across vision, language, and other sensor modalities to improve robot perception and adaptation in complex environments.
Intelligent Manipulation
We develop manipulation methods for real-world complexity, integrating perception, decision-making, and execution.
Human-Robot Interaction
We investigate behavior understanding, pose estimation, motion prediction, and collaborative decision-making for more natural and efficient human-robot teamwork.
Learning from Video
We enable robots to learn generalizable skills from large-scale visual data through video understanding, imitation, and skill transfer.
AI Agent
We build agents capable of autonomous decision-making and continual self-improvement, enabling higher-level autonomy for robots in real environments.
Lab Publications
This section lists all public papers and research outputs from the lab, grouped by year.
2026
3D Dynamics-Aware Manipulation: Endowing Manipulation Policies with 3D Foresight
ICRA 2026
Towards Exploratory and Focused Manipulation with Bimanual Active Perception: A New Problem, Benchmark and Strategy
ICRA 2026
EmbryoDiff: A Conditional Diffusion Framework with Multi-Focal Feature Fusion for Fine-Grained Embryo Developmental Stage Recognition
AAAI 2026
RoboAct-CLIP: Video-Driven Atomic Action Understanding for Robotic Manipulation
2025
Time-Lapse Video-Based Embryo Grading via Complementary Spatial-Temporal Pattern Mining
MICCAI 2025
GenM$^3$: Generative Pretrained Multi-path Motion Model for Text Conditional Human Motion Generation
ICCV 2025
Demo Videos and Project Showcases
Dual-Arm Robot Opens a Multi-Drawer Storage Box
This video shows a dual-arm robotic system interacting with a multi-drawer storage box.
Text-Annotated Human Motion Demonstration
This video presents a collection of human motion examples paired with short text descriptions. A 3D human character performs various actions.
Humanoid Robot Following Human Instructions
This video shows a humanoid robot interacting with a person in a laboratory setting. The robot responds to the person's instructions.
Reflex Feedback Improves Robotic Grasp Stability
This video presents a comparison of a robotic grasping system with reflex feedback disabled versus enabled.
Team Overview
This section introduces the PI and team members. The member-card area scrolls independently, and the directory entry expands to show more detailed profiles.
Zhiyuan Zhang
Large language models, vision-language-action models, reinforcement learning
Junyu Shi
Human-computer interaction, human motion generation and understanding
Danqi Yan
Human-computer interaction, learning from human videos
Lijiang Liu
Exoskeleton robots, pose estimation
Yiqin Ma
Motion control, reinforcement learning
View Full Members
Expand the complete directory for all 14 members, including education, profile links, and contact information.
Complete Member Information
This section provides fuller profiles for members, including education history.
Zhiyuan Zhang
Large language models, vision-language-action models, reinforcement learning
- PhD - The Hong Kong University of Science and Technology (Guangzhou) (2024-present)
- M.S. - Institute of Automation, Chinese Academy of Sciences (2021-2024)
- B.S. - University of Chinese Academy of Sciences (2017-2021)
Yong Sun
Artificial intelligence for biomedicine
- PhD - The Hong Kong University of Science and Technology (Guangzhou) (2024-present)
- M.S. - Tianjin University (2021-2024)
- B.S. - Northwestern Polytechnical University (2017-2021)
Junyu Shi
Human-computer interaction, human motion generation and understanding
- PhD - The Hong Kong University of Science and Technology (Guangzhou) (2024-present)
Danqi Yan
Human-computer interaction, learning from human videos
- PhD - The Hong Kong University of Science and Technology (Guangzhou) (2024-present)
Zhengjie Zhang
Embodied multimodal large models
- PhD - The Hong Kong University of Science and Technology (Guangzhou) (2025-present)
- M.S. - Shanghai Jiao Tong University (2021-2025)
- B.S. - Xi'an Jiaotong University (2017-2021)
Yuxin He
Video learning, embodied world models
- M.S. - Harbin Institute of Technology, Shenzhen (2021-2024)
- B.S. - Sun Yat-sen University (2017-2021)
Lijiang Liu
Exoskeleton robots, pose estimation
- M.S. - The Hong Kong University of Science and Technology (Guangzhou) (2023-2025)
Xinyi Dai
Robotics, human pose estimation
- M.S. - The Hong Kong University of Science and Technology (Guangzhou) (2025-present)
- B.S. - China University of Geosciences (Wuhan) (2021-2025)
Cheng Liu
Legged robot locomotion
- M.S. - The Hong Kong University of Science and Technology (Guangzhou) (2024-present)
Ziyan Feng
Robotics, embedded systems
- M.S. - The Hong Kong University of Science and Technology (Guangzhou) (2025-present)
- B.S. - Southwest University (2021-2025)
Yiqin Ma
Motion control, reinforcement learning
- M.S. - The Hong Kong University of Science and Technology (Guangzhou) (2025-present)
- B.S. - Shanghai Jiao Tong University (2021-2025)
Haoting Wu
Exoskeletons
- M.S. - The Hong Kong University of Science and Technology (Guangzhou) (2025-present)
- B.S. - Shandong University (2021-2025)
Baoyang Hua
Jiacheng Li
Join Our Team
We continuously recruit PhD students, Master's students, research assistants, and interns who are passionate about robotics and AI. Here you will find rigorous mentorship, open collaboration, and impactful research problems.
Who We Are Looking For
- Strong interest in robotics, AI, multimodal intelligence, or related fields.
- Solid mathematical and programming foundation with independent problem-solving ability.
- Prior research/project experience is a plus (design, control, vision, HRI, etc.).
- PhD applicants should meet the university's admission requirements; strong candidates are welcome in all tracks.
What You Will Get
- Frontier interdisciplinary research connected to real robot systems.
- Consistent mentorship with one-on-one meetings and frequent research discussions.
- An international environment in the Greater Bay Area linking academia and industry.
- Competitive scholarship/support packages depending on role and performance.
How to Apply
Applications are handled through the online application portal. You may also contact the PI by email first with a summary of your background and interests.
- Review open positions and research directions.
- Prepare your CV, research experience, and supporting materials.
- Submit online, or start with an email inquiry.