HKUST(GZ) · Systems Hub · Robotics and Autonomous Systems

RIL-LAB Robotic Intelligence and Learning Laboratory

We focus on the deep integration of robotics and AI to build next-generation robots that can learn and adapt in the real world, with key directions in robot design and control, multimodal perception, human-robot interaction, learning from video, intelligent manipulation, and embodied intelligence.

Humanoid Robotics Multimodal Perception Human-Robot Interaction Embodied AI
Lab News

Latest Updates

Research progress, recruiting news, and event updates from WeChat and RedNote.

About the Lab

A Research Lab for the Future of Robotics

RIL-LAB was founded by Prof. Qiang Nie and is part of the Robotics and Autonomous Systems Thrust in the Systems Hub at HKUST(GZ). We work to tightly connect intelligence, learning, understanding, and manipulation for robots that can continuously learn and collaborate in real environments.

Our Mission

We aim to build a research platform with both academic depth and strong system-level implementation, emphasizing the co-development of algorithms, systems, hardware, and real-world applications. We encourage independent thinking, long-term vision, and hands-on execution.

Founded
2024, in the fast-growing robotics research ecosystem at HKUST(GZ).
Affiliation
HKUST(GZ) · Systems Hub · Robotics and Autonomous Systems.
Collaboration Style
Close mentorship, open collaboration, and long-term research value with cross-domain exploration.
6+ Core Research Directions · AI × Robotics: Multimodal, Control, Manipulation, Learning
Open Recruitment · PhD / Master / RA / Intern
Continuously Updated · Research, people, and lab updates are maintained in real time
Research Pillars

Research Directions

Our research spans key components of robotic intelligence, from perception and decision-making to control, learning, and manipulation, forming a cross-disciplinary matrix for next-generation autonomous robots.

Robot Design and Control
Robot Design

We explore structural innovation, dynamics modeling, and advanced control algorithms to build smarter, more agile, and more reliable robotic platforms.

Structure · Control · Motion
Multimodal Computer Vision
Perception

We study unified understanding and generation across vision, language, and other sensor modalities to improve robot perception and adaptation in complex environments.

Vision · Language · Multimodal Models
Intelligent Manipulation
Manipulation

We develop manipulation methods for real-world complexity, integrating perception, decision-making, and execution.

Manipulation · Planning · Skill Learning
Human-Robot Interaction
HRI

We investigate behavior understanding, pose estimation, motion prediction, and collaborative decision-making for more natural and efficient human-robot teamwork.

Pose · Action · Collaboration
Learning from Video
Learning from Video

Through video understanding, imitation, and skill transfer, robots learn generalizable capabilities from large-scale visual data.

Imitation · Video Understanding · World Models
AI Agent
Embodied Agent

We build agents with autonomous decision-making and continual improvement to enable higher-level autonomy for robots in real environments.

Reasoning · Decision-Making · Embodied Intelligence
Lab Publications

This section lists all public papers and research outputs from the lab, grouped by year.

2026

Accepted by ICRA 2026

3D Dynamics-Aware Manipulation: Endowing Manipulation Policies with 3D Foresight

Yuxin He, Ruihao Zhang, Xianzu Wu, Zhiyuan Zhang, Cheng Ding, Qiang Nie

ICRA 2026

PDF arXiv
3D World-Action Models Robot Manipulation
Accepted by ICRA 2026

Towards Exploratory and Focused Manipulation with Bimanual Active Perception: A New Problem, Benchmark and Strategy

Yuxin He, Ruihao Zhang, Tianao Shen, Cheng Liu, Qiang Nie

ICRA 2026

PDF arXiv
Active Perception Force Sensing Robot Manipulation
Accepted by AAAI 2026

EmbryoDiff: A Conditional Diffusion Framework with Multi-Focal Feature Fusion for Fine-Grained Embryo Developmental Stage Recognition

Yong Sun, Zhengjie Zhang, Junyu Shi, Zhiyuan Zhang, Lijiang Liu, Qiang Nie

AAAI 2026

PDF arXiv
Diffusion Model Video Understanding Embryo Stage Classification
Submitted to IEEE RAL

RoboAct-CLIP: Video-Driven Atomic Action Understanding for Robotic Manipulation

Zhiyuan Zhang, Yuxin He, Yong Sun, Junyu Shi, Lijiang Liu, Qiang Nie

PDF
Vision–Language–Action Atomic Action Understanding Representation Learning Robotic Manipulation

2025

Accepted by MICCAI 2025

Time-Lapse Video-Based Embryo Grading via Complementary Spatial-Temporal Pattern Mining

Yong Sun, Yipeng Wang, Junyu Shi, Zhiyuan Zhang, Yanmei Xiao, Lei Zhu, Manxi Jiang, Qiang Nie

MICCAI 2025

PDF arXiv
Video Understanding Embryo Grading
Accepted by ICCV 2025

GenM$^3$: Generative Pretrained Multi-path Motion Model for Text Conditional Human Motion Generation

Junyu Shi, Lijiang Liu, Yong Sun, Zhiyuan Zhang, Jinni Zhou, Qiang Nie

ICCV 2025

arXiv
Motion Generation
Demo Showcase

Demo Videos and Project Showcases

Project Page

Robot Dancing

Robot Dancing by RIL-LAB

Local Upload 2026.04.09
Project Page

Dual-Arm Robot Opens a Multi-Drawer Storage Box

This video shows a dual-arm robotic system interacting with a multi-drawer storage box.

Local Upload 2026.04.08
Other

Text-Annotated Human Motion Demonstration

This video presents a collection of human motion examples paired with short text descriptions. A 3D human character performs various acti...

Local Upload 2026.04.02
Other

Humanoid Robot Following Human Instructions

This video shows a humanoid robot interacting with a person in a laboratory setting. The robot responds to instructions by performing han...

Local Upload 2026.04.01
Project Page

Reflex Feedback Improves Robotic Grasp Stability

This video presents a comparison experiment of a robotic grasping system with reflex feedback turned off and on.

Local Upload 2026.03.29
People

Team Overview

This section introduces the PI and team members. The member cards scroll independently, and the full directory expands into more detailed profiles.

Prof. Qiang Nie
Principal Investigator

Assistant Professor in the Robotics and Autonomous Systems Thrust, Systems Hub, HKUST(GZ). His research includes robot design and control, human-robot interaction, multimodal perception, learning from video, intelligent manipulation, and AI agents.

We care about long-term problems, open discussion, and close mentorship. Students who are serious about robotics and AI are always welcome.

Zhiyuan Zhang
PhD Student

Research Focus

Large language models, vision-language-action models, reinforcement learning

Junyu Shi
PhD Student

Research Focus

Human-computer interaction, human motion generation and understanding

Danqi Yan
PhD Student

Research Focus

Human-computer interaction, learning from human videos

Full Directory

View Full Members

More Member Details

Expand the complete directory for all 14 members, including education, profile links, and contact information.

Join RIL-LAB

Join Our Team

We continuously recruit PhD students, Master's students, research assistants, and interns who are passionate about robotics and AI. Here you will find rigorous mentorship, open collaboration, and impactful research problems.

2026 Recruitment Ongoing
Priority Directions: VLA / Reinforcement Learning / Robotic Manipulation

If you have project or research experience in Vision-Language-Action, RL, imitation learning, manipulation, or embodied AI, we strongly encourage you to contact us.

Who We Are Looking For

  • Strong interest in robotics, AI, multimodal intelligence, or related fields.
  • Solid mathematical and programming foundation with independent problem-solving ability.
  • Prior research/project experience is a plus (design, control, vision, HRI, etc.).
  • PhD applicants should meet HKUST(GZ) admission requirements; strong candidates are welcome to apply for all tracks.

What You Will Get

  • Frontier interdisciplinary research connected to real robot systems.
  • Consistent mentorship with one-on-one meetings and frequent research discussions.
  • An international environment in the Greater Bay Area linking academia and industry.
  • Competitive scholarship/support packages depending on role and performance.

How to Apply

Applications are handled through the online application portal. You may also contact the PI by email first with a brief summary of your background and interests.

  1. Review open positions and research directions.
  2. Prepare your CV, research experience, and supporting materials.
  3. Submit online, or start with an email inquiry.
Open Application Portal Email Inquiry