Research
I'm interested in the intersection of computer vision, deep learning, generative AI, and robotics. My research focuses on understanding the physical world — including shape, motion, depth, and appearance — from sensor observations, with applications in robotic design and human-robot interaction.
Eye, Robot: Learning Realistic Robot Gaze From Human Motion Data
Under Review
Realistic Gaze Transformer (RGT), a Transformer-VAE framework that learns full-head gaze dynamics, including head rotations and eyelid movements, from human motion capture data.
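For intuition, here is a minimal PyTorch sketch of the Transformer-VAE pattern the paper builds on: an encoder summarizes a gaze pose sequence into a latent code, and a decoder reconstructs per-frame head rotation and eyelid state. The 7-D frame features (a 6D rotation representation plus eyelid openness) and all hyperparameters are illustrative assumptions, not RGT's actual configuration.

import torch
import torch.nn as nn

class GazeTransformerVAE(nn.Module):
    """Sketch of a Transformer-VAE over gaze pose sequences.

    Each frame is a hypothetical feature vector (6D head rotation plus
    eyelid openness); the real RGT feature layout is not specified here.
    """

    def __init__(self, feat_dim=7, d_model=128, latent_dim=32, n_layers=4):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        self.to_mu = nn.Linear(d_model, latent_dim)
        self.to_logvar = nn.Linear(d_model, latent_dim)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, n_layers)
        self.from_latent = nn.Linear(latent_dim, d_model)
        self.head = nn.Linear(d_model, feat_dim)

    def forward(self, x):
        # x: (batch, time, feat_dim) motion-capture gaze sequence
        h = self.encoder(self.embed(x))           # per-frame features
        pooled = h.mean(dim=1)                    # sequence-level summary
        mu, logvar = self.to_mu(pooled), self.to_logvar(pooled)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        memory = self.from_latent(z).unsqueeze(1)   # (batch, 1, d_model)
        out = self.decoder(self.embed(x), memory)   # teacher-forced decode
        return self.head(out), mu, logvar

def vae_loss(recon, target, mu, logvar, beta=1e-3):
    # Standard VAE objective: reconstruction term plus KL regularizer.
    rec = nn.functional.mse_loss(recon, target)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kld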
AutoURDF: Unsupervised Robot Modeling from 4D Point Cloud
Jiong Lin, Lechen Zhang, Kwansoo Lee, Jialong Ning, Judah Goldfeder, Hod Lipson
CVPR 2025 (Acceptance Rate: 22.1%)
project page / arXiv
An unsupervised approach for understanding robot motion and constructing description files for unseen robots from point cloud frames.
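As a small illustration of one step such a pipeline needs, the Python sketch below estimates a revolute joint axis from a part's rotations relative to its parent across frames, then emits the corresponding URDF joint block. The inputs (per-frame relative rotation matrices) and both function names are assumptions for exposition, not AutoURDF's actual code.

import numpy as np
from scipy.spatial.transform import Rotation as R

def estimate_revolute_axis(rel_rotations):
    """Estimate a shared rotation axis from a child part's rotations
    relative to its parent across frames (hypothetical, AutoURDF-style step)."""
    rotvecs = R.from_matrix(np.stack(rel_rotations)).as_rotvec()
    # Normalize per-frame rotation vectors and average their directions.
    dirs = rotvecs / (np.linalg.norm(rotvecs, axis=1, keepdims=True) + 1e-9)
    dirs *= np.sign(dirs @ dirs[0])[:, None]   # resolve sign ambiguity
    axis = dirs.mean(axis=0)
    return axis / np.linalg.norm(axis)

def write_urdf_joint(name, parent, child, axis):
    """Emit one revolute-joint block of a URDF description file."""
    ax = " ".join(f"{v:.4f}" for v in axis)
    return (f'<joint name="{name}" type="revolute">\n'
            f'  <parent link="{parent}"/>\n'
            f'  <child link="{child}"/>\n'
            f'  <axis xyz="{ax}"/>\n'
            f'</joint>')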
MoD-SLAM: Monocular Dense Mapping for Unbounded 3D Scene Reconstruction
Heng Zhou, Zhetao Guo, Yuxiang Ren, Shuhong Liu, Lechen Zhang, Kaidi Zhang, Mingrui Li
IEEE Robotics and Automation Letters (RA-L)
arXiv
Monocular SLAM with metric depth estimation and Gaussian-based unbounded scene representation.
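As a rough illustration of the mapping side, the sketch below back-projects a metric depth map into world-space points and seeds simple isotropic Gaussians at them. The function names, the assumed inputs, and the isotropic initialization are illustrative choices, not MoD-SLAM's implementation.

import numpy as np

def backproject_metric_depth(depth, K, pose):
    """Back-project a metric depth map into world-space points that could
    seed a Gaussian scene representation (illustrative, not MoD-SLAM's code).

    depth: (H, W) metric depth from a monocular depth network (assumed given)
    K:     (3, 3) camera intrinsics
    pose:  (4, 4) camera-to-world transform
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T               # camera-frame ray directions
    pts_cam = rays * depth.reshape(-1, 1)         # scale rays by metric depth
    pts_h = np.concatenate([pts_cam, np.ones((len(pts_cam), 1))], axis=1)
    return (pts_h @ pose.T)[:, :3]                # transform to world frame

def init_gaussians(points, sigma=0.02):
    """Initialize isotropic Gaussians (mean + scale) at back-projected points."""
    return {"means": points, "scales": np.full((len(points), 3), sigma)}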
Soft Robot Neural Evolution with LLMs Supervision
Lechen Zhang
ICRA 2024, Workshop on Co-design in Robotics, Oral
project page / arXiv
A computational framework for automatically designing soft robot morphologies using large language models for design guidance and physics simulation for evaluation.
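The overall loop can be sketched in a few lines of Python: a physics simulator scores candidate designs, and an LLM proposes mutated morphologies for the next generation. Here llm_propose and simulate are hypothetical callbacks standing in for the paper's actual prompting and simulation setup.

import random

def evolve_soft_robots(llm_propose, simulate, population, n_generations=20):
    """Sketch of an LLM-supervised evolutionary design loop (illustrative).

    llm_propose(parent_design) -> mutated morphology suggested by the LLM
    simulate(design)           -> fitness score from a physics simulator
    """
    for _ in range(n_generations):
        scored = sorted(population, key=simulate, reverse=True)
        elites = scored[: max(1, len(scored) // 4)]      # keep top quarter
        children = [llm_propose(random.choice(elites))   # LLM-guided mutation
                    for _ in range(len(population) - len(elites))]
        population = elites + children
    return max(population, key=simulate)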
Website template adapted from Jon Barron's source code.