Chen Tang

CS Postdoc at UT Austin | Incoming Assistant Professor in New Mobility at UCLA


Hi! I’m Chen, a Postdoctoral Fellow in Computer Science at UT Austin, advised by Prof. Peter Stone in the Learning Agents Research Group (LARG). Prior to this, I was a Postdoctoral Scholar in Mechanical Engineering at UC Berkeley, advised by Prof. Masayoshi Tomizuka, where I led the behavior-related research on autonomous driving at the Mechanical Systems Control (MSC) Lab. I obtained my Ph.D. in Mechanical Engineering from UC Berkeley and my Bachelor’s degree in Mechanical Engineering from the Hong Kong University of Science and Technology (HKUST). I received the ASME DSCD Rising Star Award in 2022 and was selected as an RSS Pioneer in 2023.

My research interests lie at the intersection of control, robotics, and learning. I aim to develop embodied AI agents that operate in human-centered environments. To tackle this challenge, I explore principled ways to integrate data-driven methods (e.g., deep learning, generative models, reinforcement learning, and imitation learning) with control theory, explainable AI, and causality. My past and current research has focused on applications in autonomous driving and robot navigation. My long-term vision is to enable trustworthy, human-centered autonomy and to expedite its integration into everyday life for substantial societal benefit.

I will be joining the Civil and Environmental Engineering Department (CEE) at UCLA as an Assistant Professor in New Mobility! I'm recruiting PhD students for Spring 2025, Winter 2026, and Fall 2026 to work on autonomous driving and robotics. If you're interested, please feel free to reach out. I'm also looking for postdocs, master's, and undergraduate students to join my lab. For more details, please check the Prospective Students page.

news

Jan 22, 2025 Our paper “Residual-MPPI: Online Policy Customization for Continuous Control” is accepted for ICLR 2025! Check out our paper, code, and website.
Aug 07, 2024 Check out our survey paper “Deep Reinforcement Learning for Robotics: A Survey of Real-World Successes”! It will appear in the Annual Review of Control, Robotics, and Autonomous Systems in 2025.
Jul 03, 2024 Our paper “Optimizing diffusion models for joint trajectory prediction and controllable generation” is accepted for ECCV 2024! Check out our website and code!
Jun 30, 2024 Our paper “Pre-training on synthetic driving data for trajectory prediction” is accepted for IROS 2024, and our RA-L paper “Skill-Critic: Refining learned skills for hierarchical reinforcement learning” is accepted for oral presentation at IROS 2024 (check out our website and code)!
Jun 27, 2024 Our paper “Active exploration in iterative Gaussian process regression for uncertainty modeling in autonomous racing” is accepted for IEEE Transactions on Control Systems Technology (T-CST)!
Jun 10, 2024 Our paper “Learning online belief prediction for efficient POMDP planning in autonomous driving” is accepted for RA-L!
Jun 06, 2024 Our paper “Grounded relational inference: Domain knowledge-driven explainable autonomous driving” is accepted for IEEE Transactions on Intelligent Transportation Systems (T-ITS)!
Jun 02, 2024 Our paper “BeTAIL: Behavior transformer adversarial imitation learning from human racing gameplay” is accepted for RA-L. Check out our website and code!