Chen Tang

CS Postdoc at UT Austin -- Previously ME Ph.D. from UC Berkeley.


Hi! I’m Chen, a Postdoctoral Fellow in Computer Science at UT Austin, advised by Prof. Peter Stone in the Learning Agents Research Group (LARG). Prior to this, I was a Postdoctoral Scholar in Mechanical Engineering at UC Berkeley, advised by Prof. Masayoshi Tomizuka, where I led the behavior-related research activities for autonomous driving at the Mechanical Systems Control (MSC) Lab. I obtained my Ph.D. in Mechanical Engineering from UC Berkeley and my Bachelor’s degree in Mechanical Engineering from the Hong Kong University of Science and Technology (HKUST). I have also spent time at Honda Research Institute and Waymo as an intern.

My research interest lies at the intersection of control, robotics, and learning. I am interested in developing trustworthy autonomous systems that interact with humans. To tackle this challenge, I explore principled ways to integrate data-driven methods (e.g., deep learning, reinforcement learning, and imitation learning) with techniques from control, explainable AI, and causality. My past and current research has primarily focused on applications in autonomous driving and robot navigation. My long-term vision is to enable trustworthy, human-centered autonomy and to expedite its integration into everyday life for substantial societal benefit.

news

Aug 07, 2024 Check out our survey paper “Deep reinforcement learning for robotics: A survey of real-world successes”! It will appear in the Annual Review of Control, Robotics, and Autonomous Systems 2025.
Jul 03, 2024 Our paper “Optimizing diffusion models for joint trajectory prediction and controllable generation” is accepted for ECCV 2024! Check out our website and code!
Jun 30, 2024 Our paper “Pre-training on synthetic driving data for trajectory prediction” is accepted for IROS 2024! And our RA-L paper “Skill-Critic: Refining learned skills for hierarchical reinforcement learning” is accepted for oral presentation at IROS 2024 (check out our website and code)!
Jun 27, 2024 Our paper “Active exploration in iterative Gaussian process regression for uncertainty modeling in autonomous racing” is accepted for IEEE Transactions on Control Systems Technology (T-CST)!
Jun 10, 2024 Our paper “Learning online belief prediction for efficient POMDP planning in autonomous driving” is accepted for RA-L!
Jun 06, 2024 Our paper “Grounded relational inference: Domain knowledge-driven explainable autonomous driving” is accepted for IEEE Transactions on Intelligent Transportation Systems (T-ITS)!
Jun 02, 2024 Our paper “BeTAIL: Behavior transformer adversarial imitation learning from human racing gameplay” is accepted for RA-L. Check out our website and code!
May 14, 2024 Our paper “Quantifying interaction level between agents helps cost-efficient generalization in multi-agent reinforcement learning” is accepted for RLC 2024! Check out our code and paper!
Jan 29, 2024 Our paper “Guided Online Distillation: Promoting Safe Reinforcement Learning by Offline Demonstration” is accepted for ICRA 2024, and our RA-L paper “Editing Driver Character: Socially-Controllable Behavior Generation for Interactive Traffic Simulation” will also be presented at ICRA 2024!
Oct 16, 2023 We organized the Workshop on Scenario and Behavior Diversity in Simulation for Autonomous Vehicle Validation at IEEE IAVVC 2023!