Jingxiang Guo  |  ιƒ­δΊ¬ηΏ”

I am a Master of Computing (Artificial Intelligence) student at the National University of Singapore. I have ongoing research collaborations with the NUS CLeAR Lab (advised by Prof. Harold Soh), the SJTU ScaleLab (advised by Prof. Yao Mu), the NUS LinS Lab (advised by Prof. Lin Shao), and the HITSZ RLGroup (advised by Prof. Yanjie Li). I received my B.Eng. in Automation from Harbin Institute of Technology, Shenzhen.

Email  /  CV  /  GitHub  /  Google Scholar  /  More Academic Links  /  WeChat

profile photo

News

  • [2025/06] πŸŽ‰ MetaFold was accepted to IROS 2025 as an Oral Presentation!
  • [2025/05] πŸ… TelePreview won the Best Paper Award at the ICRA 2025 Workshop on Human-Centric Multilateral Teleoperation!
  • [2025/05] πŸ… $\mathcal{D(R, O)}$ Grasp won the ICRA 2025 Best Paper Award on Robot Manipulation and Locomotion!
  • [2025/04] πŸŽ‰ Manual2Skill was accepted to RSS 2025!
  • [2024/12] πŸŽ‰ $\mathcal{D(R, O)}$ Grasp was accepted to ICRA 2025!
  • [2024/11] πŸ… $\mathcal{D(R, O)}$ Grasp won the Best Robotics Paper Award at the CoRL 2024 MAPoDeL Workshop!

Research

My research interests lie in πŸ€– robot multi-modal learning, 🦾 dexterous manipulation, and 🀝 human-robot interaction. My long-term goal is to create truly conscious robotic life, pushing the boundaries of what's possible with machines. I'm open to collaborations on robotics-related projects! If you're a researcher looking for a partner, feel free to reach out to me πŸ‘‹ @ Schedule time with me.

Research Map


Publications

Papers sorted by recency. Representative papers are highlighted.

InsertScale: A Benchmark towards Foundation Visuo-Tactile Policy Scalable Learning for Insertion Task
Congsheng Xu*, Jingxiang Guo*, Baijun Chen*, Liuhaichen Yang, Zhen Zou, Yuzhang Li, Jieji Ren, Yiming Wang, Yichao Yan, Yao Mu, Xiaokang Yang
Poster  /  arXiv  /  Code
In Submission
TL;DR: Propose a visuo-tactile benchmark and scalable policy learning framework for insertion tasks, providing standardized tasks, metrics, and multi-modal datasets to evaluate and train foundation policies.
Benchmarking Generalizable Bimanual Manipulation: RoboTwin Dual-Arm Collaboration Challenge at CVPR 2025 MEIS Workshop
RoboTwin Challenge Organizers, RoboTwin Challenge Volunteers (including Jingxiang Guo), and RoboTwin Challenge Participants
Website  /  arXiv  /  Media (量子位)
Tech Report
TL;DR: The official technical report of the RoboTwin Dual-Arm Collaboration Challenge @ CVPR 2025 MEIS Workshop.
World4Omni: A Zero-Shot Framework from Image Generation World Model to Robotic Manipulation
Haonan Chen*, Bangjun Wang*, Jingxiang Guo*, Tianrui Zhang, Yiwen Hou, Xuchuan Huang, Chenrui Tie, Lin Shao
Website  /  Early Idea  /  arXiv  /  Code
ICML 2025 Workshop @ Building Physically Plausible World Models
TL;DR: Propose a novel framework that leverages a pre-trained multimodal image-generation model as a world model to guide policy learning.
DexSinGrasp: Learning a Unified Policy for Dexterous Object Singulation and Grasping in Cluttered Environments
Lixin Xu, Zixuan Liu, Zhewei Gui, Jingxiang Guo, Zeyu Jiang, Zhixuan Xu, Chongkai Gao, Lin Shao
Website  /  arXiv  /  Code
Spotlight Presentation, ICRA 2025 @ Handy Moves: Dexterity in Multi-Fingered Hands
TL;DR: Implement a unified policy for dexterous object singulation and grasping in cluttered environments, enabling robots to handle complex manipulation tasks with high success rates.
Manual2Skill: Learning to Read Manuals and Acquire Robotic Skills for Furniture Assembly Using Vision-Language Models
Chenrui Tie, Shengxiang Sun, Jinxuan Zhu, Yiwei Liu, Jingxiang Guo, Yue Hu, Haonan Chen, Junting Chen, Ruihai Wu, Lin Shao
Website  /  arXiv  /  Code
RSS 2025  Robotics: Science and Systems
Oral Presentation, CVPR 2025 @ 3D Vision Language Models for Robotic Manipulation
TL;DR: Propose a novel approach that leverages vision-language models to interpret assembly manuals and translate them into executable robotic skills for furniture assembly tasks.
TelePreview: A User-Friendly Teleoperation System with Virtual Arm Assistance for Enhanced Effectiveness
Jingxiang Guo*, Jiayu Luo*, Zhenyu Wei*, Yiwen Hou, Zhixuan Xu, Xiaoyi Lin, Chongkai Gao, Lin Shao
Website  /  arXiv  /  Code (Coming soon)
TL;DR: Implement a low-cost teleoperation system using data gloves and IMU sensors, paired with an assistant module that improves data collection by visually previewing the robot's future motions.
MetaFold: A Closed-loop Pipeline for Universal Clothing Folding via End-to-end Point Cloud Trajectory Generation
Haonan Chen, Junxiao Li, Chongkai Gao, Zhixuan Xu, Chenting Wang, Yiwen Hou, Jingxiang Guo, Shensi Xu, Jiaqi Huang, Weidong Wang, Lin Shao
Website  /  arXiv  /  Dataset
IROS 2025  International Conference on Intelligent Robots and Systems
Oral Presentation @ IROS 2025
TL;DR: Propose a closed-loop pipeline for universal clothing folding using end-to-end point cloud trajectory generation, enabling robots to handle various types of clothing with high precision.
$\mathcal{D(R,O)}$ Grasp: A Unified Representation of Robot and Object Interaction for Cross-Embodiment Dexterous Grasping
Zhenyu Wei*, Zhixuan Xu*, Jingxiang Guo, Yiwen Hou, Chongkai Gao, Zhehao Cai, Jiayu Luo, Lin Shao
Website  /  arXiv  /  Code  /  Media (ζœΊε™¨δΉ‹εΏƒ)
ICRA 2025  International Conference on Robotics and Automation
ICRA 2025 Best Paper Award on Robot Manipulation and Locomotion
Best Robotics Paper Award, CoRL 2024 @ MAPoDeL
Oral Presentation, CoRL 2024 @ MAPoDeL
Spotlight Presentation, CoRL 2024 @ LFDM
TL;DR: Introduce $\mathcal{D(R,O)}$, a novel interaction-centric representation for dexterous grasping tasks that goes beyond traditional robot-centric and object-centric approaches, enabling robust generalization across diverse robotic hands and objects.
MASQ: Multi-Agent Reinforcement Learning for Single Quadruped Robot Locomotion
Qi Liu*, Jingxiang Guo*, Sixu Lin, Shuaikang Ma, Jinxuan Zhu, Yanjie Li
arXiv  /  Video  /  Press
ICML 2025 Workshop @ NewInML
TL;DR: Introduce MASQ, a novel approach using multi-agent reinforcement learning (MARL) for single quadruped robot locomotion. By treating each leg as an independent agent, MASQ accelerates learning and boosts real-world robustness, surpassing traditional methods.
Multi-Agent Target Assignment and Path Finding for Intelligent Warehouse: A Cooperative Multi-Agent Deep Reinforcement Learning Perspective
Qi Liu, Jianqi Gao, Dongjie Zhu, Zhongjian Qiao, Jingxiang Guo, Pengbin Chen, Yanjie Li
arXiv  /  Media
In Submission
TL;DR: Develop a cooperative multi-agent deep reinforcement learning approach for intelligent warehouse systems, focusing on efficient target assignment and path finding for multiple robots.
Logarithmic Function Matters Policy Gradient Deep Reinforcement Learning
Qi Liu, Jingxiang Guo, Zhongjian Qiao, Pengbin Chen, Yanjie Li
PDF  /  Code
DAI 2024  Distributed Artificial Intelligence Conference
Oral Presentation @ DAI 2024
TL;DR: Investigate the impact of logarithmic functions in policy gradient deep reinforcement learning, demonstrating improved performance and stability in various RL tasks.
Momentum Prediction for Tennis Matches Based on Counter-Factual Analysis and Multi-LGBM
Jingxiang Guo, Jinxuan Zhu, Sixu Lin, Feng Shi
IEEE Xplore  /  Code
ICAACE 2024  International Conference on Advanced Algorithms and Control Engineering
TL;DR: Develop a novel approach for tennis match momentum prediction using counter-factual analysis and multi-LGBM models, achieving improved accuracy in match outcome predictions.
ECAPA-TDNN Embeddings for Speaker Recognition
Jingxiang Guo, Jinxuan Zhu, Sixu Lin, Feng Shi
IEEE Xplore  /  Code
AINIT 2024  International Seminar on Artificial Intelligence, Networking and Information Technology
TL;DR: Implement and evaluate ECAPA-TDNN embeddings for speaker recognition tasks, demonstrating improved performance in speaker verification and identification.
Quick reversing device and quick track reversing device
Kuntian Dai, Jingxiang Guo, Nengfeng Liu, Guanyu Hou, Jinbin Guo, Junkai Wang, Ruiquan Dong
Google Patents  /  Certificate
Invention Patent  National Patent CN116000896B
TL;DR: Design and patent a novel quick reversing device and track reversing system, improving efficiency and safety in industrial applications.

Award

Education

National University of Singapore, Singapore 2025.08 - Present

Master of Computing (Artificial Intelligence)
Advisor: Prof. Harold Soh
Certificate

Harbin Institute of Technology, Shenzhen, China 2021.09 - 2025.07

B.Eng. in Automation
GPA: 3.7/4.0
Certificate

National University of Singapore, Singapore 2024.07 - 2025.05

NGNE Program Exchange Student
GPA: 4.2/5.0
Certificate

Experience

Collaborative, Learning, and Adaptive Robots Lab (CLeAR Lab), Singapore 2024.08 - Present

Research Intern
Advisor: Prof. Harold Soh

Spatial Cognition and Robotic Automative Learning Laboratory (ScaleLab) 2024.05 - 2024.08

Research Intern
Advisor: Prof. Yao Mu

NUS Learning and Intelligent Systems Lab (LinS Lab), Singapore 2024.07 - 2025.05

Research Intern
Advisor: Prof. Lin Shao

HITSZ Reinforcement Learning Group (RLG), Shenzhen, China 2022.10 - 2024.06

Research Intern
Advisor: Prof. Yanjie Li

Employment

Autolife Robotics, Singapore 2025.08 - Present

Robot Software System Intern
Mentor: Siwei Chen

Horizon Robotics, Shanghai, China 2025.06 - 2025.08

Cloud Platform Intern
Mentor: Yusen Qin
Internship Certificate

Organization

VapourX 2025.08 - Present

Member and Co-Founder
Website

UNU Global AI Network 2024.04 - Present

Member
Website

The Millennium Project 2024.03 - 2024.09

Intern
Mentor: Jerome C. Glenn
Website  /  Internship Certificate

Thanks for visiting 😊! Feel free to contact me if you have any questions.
This website's design is based on those of Jon Barron and Zhenyu Wei. Last Update: May 24, 2024