Prof. Robot: Differentiable Robot Rendering Without Static and Self-Collisions

1South China University of Technology, 2School of Data Science, The Chinese University of Hong Kong, Shenzhen

Prof.Robot improves upon Dr.Robot by additionally enabling differentiable avoidance of static and self-collisions. It learns a gradient-consistent pose-collision classifier and integrates it into the differentiable rendering pipeline, penalizing high collision probabilities during optimization so that the generated pose trajectories remain free of physical collisions.
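The penalty idea can be sketched in a few lines. Below is a minimal NumPy toy, not the paper's implementation: `collision_prob` stands in for the learned classifier, the 2-D "pose", the obstacle disk, and the penalty weight are all made up for illustration, and finite differences replace backpropagation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def collision_prob(q):
    # Stand-in for the learned collision classifier: probability close to 1
    # when the toy 2-D "pose" q enters a forbidden unit disk at the origin.
    d = np.linalg.norm(q) - 1.0          # signed distance to the disk
    return sigmoid(-10.0 * d)            # p -> 1 inside, p -> 0 outside

def loss(q, q_target, lam=5.0):
    # Task term (reach the target) plus a penalty on high collision
    # probability, mirroring the objective described above.
    return np.sum((q - q_target) ** 2) + lam * collision_prob(q)

def grad(f, q, eps=1e-5):
    # Central finite differences; the real pipeline backpropagates instead.
    g = np.zeros_like(q)
    for i in range(len(q)):
        dq = np.zeros_like(q)
        dq[i] = eps
        g[i] = (f(q + dq) - f(q - dq)) / (2 * eps)
    return g

q = np.array([2.0, 0.0])                 # start outside the obstacle
q_target = np.array([-2.0, 0.0])         # target pulls straight through it
for _ in range(1000):
    q = q - 0.01 * grad(lambda x: loss(x, q_target), q)

# The pose moves toward the target, but the penalty keeps it from
# settling inside the high-collision region.
```

The penalty weight trades off task progress against safety; too small a weight lets the task gradient drag the pose through the classifier's saturated interior, which is exactly where gradient consistency matters.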

Abstract

Differentiable rendering has gained significant attention in the field of robotics, with differentiable robot rendering emerging as an effective paradigm for learning robotic actions from image-space supervision. However, this approach lacks perception of the physical world and can therefore produce collisions during action optimization.

In this work, we improve on previous efforts by incorporating physical awareness of collisions through a learned neural robotic collision classifier. This enables the optimization of actions that avoid collisions with static, non-interactable environments as well as with the robot itself. We identify why gradient optimization through such a classifier is ineffective and propose Eikonal regularization to ensure consistent gradients. Our solution integrates seamlessly into existing differentiable robot rendering frameworks, and it provides a foundation for future applications of differentiable rendering in robotics with more reliable interaction with the physical world.

Both qualitative and quantitative experiments demonstrate the necessity and effectiveness of our method compared to previous solutions.
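The Eikonal term mentioned above encourages a learned field to behave like a true signed distance function, whose spatial gradient has unit norm. A minimal NumPy illustration of the regularizer (our own sketch, not the paper's training code): an exact sphere SDF satisfies the constraint, while a squashed version of the same field, with the same zero level set, violates it badly.

```python
import numpy as np

def finite_diff_grad(f, x, eps=1e-4):
    # Central-difference spatial gradient of a scalar field f at points x (N, 3).
    g = np.zeros_like(x)
    for i in range(x.shape[1]):
        dx = np.zeros(x.shape[1])
        dx[i] = eps
        g[:, i] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return g

def eikonal_loss(f, x):
    # Penalize deviation of the gradient norm from 1; added to the training
    # objective, this keeps gradients consistent for downstream optimization.
    grad_norm = np.linalg.norm(finite_diff_grad(f, x), axis=1)
    return np.mean((grad_norm - 1.0) ** 2)

sdf_sphere = lambda x: np.linalg.norm(x, axis=1) - 0.5   # exact SDF of a sphere
squashed   = lambda x: np.tanh(4.0 * sdf_sphere(x))      # same zero set, bad gradients

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(1024, 3))

# eikonal_loss(sdf_sphere, pts) is near zero; eikonal_loss(squashed, pts) is not.
```

The squashed field classifies inside/outside just as well as the true SDF, but its gradients vanish away from the surface, which is the kind of inconsistency the regularizer removes.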

Sim-to-Real Experiments

Pose Optimization

SO(2) Interpolation

[Videos: trajectory and control without Prof.Robot vs. with Prof.Robot (ours)]

Trajectory of Movement Along the Tangent

[Videos: trajectory and control]
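The SO(2) interpolation above refers to interpolating revolute joint angles along the circle rather than linearly over raw angle values. A minimal sketch of the shortest-arc version (our own illustration, not the paper's code):

```python
import math

def so2_interp(theta_a, theta_b, t):
    """Interpolate between two angles along the shortest arc on SO(2).

    Naive linear interpolation of raw angles can take the long way around
    (e.g. from 350 deg to 10 deg via 180 deg); wrapping the difference
    into (-pi, pi] keeps the path on the shortest arc.
    """
    diff = (theta_b - theta_a + math.pi) % (2 * math.pi) - math.pi
    return theta_a + t * diff

# Halfway from 350 deg to 10 deg should pass through 0 deg, not 180 deg.
mid = so2_interp(math.radians(350), math.radians(10), 0.5)
```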

Integration with Differentiable Robot Rendering

[Videos: inverse image and inverse trajectory results, Dr.Robot inverse dynamics vs. Prof.Robot inverse dynamics]

Pick Experiments

The red robot arm in the image denotes direct control using the original parameters learned by Dr.Robot, while the blue robot arm represents control after optimization through our SDF model. To make potential collisions visible, we elevated the plane of the initial 60 views by 0.005 m solely during rendering; collision points are marked in red on the gray desktop.

BibTeX

@InProceedings{Ruan_2025_CVPR,
    author    = {Ruan, Quanyuan and Lei, Jiabao and Yuan, Wenhao and Zhang, Yanglin and Lu, Dekun and Liu, Guiliang and Jia, Kui},
    title     = {Prof. Robot: Differentiable Robot Rendering Without Static and Self-Collisions},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {22562-22572}
}