RadSimReal: Bridging the Gap Between Synthetic and Real Data in Radar Object Detection With Simulation

General Motors, Technical Center Israel
CVPR 2024

*Indicates Equal Contribution

Author is also with the School of Electrical and Computer Engineering at Ben-Gurion University of the Negev.

This work demonstrates the feasibility of training a radar object detection model solely on simulated data, achieving performance comparable to models trained on real radar data, and thereby showcases the potential of simulated data for radar object detection training.

Abstract

Object detection in radar imagery with neural networks shows great potential for improving autonomous driving. However, obtaining annotated datasets from real radar images, crucial for training these networks, is challenging, especially in scenarios with long-range detection and adverse weather and lighting conditions, where radar performance excels. To address this challenge, we present RadSimReal, an innovative physical radar simulation capable of generating synthetic radar images with accompanying annotations for various radar types and environmental conditions, all without the need for real data collection. Remarkably, our findings demonstrate that training object detection models on RadSimReal data and subsequently evaluating them on real-world data produces performance comparable to models trained and tested on real data from the same dataset, and even better performance when testing across different real datasets. RadSimReal offers advantages over other physical radar simulations in that it does not require knowledge of the radar design details, which are often not disclosed by radar suppliers, and it has a faster run-time. This innovative tool has the potential to advance the development of computer vision algorithms for radar-based autonomous driving applications.

Method

Method Overview


Block diagram illustrating the processing steps for conventional simulation (a)+(b) and RadSimReal (a)+(c).
(a) simulates the environment to generate reflection points with RF reflectivity for an automotive scene, while (b) and (c) show, respectively, the conventional approach and RadSimReal's approach for transforming the reflection points into a radar image.
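To make the second stage concrete, the sketch below illustrates one generic way reflection points can be turned into a radar image: splatting point reflectivities onto a range-azimuth grid and convolving with a point-spread function (PSF). This is a minimal illustration, not the paper's implementation; the function name, grid sizes, and the Gaussian PSF are all assumptions (a real PSF would be measured or estimated for the target radar, which is what lets such an approach avoid detailed radar design knowledge).

```python
import numpy as np

def reflections_to_radar_image(points, n_range=64, n_az=64,
                               max_range=100.0, fov_deg=90.0,
                               psf_sigma=(1.0, 1.5)):
    """Illustrative sketch (not RadSimReal's actual code): splat reflection
    points onto a range-azimuth grid, then blur with an assumed PSF.
    `points` is an (N, 3) array of (range_m, azimuth_deg, reflectivity)."""
    img = np.zeros((n_range, n_az))
    for r, az, refl in points:
        i = int(r / max_range * (n_range - 1))            # range bin
        j = int((az + fov_deg / 2) / fov_deg * (n_az - 1))  # azimuth bin
        if 0 <= i < n_range and 0 <= j < n_az:
            img[i, j] += refl

    # Separable Gaussian PSF (hypothetical shape; stands in for a PSF
    # estimated from measurements of the target radar).
    def gauss(n, sigma):
        x = np.arange(n) - n // 2
        k = np.exp(-0.5 * (x / sigma) ** 2)
        return k / k.sum()

    kr, ka = gauss(9, psf_sigma[0]), gauss(9, psf_sigma[1])
    img = np.apply_along_axis(lambda c: np.convolve(c, kr, mode="same"), 0, img)
    img = np.apply_along_axis(lambda c: np.convolve(c, ka, mode="same"), 1, img)
    return img

# Two example reflection points: a strong target at 30 m boresight,
# a weaker one at 60 m, -20 degrees.
points = np.array([[30.0, 0.0, 1.0], [60.0, -20.0, 0.5]])
image = reflections_to_radar_image(points)
```

The annotations come for free in such a pipeline: since each reflection point's originating object is known from the simulated scene, bounding boxes on the radar image can be generated without manual labeling.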

Experimental Results