Autonomous Perception

Autonomous Perception Systems: Computer Vision Meets Wireless Sensing.

As a faculty member, I expanded my research horizons into machine learning and computer vision while connecting these topics back to wireless. Automated driving requires sensing the environment and acting on what is sensed, so its success depends primarily on how well the vehicle can perceive its surroundings. In particular: how well can a machine (the car) see with its sensors compared to human eyes, especially under non-line-of-sight or bad-weather conditions such as fog, rain, snow, or even fire? Answering this question is essential to automated driving, and it led me to computer vision and to the limitations of low-cost camera-based sensing.

In the first project, we developed long-range depth sensing with a camera. I combined space-time coherence ideas from wireless communication with vision-based depth prediction to build an unsupervised depth prediction model that outperformed the state of the art by 40%; the work was published in CVPR 2019, with me as the lead faculty author together with collaborators from Toyota, my students, and my colleague Javidi. (A minimal sketch of the reprojection-based training signal behind unsupervised depth prediction appears below.) Next, I developed a camera perception model that is robust to weather and other perturbations in the data, which yielded interesting and far-reaching results compared to the state of the art and was published in ECCV 2021.

With this learning from computer vision, I became curious about leveraging and developing radars, which enable sensing in inclement weather and in the dark, where cameras cannot operate. In automotive perception systems, radars provide a low-cost alternative to Lidar and keep working in inclement weather, but they suffer from other non-idealities. Autonomous perception requires sensors to produce detailed bounding boxes of the sensed vehicles and objects; however, radars use wireless signals that reflect specularly off surfaces, which makes imaging with detailed bounding boxes challenging. In SenSys 2020, we demonstrated for the first time that a distributed radar system with appropriate antenna placement can overcome specular reflection, allowing radars to recreate the bounding box (the geometric intuition is sketched below as well). We then developed AI and computer vision techniques for the distributed radars to predict detailed bounding boxes even in bad weather. This work received news and media coverage, as it addressed the long-standing question of whether radars can replace Lidar, and was recently featured in The Wall Street Journal.

Most recently, we developed camera-radar fusion, which provides detailed attributes of the detected objects in addition to more accurate bounding boxes. Our next directions are to enable cooperative sensing in this domain, where sensors on multiple cars collaborate, and to develop an end-to-end navigation framework that deduces the perception requirements, identifies the root cause if and when perception fails, and achieves the required perception with low-cost camera-radar fusion.
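To give a flavor of how depth can be supervised from space-time consistency alone, here is a minimal sketch of a photometric reprojection loss, the standard training signal for unsupervised monocular depth. It assumes a PyTorch setup and hypothetical inputs (a predicted depth map, a relative camera pose, and camera intrinsics); it is not the CVPR 2019 model itself, only the general technique it builds on.

```python
import torch
import torch.nn.functional as F

def reprojection_loss(img_t, img_s, depth_t, pose_t_to_s, K):
    """Photometric loss between a target frame and a source frame warped into it.

    img_t, img_s : (B, 3, H, W) consecutive video frames
    depth_t      : (B, 1, H, W) depth predicted for the target frame
    pose_t_to_s  : (B, 4, 4)    relative camera pose, target -> source
    K            : (B, 3, 3)    camera intrinsics
    """
    B, _, H, W = img_t.shape
    device = img_t.device

    # Pixel grid of the target frame in homogeneous coordinates: (B, 3, H*W).
    ys, xs = torch.meshgrid(
        torch.arange(H, device=device, dtype=torch.float32),
        torch.arange(W, device=device, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).view(1, 3, -1).expand(B, -1, -1)

    # Back-project each pixel to 3D with the predicted depth, move it into the
    # source camera's frame, and project it back onto the source image plane.
    cam_pts = torch.linalg.inv(K) @ pix * depth_t.view(B, 1, -1)
    cam_pts = torch.cat([cam_pts, torch.ones(B, 1, H * W, device=device)], dim=1)
    src_pts = (pose_t_to_s @ cam_pts)[:, :3]
    src_pix = K @ src_pts
    src_pix = src_pix[:, :2] / src_pix[:, 2:].clamp(min=1e-6)

    # Sample the source image at the projected locations (grid_sample expects [-1, 1]).
    grid_x = 2 * src_pix[:, 0] / (W - 1) - 1
    grid_y = 2 * src_pix[:, 1] / (H - 1) - 1
    grid = torch.stack([grid_x, grid_y], dim=-1).view(B, H, W, 2)
    img_s_warped = F.grid_sample(img_s, grid, padding_mode="border", align_corners=True)

    # If depth and pose are right, the warped source frame matches the target:
    # appearance consistency across time supervises depth without ground-truth labels.
    return (img_t - img_s_warped).abs().mean()
```

In a training loop, depth_t and pose_t_to_s would come from a depth network and a pose network applied to the same video clip; minimizing this loss over many frames is what allows depth to be learned without labeled data.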
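To illustrate why a single radar struggles with specular reflectors and why distributing the antennas helps, here is a toy 2-D model: a surface patch returns energy to a monostatic radar only when the radar sits near the patch's normal, so radars placed at several vantage points jointly see far more of a car's outline. The car outline, radar positions, and the 15-degree threshold are all illustrative assumptions, not parameters of the SenSys 2020 system.

```python
import numpy as np

def visible_points(surface_pts, normals, radar_pos, max_angle_deg=15.0):
    """Surface points that return energy to a monostatic radar at radar_pos.

    With specular reflection, a flat patch bounces the signal straight back to
    the radar only when the radar lies close to the patch's surface normal.
    """
    to_radar = radar_pos - surface_pts
    to_radar /= np.linalg.norm(to_radar, axis=1, keepdims=True)
    cos_angle = np.sum(to_radar * normals, axis=1)
    return cos_angle > np.cos(np.deg2rad(max_angle_deg))

# Toy reflector: rear and right side of a car as 2-D line segments with outward normals.
rear = np.stack([np.linspace(-1, 1, 50), np.zeros(50)], axis=1)   # rear bumper, y = 0
side = np.stack([np.ones(50), np.linspace(0, -4, 50)], axis=1)    # right side,  x = 1
pts = np.concatenate([rear, side])
normals = np.concatenate([np.tile([0.0, 1.0], (50, 1)),           # rear faces +y
                          np.tile([1.0, 0.0], (50, 1))])          # side faces +x

single = np.array([0.0, 10.0])                      # one radar straight behind the car
distributed = [np.array([0.0, 10.0]),               # the same radar plus two offset ones
               np.array([8.0, 8.0]),
               np.array([10.0, -2.0])]

vis_single = visible_points(pts, normals, single)
vis_multi = np.zeros(len(pts), dtype=bool)
for p in distributed:
    vis_multi |= visible_points(pts, normals, p)

print(f"single radar sees {vis_single.mean():.0%} of the outline; "
      f"distributed radars see {vis_multi.mean():.0%}")
```

In this toy geometry the single radar captures only the rear of the car, while the distributed placement covers the full outline, which is the spatial-diversity effect that makes detailed bounding-box recovery from radar feasible.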