This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.
Smart roads with advanced vehicle sensing capabilities could be the linchpin of future intelligent transportation systems and could even help extend driverless cars’ perceptual range. A new approach that fuses camera and radar data can now track vehicles precisely at distances of up to 500 meters.
Real-time data on the flow and density of traffic can help city managers avoid congestion and prevent accidents. So-called “roadside perception”, which uses sensors and cameras to track vehicles, can help create smart roads that continually gather this information and relay it to control rooms.
“This is the first work that offers a practical solution that combines these two types of data and works in real world deployments and with really challenging distances.” —Yanyong Zhang, University of Science and Technology of China, Hefei
Installing large numbers of roadside sensors can be expensive, though, and maintaining them is time-consuming, says Yanyong Zhang, a professor of computer science at the University of Science and Technology of China (USTC) in Hefei, China. For smart roads to be cost-effective, you need to use as few sensors as possible, she says, which means each sensor needs to be able to track vehicles at significant distances.
Using a new approach to fuse data from a high-definition camera and a millimeter-wave radar, her team has created a system that can pinpoint vehicle locations to within 1.3 meters at ranges of up to 500 meters. The results were outlined in a recent paper in IEEE Robotics and Automation Letters.
“If you can extend the range as far as possible, then you can reduce the number of sensing devices you need to deploy,” says Zhang. “This is the first work that offers a practical solution that combines these two types of data and works in real world deployments and with really challenging distances.”
Where camera-radar fusion becomes necessary
Cameras and radars are both good low-cost options for vehicle tracking, says Zhang, but individually they struggle at distances much beyond 100 meters. Fusing radar and camera data can significantly extend that range, but doing so means surmounting a series of challenges, because the two sensors generate completely different kinds of data. While the camera captures a simple 2D image, the radar output is inherently 3D and can be processed to generate a bird’s-eye view. Most approaches to camera-radar fusion to date have simply projected the camera data onto the radar’s bird’s-eye view, says Zhang, but the researchers discovered that this was far from optimal.
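For readers who want to picture the geometry, here is a minimal Python sketch of that standard direction of fusion: projecting image pixels down onto the road surface to build a bird’s-eye view with an inverse perspective mapping. The camera intrinsics and pose are hypothetical placeholders, not values from the paper, and the flat-ground assumption baked into this mapping is one reason accuracy degrades at long range.

```python
import numpy as np

# Sketch of the conventional camera-to-bird's-eye-view projection (inverse
# perspective mapping). All parameters here are illustrative assumptions,
# not the USTC team's calibration values.

def ground_homography(K, R, t):
    """Homography mapping ground-plane points (X, Y, 1) to image pixels.

    K is the 3x3 camera intrinsic matrix; R, t are the world-to-camera
    rotation and translation, with the world Z axis pointing up and the
    road surface at Z = 0.
    """
    return K @ np.column_stack([R[:, 0], R[:, 1], t])

def pixel_to_ground(u, v, H):
    """Map an image pixel (u, v) to (X, Y) on the flat ground plane."""
    ground = np.linalg.inv(H) @ np.array([u, v, 1.0])
    return ground[:2] / ground[2]
```

Because a pixel far down the road corresponds to a long sliver of ground, small pixel errors translate into large position errors at distance, which is part of why this direction of projection struggles at long range.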
To better understand the problem, the USTC team installed a radar and a camera on a pole at the end of a straight stretch of expressway close to the university. They also installed a LIDAR on the pole to take ground-truth vehicle location measurements, and two vehicles with high-precision GPS units were driven up and down the road to help calibrate the sensors.
The researchers installed a camera, radar, and LIDAR to track vehicles on an expressway in Hefei, China. [Image credit: Yao Li]
One of Zhang’s PhD students, Yao Li, then carried out experiments with the data collected by the sensors. He discovered that projecting the 3D radar data onto the 2D images resulted in considerably lower location errors at longer ranges than the standard approach, in which image data is mapped onto the radar data. This led them to conclude that it would make more sense to fuse the data in the 2D image plane, before projecting it back to a bird’s-eye view for vehicle tracking.
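The alternative direction, projecting the radar’s 3D points into the camera image, can be sketched in a few lines, assuming a standard pinhole camera model. The intrinsic matrix, extrinsic transform, and example detections below are made-up placeholders rather than the team’s actual parameters or code.

```python
import numpy as np

# Illustrative radar-to-image projection under a pinhole camera model.
# K is the camera intrinsic matrix, T the 4x4 radar-to-camera transform;
# both would come from calibration in a real deployment.

def project_radar_to_image(radar_points_xyz, K, T):
    """Project Nx3 radar points (meters) into image pixel coordinates."""
    n = radar_points_xyz.shape[0]
    pts_h = np.hstack([radar_points_xyz, np.ones((n, 1))])  # homogeneous, N x 4
    pts_cam = (T @ pts_h.T).T[:, :3]                         # camera frame, N x 3
    in_front = pts_cam[:, 2] > 0                             # keep points ahead of camera
    pix_h = (K @ pts_cam[in_front].T).T                      # perspective projection
    pixels = pix_h[:, :2] / pix_h[:, 2:3]                    # (u, v) per point
    return pixels, in_front

# Example with made-up numbers: a 1920x1080 camera and two radar detections.
K = np.array([[2000.0, 0.0, 960.0],
              [0.0, 2000.0, 540.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)  # placeholder extrinsics; real values come from calibration
radar_pts = np.array([[3.0, 1.5, 120.0], [-2.0, 1.2, 480.0]])
pixels, mask = project_radar_to_image(radar_pts, K, T)
print(pixels)
```

Once the radar detections land in pixel coordinates, they can be fused with the image there, and the combined result projected back to a bird’s-eye view for tracking, as described above.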
As well as allowing precise localization at distances of up to 500 meters, the researchers showed that the new technique also boosted the average precision of tracking at shorter distances by 32 percent compared with previous approaches. While the researchers have only tested the approach offline on previously collected datasets, Zhang says the underlying calculations are relatively simple and should be possible to run in real time on standard processors.
Using more than one sensor also entails careful synchronization, to ensure that the data streams match up. Over time, environmental disturbances inevitably cause the sensors to drift out of alignment, and they have to be recalibrated. This involves driving the GPS-equipped vehicles up and down the expressway to collect ground-truth location measurements that can be used to tune the sensors.
This is time-consuming and costly, so the researchers also built a self-calibration capability into their system. The projection of the radar data onto the 2D image is governed by a transformation matrix based on the sensors’ parameters and on physical measurements made during calibration. Once the data has been projected, an algorithm tries to match up the radar data points with the corresponding image pixels.
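One simple way to implement that matching step, sketched below on the assumption of a gated nearest-neighbor search (the article does not specify the team’s exact association method), is to pair each projected radar point with the closest detected vehicle center in the image and reject pairs that are too far apart.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical association step: pair projected radar points with detected
# vehicle centers in the image using a gated nearest-neighbor search.

def match_points(projected_pixels, detection_centers, max_dist_px=30.0):
    """Return (radar_index, detection_index) pairs within the pixel gate."""
    tree = cKDTree(detection_centers)          # detection_centers: M x 2 pixels
    dists, idx = tree.query(projected_pixels)  # nearest detection per radar point
    return [(i, int(j)) for i, (d, j) in enumerate(zip(dists, idx))
            if d <= max_dist_px]
```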
If the distance between these data points starts to increase, that suggests the transformation matrix is becoming increasingly inaccurate as the sensors move. By carefully tracking this drift, the researchers are able to automatically adjust the transformation matrix to account for the error. This only works up to a point, says Zhang, but it could still significantly reduce the number of full-blown calibrations required.
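The drift monitoring itself might look something like the sketch below, which tracks the average pixel offset between the projected radar points and their matched image detections and applies a small correction to the transformation matrix once that offset exceeds a threshold. The translational-only correction, the threshold, and the function names are all illustrative assumptions, not the published method.

```python
import numpy as np

# Illustrative self-calibration sketch: watch the reprojection drift and
# nudge the radar-to-camera transform when it grows too large.

def reprojection_drift(projected_pixels, matched_pixels):
    """Mean (du, dv) offset between radar projections and matched detections."""
    return np.mean(matched_pixels - projected_pixels, axis=0)

def update_transform(K, T, drift_uv, mean_depth, threshold_px=2.0):
    """Fold an approximate metric correction into the extrinsic translation.

    A full recalibration would refit rotation and translation jointly; this
    sketch only compensates the apparent lateral/vertical shift.
    """
    if np.linalg.norm(drift_uv) < threshold_px:
        return T  # transform is still accurate enough
    fx, fy = K[0, 0], K[1, 1]
    # Convert pixel drift to meters at the average target depth.
    dx = drift_uv[0] * mean_depth / fx
    dy = drift_uv[1] * mean_depth / fy
    T_new = T.copy()
    T_new[0, 3] += dx
    T_new[1, 3] += dy
    return T_new
```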
Altogether, Zhang says, this makes their approach practical to deploy in the real world. As well as providing better data for intelligent transportation systems, she thinks this kind of roadside perception could also provide future self-driving cars with valuable situational awareness.
“It’s a little futuristic, but let’s say there is something happening a few hundred meters away and the car is not aware of it, because it’s congested, and its sensing range couldn’t reach that far,” she says. “Sensors along the highway can disseminate this information to the cars that are coming into the area, so that they can be more cautious or select a different route.”