Getting robots to perceive their surroundings in adverse conditions, such as smoke, fog, or even inside opaque structures, has long been a challenge. However, a group of researchers at the University of Pennsylvania has developed “superhuman vision” with the PanoRadar system, a technology that uses radio waves to give robots vision capable of penetrating obstacles and generating detailed three-dimensional maps.
Nature’s inspiration for superhuman vision
The key idea behind PanoRadar is a principle borrowed from nature: many animals, such as bats and sharks, do not rely on light to perceive their surroundings. Instead, they use echoes of sound waves or electric fields to navigate and hunt.
Following this same logic, researchers at Penn Engineering have shown that radio waves, which have much longer wavelengths than light, can penetrate obstacles such as smoke or walls, giving robots vision beyond what traditional sensors allow.
How does PanoRadar work?
PanoRadar works by using a set of rotating antennas that emit radio waves and capture their reflections to create a representation of the environment. This process is similar to how a lighthouse projects its light in all directions, but in this case, the radar uses electromagnetic waves instead of visible light.
By rotating these antennas, the system can scan the entire environment and, thanks to the power of artificial intelligence (AI), integrate measurements obtained from different angles to generate 3D images with a resolution comparable to that of LiDAR systems.
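To make the scanning step more concrete, the sketch below shows, under simplified assumptions, how range profiles captured at successive rotation angles could be stacked into a 360-degree panorama. It is an illustrative stand-in rather than PanoRadar’s actual processing chain; the chirp count, sample count, and FMCW-style processing are assumptions.

```python
# Minimal sketch (not the authors' actual pipeline): assembling range profiles
# measured at successive antenna rotation angles into a panoramic range map.
import numpy as np

N_ANGLES = 360          # assumed: one measurement per degree of rotation
N_SAMPLES = 256         # assumed: samples per FMCW chirp

def range_profile(beat_signal: np.ndarray) -> np.ndarray:
    """Convert one chirp's beat signal into a range profile via an FFT.
    In FMCW radar, the beat frequency is proportional to target distance."""
    window = np.hanning(len(beat_signal))
    spectrum = np.fft.rfft(beat_signal * window)
    return np.abs(spectrum)

def build_panorama(chirps_per_angle: np.ndarray) -> np.ndarray:
    """Stack the range profiles from every rotation angle into a
    360-degree panorama (rows = azimuth angle, columns = range bins)."""
    return np.stack([range_profile(c) for c in chirps_per_angle])

# Example with synthetic data standing in for real radar captures.
fake_chirps = np.random.randn(N_ANGLES, N_SAMPLES)
panorama = build_panorama(fake_chirps)
print(panorama.shape)   # (360, 129) -> azimuth x range map
```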
One of PanoRadar’s key advancements is the use of AI algorithms that process radio signals to improve image resolution. Although the radar hardware itself is low-cost compared to technologies such as LiDAR, the integration with AI allows the generated three-dimensional images to be of high quality, offering robots the ability to navigate accurately even in difficult environments.
This is especially useful in applications such as autonomous vehicles and rescue missions, where the ability to detect obstacles invisible to optical sensors can make the difference between success and failure.
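As a rough illustration of how learning can sharpen radar imagery, the hypothetical sketch below maps a coarse radar panorama to a denser, LiDAR-like depth image with a small convolutional network. The architecture, layer sizes, and upscaling factor are invented for illustration and are not the model described by the Penn Engineering team.

```python
# Hypothetical sketch of the learning step: a small convolutional network that
# maps a low-resolution radar panorama to a denser, LiDAR-like depth image.
# This is an illustrative stand-in, not the architecture used by PanoRadar.
import torch
import torch.nn as nn

class RadarSuperResolution(nn.Module):
    def __init__(self, upscale: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            # Upsample the coarse radar image toward LiDAR-like resolution.
            nn.Upsample(scale_factor=upscale, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, radar_panorama: torch.Tensor) -> torch.Tensor:
        # Input:  (batch, 1, azimuth, range) intensity map from the radar.
        # Output: (batch, 1, azimuth*upscale, range*upscale) predicted depth.
        return self.net(radar_panorama)

model = RadarSuperResolution()
coarse = torch.randn(1, 1, 64, 64)       # synthetic low-resolution panorama
dense = model(coarse)
print(dense.shape)                        # torch.Size([1, 1, 256, 256])
```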
Overcoming the challenges of mobility
Maintaining high resolution while the robot is moving presents an additional challenge. In real-world conditions, robots move, so their images can be affected by small errors in position. The researchers have overcome this obstacle with signal processing algorithms that combine measurements from different positions with sub-millimeter precision. With the ability to detect objects and people even through materials such as glass or dense smoke, PanoRadar promises to change the way robots perceive their environment.
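The sketch below illustrates the underlying idea of such motion compensation in a highly simplified form: complex echoes captured at slightly different positions are phase-corrected for the estimated displacement before being combined. The wavelength, sign convention, and data shapes are assumptions, not the researchers’ implementation.

```python
# Minimal sketch, under simplified assumptions, of motion compensation:
# echoes taken while the robot moves are phase-corrected for the estimated
# displacement before being combined coherently. The exact algorithm used by
# PanoRadar is more involved; this only illustrates the idea.
import numpy as np

WAVELENGTH = 0.0039  # metres, roughly a 77 GHz millimetre-wave radar (assumed)

def motion_compensate(echoes: np.ndarray, displacements_m: np.ndarray) -> np.ndarray:
    """Coherently combine complex echo spectra captured at slightly different
    positions. Each displacement along the line of sight adds a round-trip
    phase of about 4*pi*d/lambda, which is removed before summation."""
    phase_correction = np.exp(-1j * 4 * np.pi * displacements_m / WAVELENGTH)
    return np.sum(echoes * phase_correction[:, None], axis=0)

# Example: 10 captures of a 128-bin spectrum with sub-millimetre position jitter.
echoes = np.random.randn(10, 128) + 1j * np.random.randn(10, 128)
displacements = np.random.uniform(-0.0005, 0.0005, size=10)  # +/- 0.5 mm
combined = motion_compensate(echoes, displacements)
print(combined.shape)  # (128,)
```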
Researchers at the University of Pennsylvania are expanding testing to integrate PanoRadar into a variety of robotic platforms, potentially transforming industries such as automotive, search and rescue, and others that require navigation in extreme conditions. By combining this technology with traditional sensors such as cameras and LiDAR, multimodal perception systems can be created that make robots more robust and capable in challenging environments.
With PanoRadar, robots will be able to see even in challenging environments. Source: Penn Engineering AI
The ability of robots to see beyond the limits of traditional sensors is a crucial advancement for robotics. With PanoRadar, researchers at Penn Engineering have shown that artificial intelligence and radio waves can be combined to create a vision system that allows robots to effectively navigate in conditions that previously seemed impossible.
Source and photos: University of Pennsylvania