Rethinking Drone Autonomy in GPS-Denied Environments

Autonomous drones depend on accurate positioning to navigate safely. In forests, cities, and search-and-rescue environments, that assumption often breaks. This article presents HUNT, a different approach to navigation for exactly those conditions.

The state of autonomous navigation

Autonomous drones are often portrayed as machines capable of navigating complex environments entirely on their own. In practice, most aerial robots still rely on a fragile assumption: that they can maintain an accurate estimate of their position and orientation throughout flight. This global localization estimate anchors the rest of the autonomy pipeline, allowing the robot to determine where its goal lies, plan a trajectory, and execute control commands to reach it.

When localization remains accurate, this architecture works remarkably well. But it also has a fundamental weakness. If the localization estimate drifts, the estimated goal drifts with it. A robot may satisfy its internal objective while remaining displaced from the intended physical location. For aerial vehicles moving quickly through complex environments, even modest errors in state estimation can produce substantial deviations in the world.

To mitigate this risk, most drones combine GPS with visual–inertial odometry, or VIO [1–2]. GPS provides global position corrections, while VIO estimates motion between those corrections by fusing camera observations with inertial measurements. The camera tracks visual features across frames while the inertial measurement unit measures acceleration and rotation, allowing the system to reconstruct the vehicle’s trajectory over time [3–4].
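The predict/correct loop behind VIO can be sketched in a few lines. This is an illustrative simplification, not the author's pipeline: real systems fuse the two sources probabilistically (e.g. with an EKF or sliding-window optimization), whereas here a fixed-gain blend stands in for the correction step, and all function names are hypothetical.

```python
import numpy as np

def propagate_imu(pos, vel, accel, dt):
    """Dead-reckon position and velocity from one accelerometer sample."""
    pos = pos + vel * dt + 0.5 * accel * dt**2
    vel = vel + accel * dt
    return pos, vel

def fuse_visual(pos_pred, pos_visual, gain=0.5):
    """Blend the IMU prediction with a camera-derived position estimate.
    A fixed gain stands in for the full probabilistic fusion."""
    return pos_pred + gain * (pos_visual - pos_pred)

# One IMU step between camera frames, then a visual correction (2D).
pos, vel = np.zeros(2), np.array([1.0, 0.0])
pos, vel = propagate_imu(pos, vel, accel=np.array([0.0, 0.5]), dt=0.1)
pos = fuse_visual(pos, pos_visual=np.array([0.11, 0.0]))
```

The structure also makes the failure mode visible: when the visual correction becomes unreliable, the estimate is left to integrate IMU noise, and the error grows without bound.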

In structured environments with strong signals and rich visual texture, this combination can be extremely accurate. However, the assumptions behind it often fail in exactly the environments where drones could provide the greatest benefit. GPS signals weaken under forest canopies [5], disappear indoors, and degrade in dense urban environments, where reflections distort the measurements [6]. Visual–inertial odometry has its own limitations: the visual features it depends on may become unreliable when lighting changes, motion blur increases, or the environment lacks sufficient texture. As these errors accumulate, the pose estimate begins to drift, and navigation becomes unreliable.

For aerial robots operating in disaster zones, forests, or industrial infrastructure, this creates a fundamental challenge: reliable global localization cannot always be guaranteed, yet the traditional autonomy architecture depends on it.

From global pose to what the sensors see now

During my doctoral research in aerial robotics at New York University, this limitation led me to reconsider a basic assumption of drone navigation. Instead of continuously trying to improve global localization, I began asking whether autonomous drones could operate without depending on it at all. Rather than estimating a persistent global pose, a robot might rely only on quantities directly observable from its onboard sensors at any given moment, such as its orientation from inertial sensing, its altitude from a barometer, its motion inferred from visual cues, and objects currently visible in the camera.

This idea led to the development of a navigation framework called HUNT [7], designed to allow drones to operate in unstructured environments without relying on GPS or persistent global localization. The key change lies in how navigation objectives are defined. Instead of maintaining a global coordinate system that must remain consistent over time, the drone reconstructs its reference frame continuously from the information it can observe directly. In this formulation, perception, planning, and control operate within an instantaneous relative frame, a coordinate system rebuilt at every control cycle from current observations.
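To make the idea concrete, here is a minimal sketch of a control cycle that rebuilds its reference frame from current observations only. The function names and the reduction to a planar velocity setpoint are my illustrative assumptions, not the HUNT formulation itself; the point is structural: no stored global position enters the computation, so there is nothing for drift to accumulate in.

```python
import numpy as np

def relative_frame_setpoint(yaw, bearing_to_goal, forward_speed):
    """Rebuild the navigation frame from the currently measured heading
    and express a velocity setpoint in it. Only instantaneous
    observations appear; no persistent global state is kept."""
    heading = yaw + bearing_to_goal  # desired course, recomputed each cycle
    return forward_speed * np.array([np.cos(heading), np.sin(heading)])

# Each control cycle starts from scratch with current measurements.
v_cmd = relative_frame_setpoint(yaw=0.1, bearing_to_goal=-0.1,
                                forward_speed=2.0)
```

Because the frame is discarded and rebuilt every cycle, an error made at one instant does not propagate into the next; the worst case is a momentarily wrong command, not a permanently displaced goal.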

The advantage of this approach is subtle but significant. Because the reference frame is reconstructed continuously, the system avoids the accumulation of long-term errors that affect traditional localization pipelines. The drone does not need to maintain a precise estimate of its global position. Instead, it navigates from what it can reliably observe in the present moment.

Search-and-rescue missions illustrate why this perspective matters. A drone deployed during an emergency may need to search an unknown environment and then track a target once it is detected. Many existing systems can perform one of these tasks, but not both [8]. Navigation frameworks rely on global localization to move through space, while target-tracking systems can operate without GPS but often assume that the object of interest is already visible.

In practice, however, the search phase of a mission often begins with no visible target at all. What is needed is a system that can traverse unknown terrain safely without global localization, and then transition seamlessly to tracking once a target appears.

Block diagram of the HUNT framework: loitering mode, tracking mode, switching module, and control stack.
Figure 1 Overview of the HUNT navigation framework. The drone operates in two modes: loitering, where it regulates altitude, heading, and forward velocity using directly observable onboard measurements, and tracking, where the navigation frame is redefined relative to a detected target to maintain a desired stand-off distance and orientation. A switching module selects the active mode, while the control stack combines vision, attitude, altitude, dynamics, and safety constraints to generate commands without relying on persistent global localization.
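The switching module in Figure 1 can be sketched as a small state machine. The hysteresis threshold (`lost_limit`) is my assumption for illustration: dropping back to loitering only after the target has been missing for a while prevents the mode from chattering on intermittent detections.

```python
from enum import Enum

class Mode(Enum):
    LOITER = "loiter"
    TRACK = "track"

def select_mode(mode, target_visible, lost_frames, lost_limit=30):
    """Switch to tracking as soon as a detection appears; fall back to
    loitering only after the target has been missing for lost_limit
    consecutive frames (hysteresis against flickering detections)."""
    if target_visible:
        return Mode.TRACK, 0
    lost_frames += 1
    if mode is Mode.TRACK and lost_frames > lost_limit:
        return Mode.LOITER, lost_frames
    return mode, lost_frames
```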

Loitering, tracking, and experiments in the field

In the HUNT framework, the drone begins in a loitering mode (Figure 1). The vehicle maintains a desired altitude, heading, and forward velocity using onboard perception and inertial sensing. State estimation relies on measurements that remain observable in almost any environment, such as altitude from the barometer and motion cues extracted from the camera. Importantly, the control objective does not depend on global horizontal position. Even if the internal position estimate slowly drifts, the navigation behavior remains stable.
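A loitering controller of this kind might look like the proportional sketch below. The gains and function names are illustrative assumptions, not values from the paper; what matters is that every input is directly observable onboard (barometer, IMU, optical flow), and no horizontal position estimate appears anywhere.

```python
import math

def wrap(angle):
    """Wrap an angle to (-pi, pi] so heading errors take the short way."""
    return math.atan2(math.sin(angle), math.cos(angle))

def loiter_command(alt, alt_des, yaw, yaw_des, v_fwd, v_des,
                   k_alt=0.8, k_yaw=1.2, k_v=0.5):
    """Proportional regulation of altitude, heading, and forward speed
    from directly observable measurements only."""
    climb_rate = k_alt * (alt_des - alt)        # from barometric altitude
    yaw_rate = k_yaw * wrap(yaw_des - yaw)      # from IMU attitude
    accel_fwd = k_v * (v_des - v_fwd)           # from visual motion cues
    return climb_rate, yaw_rate, accel_fwd
```

Since the objective is expressed entirely in these observable quantities, a drifting horizontal position estimate simply has no channel through which to corrupt the behavior.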

Video 1 HUNT loitering without global horizontal position

When a target enters the camera’s field of view, the system transitions naturally into tracking mode. At that moment, the navigation frame is redefined relative to the detected object. The drone then regulates its motion with respect to the target, maintaining a desired distance and orientation. Because the objective is expressed relative to the target rather than to a global coordinate frame, localization drift no longer determines the system’s behavior.
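In the same spirit, the tracking objective can be sketched as regulation of range and bearing to the current detection. The stand-off distance and gains below are hypothetical, and the real control stack also accounts for vehicle dynamics and safety constraints (Figure 1); this only shows how the objective is anchored to the target rather than to a global frame.

```python
def track_command(range_to_target, bearing, standoff=8.0,
                  k_range=0.6, k_bearing=1.5):
    """Regulate motion relative to the detected target: drive the range
    toward the stand-off distance and turn to keep the target centered.
    The frame is rebuilt from the current detection each cycle, so
    localization drift cannot bias the objective."""
    v_fwd = k_range * (range_to_target - standoff)  # close or open range
    yaw_rate = k_bearing * bearing                  # center target in view
    return v_fwd, yaw_rate

# Target detected 12 m ahead, 0.2 rad off-center: approach and turn toward it.
v_fwd, yaw_rate = track_command(range_to_target=12.0, bearing=0.2)
```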

To evaluate this approach, we conducted a series of outdoor experiments designed to stress traditional navigation pipelines. The goal was not simply to demonstrate the algorithm in controlled conditions, but to understand how it behaved in environments where global localization systems typically degrade.

Video 2 Search-mode loitering across mixed environments

We first tested the drone’s ability to sustain stable flight in search mode across a variety of environments (Video 1, Video 2). In these experiments, the vehicle loitered at different altitudes while maintaining a desired forward velocity and heading. Flights were conducted over open grass fields, across gravel sites, and above semi-structured environments composed of buildings and container stacks. The drone was also flown along mixed trajectories that transitioned between these environments, for example, from forest canopy to open fields and into industrial gravel lots. Across all conditions, the same perception and control parameters were used. Despite changes in terrain geometry, visual texture, and altitude, the vehicle maintained consistent behavior, demonstrating that navigation based on instantaneous observations can generalize across diverse environments without environment-specific tuning.

Video 3 High-speed pursuit of a moving target

We then pushed the system further in tracking experiments. In one test, a pickup truck served as the moving target along a forest trail that eventually opened into a large clearing (Video 3). The truck accelerated to highway speeds, reaching nearly 90 kilometers per hour. The drone pursued the vehicle while maintaining a fixed safety distance and keeping the target centered in its field of view. During aggressive maneuvers, the quadrotor pitched forward by more than forty degrees, highlighting the level of agility required to sustain the pursuit. Despite these conditions, the relative navigation formulation allowed the drone to maintain stable tracking even as GPS accuracy fluctuated and visual detections occasionally became uncertain.

Video 4 Search-then-track in a gravel lot

We also conducted full mission scenarios designed to resemble realistic search-and-rescue operations. In one experiment performed in a large gravel lot, the drone began in search mode, flying a predefined pattern while scanning the environment (Video 4). When a wrecked vehicle entered the camera’s field of view, the system automatically transitioned to tracking mode. From that point onward, the drone maintained a safe distance from the target while keeping it centered in the camera frame, demonstrating a seamless transition between search and pursuit.

Forest experiment trajectories and GPS horizontal uncertainty under canopy during search and tracking.
Figure 2 Forest search-and-acquisition experiments in a GPS-degraded environment. The drone traverses dense canopy from two different starting points while searching for a human mannequin, then transitions to target-relative tracking once the target is detected. The colored trajectories and time histories show that GPS horizontal uncertainty (EPH) rises to roughly 3 to 6 m and fluctuates substantially under the canopy, illustrating how forested environments corrupt and destabilize the GPS signal. In these conditions, traditional autonomy pipelines that depend on reliable global localization become unreliable, whereas HUNT remains stable by operating from directly observable measurements and relative target geometry rather than persistent global position estimates.

A second set of experiments repeated this scenario in a forest environment (Figure 2). The drone initially loitered below the canopy while traversing the terrain, searching for a human mannequin placed hundreds of meters away (Video 5). Once the target was detected, the system switched into tracking mode and locked onto the mannequin, maintaining a stable safety distance as it approached. We repeated this mission from different starting points within the forest to verify that the system could reliably perform both search and acquisition under varying conditions (Video 6).

Video 5 Forest search under canopy for a distant mannequin

Taken together, these experiments illustrate a central point: autonomous drones can perform both exploration and pursuit in environments where traditional navigation systems struggle. By relying on directly observable information rather than persistent global localization, the system remains stable even when GPS signals degrade or visual localization drifts. For real-world deployments, particularly in search-and-rescue missions, that robustness may determine whether a system can operate reliably in the field rather than only under controlled conditions.

Video 6 Repeated forest search-and-acquisition from alternate starts

Takeaway

For decades, robotics has focused on building increasingly precise global representations of the world through mapping and localization. Yet real environments rarely provide the reliable information these systems assume. Sensors degrade, GPS becomes unreliable, and conditions change in ways no map can fully anticipate. In such settings, autonomy built around precise global positioning can become fragile. A more robust approach is to design navigation around what a robot can reliably observe in the moment. By grounding autonomy in directly observable information rather than a persistent global state, robots can remain stable even when traditional localization fails. For aerial robots operating in search-and-rescue missions, where infrastructure is damaged and environments are unpredictable, that shift may determine whether autonomous systems can navigate, search, and assist responders when they are needed most.

Bibliography

[1] C. Campos, R. Elvira, J. J. Gómez Rodríguez, J. M. M. Montiel, and J. D. Tardós, “ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual-Inertial and Multi-Map SLAM,” arXiv:2007.11898, 2020. (arXiv)

[2] T. Qin, P. Li, and S. Shen, “VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator,” arXiv:1708.03852, 2017. (arXiv)

[3] K. Mohta et al., “Fast, Autonomous Flight in GPS-Denied and Cluttered Environments,” arXiv:1712.02052, 2017. (arXiv)

[4] S. Barbas Laina et al., “Scalable Autonomous Drone Flight in the Forest with Visual-Inertial SLAM and Dense Submaps Built without LiDAR,” arXiv:2403.09596, 2024. (arXiv)

[5] Y. Tian et al., “Search and Rescue under the Forest Canopy using Multiple UAVs,” arXiv:1908.10541, 2019. (arXiv)

[6] K. A. Pant, Z. Yang, J. M. Goppert, and I. Hwang, “An Open-Source Gazebo Plugin for GNSS Multipath Signal Emulation in Virtual Urban Canyons,” arXiv:2212.04018, 2022. (arXiv)

[7] A. Saviolo, J. Mao, and G. Loianno, “HUNT: High-Speed UAV Navigation and Tracking in Unstructured Environments via Instantaneous Relative Frames,” arXiv:2509.19452, 2025. (arXiv)

[8] A. Saviolo and G. Loianno, “NOVA: Navigation via Object-Centric Visual Autonomy for High-Speed Target Tracking in Unstructured GPS-Denied Environments,” arXiv:2506.18689, 2025. (arXiv)

About the author

Alessandro Saviolo is a robotics engineer specializing in autonomous aerial systems and resilient autonomy. He received his Ph.D. in Electrical and Computer Engineering from New York University, where he developed AI-based autonomy architectures for high-speed drone flight in complex GPS-denied environments. His work was validated through more than one hundred experimental flights across forests, urban areas, and search-and-rescue scenarios. He currently works as a Senior Autonomy Engineer at PlusAI, developing AI systems for large-scale autonomous trucking.
