VLM-RRT: Vision Language Model Guided RRT Search for Autonomous UAV Navigation
Authors: Jianlin Ye, Savvas Papaioannou, Panayiotis Kolios
Venue: 2025 International Conference on Unmanned Aircraft Systems (ICUAS)
Location: Charlotte, NC, USA
Pages: 633-640
Publisher: IEEE
DOI: 10.1109/ICUAS65942.2025.11007837
Abstract
Path planning is a fundamental capability of autonomous Unmanned Aerial Vehicles (UAVs), enabling them to efficiently navigate toward a target region or explore complex environments while avoiding obstacles. Traditional path-planning methods, such as Rapidly-exploring Random Trees (RRT), have proven effective but often encounter significant challenges, including high search-space complexity, suboptimal path quality, and slow convergence. These issues are particularly problematic in high-stakes applications such as disaster response, where rapid and efficient planning is critical.
To address these limitations and enhance path-planning efficiency, we propose Vision Language Model RRT (VLM-RRT), a hybrid approach that integrates the pattern recognition capabilities of Vision Language Models (VLMs) with the path-planning strengths of RRT. By leveraging VLMs to provide initial directional guidance based on environmental snapshots, our method biases sampling toward regions more likely to contain feasible paths, significantly improving sampling efficiency and path quality.
Extensive quantitative and qualitative experiments with various state-of-the-art VLMs demonstrate the effectiveness of the proposed approach.
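The core idea of biasing RRT sampling toward a VLM-suggested direction can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `biased_sample`, its parameters, and the cone-based bias model are assumptions, and the VLM's output is reduced to a single heading angle.

```python
import math
import random

def biased_sample(bounds, vlm_direction, bias_prob=0.7,
                  cone_half_angle=math.pi / 6,
                  origin=(0.0, 0.0), max_range=10.0, rng=random):
    """Draw a 2D sample for RRT, biased toward a VLM-suggested heading.

    With probability `bias_prob`, sample inside a cone of half-angle
    `cone_half_angle` (radians) around `vlm_direction`, measured from
    `origin`; otherwise sample uniformly over the rectangular workspace
    `bounds` = (xmin, xmax, ymin, ymax). Mixing in uniform samples
    preserves the probabilistic completeness of plain RRT.
    """
    xmin, xmax, ymin, ymax = bounds
    if rng.random() < bias_prob:
        # Biased draw: random heading within the cone, random range.
        theta = vlm_direction + rng.uniform(-cone_half_angle, cone_half_angle)
        r = rng.uniform(0.0, max_range)
        x = origin[0] + r * math.cos(theta)
        y = origin[1] + r * math.sin(theta)
        # Clamp to the workspace so the sample stays valid.
        return (min(max(x, xmin), xmax), min(max(y, ymin), ymax))
    # Uniform draw over the whole workspace.
    return (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
```

In a full planner, the heading would come from querying a VLM with an environment snapshot, and each biased sample would feed the standard RRT extend step (nearest neighbor, steering, collision check).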
BibTeX
@inproceedings{ye2025vlmrrt,
  title={VLM-RRT: Vision Language Model Guided RRT Search for Autonomous UAV Navigation},
  author={Ye, Jianlin and Papaioannou, Savvas and Kolios, Panayiotis},
  booktitle={2025 International Conference on Unmanned Aircraft Systems (ICUAS)},
  pages={633--640},
  year={2025},
  publisher={IEEE},
  address={Charlotte, NC, USA},
  doi={10.1109/ICUAS65942.2025.11007837}
}