LoD-Loc v2: Aerial Visual Localization over Low Level-of-Detail City Models using Explicit Silhouette Alignment

ICCV 2025

1 National University of Defense Technology 2 Westlake University
Teaser Visualization

In this paper, we introduce LoD-Loc v2 to tackle aerial visual localization using low-LoD city models. These models are characterized by wide availability, lightweight properties, and inherent privacy-preserving capabilities. Given a query image with its prior pose, our approach utilizes the explicit silhouette alignment to recover the camera pose.

Abstract

We propose a novel method for aerial visual localization over low Level-of-Detail (LoD) city models. The previous wireframe-alignment-based method, LoD-Loc, has shown promising localization results by leveraging LoD models. However, LoD-Loc mainly relies on high-LoD (LoD3 or LoD2) city models, whereas the majority of available models, including those that many countries plan to construct nationwide, are low-LoD (LoD1). Consequently, enabling localization on low-LoD city models could unlock drones' potential for global urban localization. To address this gap, we introduce LoD-Loc v2, which employs a coarse-to-fine strategy with explicit silhouette alignment to achieve accurate aerial localization over low-LoD city models. Specifically, given a query image, LoD-Loc v2 first applies a building segmentation network to extract building silhouettes. Then, in the coarse pose selection stage, we construct a pose cost volume by uniformly sampling pose hypotheses around a prior pose to represent the pose probability distribution. Each cost in the volume measures the degree of alignment between the projected and predicted silhouettes, and the pose with the maximum value is selected as the coarse pose. In the fine pose estimation stage, a particle filtering method incorporating a multi-beam tracking approach is used to efficiently explore the hypothesis space and obtain the final pose estimate. To further facilitate research in this field, we release two datasets with LoD1 city models covering 10.7 km², along with real RGB queries and ground-truth pose annotations. Experimental results show that LoD-Loc v2 improves estimation accuracy with high-LoD models and enables localization with low-LoD models for the first time. Moreover, it outperforms state-of-the-art baselines by large margins, even surpassing texture-model-based methods, and broadens the convergence basin to accommodate larger prior errors. The project is available at https://github.com/VictorZoo/LoD-Loc-v2.
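The coarse stage above scores each sampled pose hypothesis by how well the projected model silhouette aligns with the silhouette predicted by the segmentation network. The sketch below is a minimal illustration of that idea, assuming IoU as the alignment measure and a hypothetical `render_silhouette` callback that projects the LoD model under a given pose; the paper's actual cost function and renderer may differ.

```python
import numpy as np

def silhouette_alignment_score(projected: np.ndarray, predicted: np.ndarray) -> float:
    """Score one pose hypothesis by the overlap (IoU, an illustrative choice)
    between the projected model silhouette and the predicted building mask."""
    inter = np.logical_and(projected, predicted).sum()
    union = np.logical_or(projected, predicted).sum()
    return float(inter) / float(union) if union > 0 else 0.0

def coarse_pose_selection(pose_hypotheses, predicted_mask, render_silhouette):
    """Build a cost volume over uniformly sampled hypotheses and return the
    one with the highest alignment score, plus all the scores.
    `render_silhouette` is a hypothetical pose -> binary-mask renderer."""
    costs = np.array([
        silhouette_alignment_score(render_silhouette(xi), predicted_mask)
        for xi in pose_hypotheses
    ])
    return pose_hypotheses[int(np.argmax(costs))], costs
```

In the actual method, the hypotheses form a 4D volume sampled around the prior pose; here they are just an iterable for clarity.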

Demo Video

Please watch the video for a detailed explanation of our pipeline and qualitative results.

Overview of Methodology

Overview of LoD-Loc v2

1. LoD-Loc v2 employs a building segmentation module to extract building silhouettes \( M_q \) from the query image \( I_q \).
2. A 4D pose cost volume \( \mathcal{C} \) is built for pose hypotheses \( \{ \boldsymbol{\xi}_{hyp} \} \) sampled around the prior pose \( {\boldsymbol{\xi}}_p \) to select the pose \( {\boldsymbol{\xi}}_c \) with the highest probability, based on the alignment between projected and predicted building silhouettes.
3. A particle filter refinement is applied to refine the pose \( {\boldsymbol{\xi}}_c \) to obtain a final accurate pose \( {\boldsymbol{\xi}}^{*} \).
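Step 3 refines the coarse pose by exploring the hypothesis space with a particle filter. The toy sketch below shows only a generic resample-and-shrink loop in a flat parameter vector; the paper's multi-beam tracking variant over 6-DoF poses is more elaborate, and `score_fn` stands in for the silhouette alignment cost.

```python
import numpy as np

def particle_filter_refine(coarse_pose, score_fn, n_particles=64, n_iters=5,
                           init_sigma=1.0, shrink=0.5, rng=None):
    """Toy particle-filter refinement around a coarse pose estimate.
    Particles are resampled in proportion to their alignment scores and the
    search radius shrinks each iteration, concentrating the hypotheses."""
    rng = np.random.default_rng(rng)
    pose = np.asarray(coarse_pose, dtype=float)
    sigma = init_sigma
    for _ in range(n_iters):
        # Spread particles around the current estimate.
        particles = pose + rng.normal(0.0, sigma, size=(n_particles, pose.size))
        # Weight each particle by its alignment score and normalize.
        weights = np.array([score_fn(p) for p in particles])
        weights = np.clip(weights, 1e-12, None)
        weights /= weights.sum()
        # Resample in proportion to the weights, then take the consensus.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        pose = particles[idx].mean(axis=0)
        sigma *= shrink  # narrow the hypothesis space each round
    return pose
```

With a smooth score function peaked at the true pose, repeated resampling pulls the estimate toward the peak while the shrinking sigma trades exploration for precision.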

Visualization Results

The visualized results demonstrate the model's strong segmentation performance; better silhouette alignment indicates a more accurate pose prediction. The top two rows are from the UAVD4L-LoDv2 dataset, while the bottom two rows are from the Swiss-EPFLv2 dataset.

BibTeX

If you find this work useful for your research, please cite our paper:

@inproceedings{LoDLocv2,
  title={LoD-Loc v2: Aerial Visual Localization over Low Level-of-Detail City Models using Explicit Silhouette Alignment},
  author={Zhu, Juelin and Peng, Shuaibang and Wang, Long and Tan, Hanlin and Liu, Yu and Zhang, Maojun and Yan, Shen},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2025}
}