GeoSeg: Training-Free Reasoning-Driven Segmentation in Remote Sensing Imagery
Lifan Jiang1
Yuhang Pei1
Boxi Wu1
Yan Zhao2
Tianrun Wu1
Shulong Yu1
Lihui Zhang3
Deng Cai1
1State Key Lab of CAD&CG, Zhejiang University
2UniTTEC Co. Ltd.
3Wuchan Zhongda Chengtou (Ningbo) Holdings Ltd.
Submitted to ECCV 2026
Paper | GitHub | Dataset
Remote sensing (RS) segmentation is currently evolving from fixed-category prediction toward instruction-grounded localization; however,
the scarcity of reasoning-oriented datasets and domain-specific challenges, such as overhead viewpoints, hinder its progress.
We introduce GeoSeg, a zero-shot, training-free framework that addresses these bottlenecks by coupling the reasoning power of multimodal large language models (MLLMs)
with precise promptable segmenters.
To ensure reliable localization, GeoSeg incorporates Bias-Aware Coordinate Refinement to correct systematic grounding shifts inherent in MLLMs under rotation-invariant RS visual statistics.
Furthermore, it utilizes a Dual-Route Prompting mechanism that synergizes fine-grained visual keypoints (Route A) with semantic intent (Route B), finalized through a consensus-driven fusion strategy.
To benchmark this task, we present GeoSeg-Bench, a dedicated diagnostic dataset of 810 image-query pairs featuring hierarchical difficulty levels—ranging from explicit visual attributes to implicit reasoning tasks.
Experimental results on GeoSeg-Bench and SegEarth-R2 demonstrate that GeoSeg significantly outperforms established baselines, achieving state-of-the-art performance in instruction faithfulness and boundary precision
without any domain-specific fine-tuning.
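To give a concrete picture of the Bias-Aware Coordinate Refinement idea, the sketch below shifts an MLLM-predicted box by an estimated systematic offset and clamps it to the image. This is a minimal illustration only: the function name, signature, and the constant-bias model (a per-dataset offset estimated offline on a small calibration set) are our assumptions, not the paper's actual procedure.

```python
def refine_box(box, image_size, bias=(0.0, 0.0)):
    """Shift a predicted box by an estimated systematic offset,
    then clamp it to the image bounds.

    box        -- (x1, y1, x2, y2) in pixels, as grounded by the MLLM
    image_size -- (width, height) of the image
    bias       -- (dx, dy): mean shift of predicted vs. true boxes,
                  estimated offline on a calibration set
                  (an assumption of this sketch)
    """
    x1, y1, x2, y2 = box
    dx, dy = bias
    w, h = image_size

    def clamp(v, lo, hi):
        return max(lo, min(hi, v))

    # Subtract the systematic shift, then keep coordinates in-bounds.
    return (clamp(x1 - dx, 0, w), clamp(y1 - dy, 0, h),
            clamp(x2 - dx, 0, w), clamp(y2 - dy, 0, h))
```

For example, `refine_box((10, 10, 50, 50), (100, 100), bias=(5.0, -5.0))` returns `(5.0, 15.0, 45.0, 55.0)`, i.e. the box moves right and down to undo an up-left grounding drift.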
Abstract
Recent advances in MLLMs are reframing segmentation from fixed-category prediction to instruction-grounded localization. While reasoning-based segmentation has progressed rapidly in natural scenes, remote sensing lacks a generalizable solution due to the prohibitive cost of reasoning-oriented data and domain-specific challenges like overhead viewpoints. We present GeoSeg, a zero-shot, training-free framework that bypasses the supervision bottleneck for reasoning-driven remote sensing segmentation. GeoSeg couples MLLM reasoning with precise localization via: (i) bias-aware coordinate refinement to correct systematic grounding shifts and (ii) a dual-route prompting mechanism to fuse semantic intent with fine-grained spatial cues. We also introduce GeoSeg-Bench, a diagnostic benchmark of 810 image-query pairs with hierarchical difficulty levels. Experiments show that GeoSeg consistently outperforms all baselines, with extensive ablations confirming the effectiveness and necessity of each component.
Method / Pipeline
Overview of the GeoSeg pipeline. An MLLM grounds the instruction into coarse coordinates, Bias-Aware Coordinate Refinement corrects systematic grounding shifts, and Dual-Route Prompting combines fine-grained visual keypoints (Route A) with semantic intent (Route B), which are finalized through consensus-driven fusion before the promptable segmenter produces the mask.
Figure: GeoSeg Pipeline
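As a rough illustration of how the two routes' outputs might be reconciled, the sketch below fuses a point-prompted mask (Route A) and a text-prompted mask (Route B) by IoU consensus: agreeing masks are merged, while conflicting ones fall back to the higher-scoring route. The function names, the IoU threshold, and the fusion rule are our illustrative assumptions; the paper's actual consensus strategy may differ.

```python
import numpy as np

def mask_iou(a, b):
    """Intersection-over-union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    return np.logical_and(a, b).sum() / union

def fuse_masks(mask_a, mask_b, score_a, score_b, tau=0.5):
    """Illustrative consensus fusion of the two routes:
    - if the masks agree (IoU >= tau), keep their union;
    - otherwise, fall back to the route with the higher
      confidence score.
    """
    if mask_iou(mask_a, mask_b) >= tau:
        return np.logical_or(mask_a, mask_b)
    return mask_a if score_a >= score_b else mask_b
```

Union-on-agreement favors boundary recall when both routes locate the same object, while the score-based fallback avoids averaging two masks that refer to different regions.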
Results
Qualitative comparison with multiple baselines. This figure demonstrates the superiority of our approach over three major categories of baseline models: generalist segmentation, reasoning segmentation, and open-source MLLMs. Most baseline methods struggle to comprehend the query intent, resulting in segmentation failures or excessive noise, whereas our method successfully generates accurate masks. More examples are provided in the Appendix.
Figure: Baseline Comparison
Ablation study on component effectiveness. We validate the contribution of the Bias-Aware Coordinate Refinement (Box Refine) and the Dual-Route strategy. Route A represents the Point-Prompt path (Visual Cues), and Route B denotes the Text-Prompt path (Semantic Cues). Removing any module significantly degrades performance, confirming the necessity of our full pipeline. More examples are provided in the Appendix.
Figure: Ablation Study