GeoSeg: Training-Free Reasoning-Driven Segmentation in Remote Sensing Imagery

Lifan Jiang1    Yuhang Pei1    Boxi Wu1    Yan Zhao2
Tianrun Wu1    Shulong Yu1    Lihui Zhang3    Deng Cai1

1State Key Lab of CAD&CG, Zhejiang University    2UniTTEC Co. Ltd.    3Wuchan Zhongda Chengtou (Ningbo) Holdings Ltd.

Submitted to ECCV 2026

Paper | GitHub | Dataset

Remote sensing (RS) segmentation is currently evolving from fixed-category prediction toward instruction-grounded localization; however, the scarcity of reasoning-oriented datasets and domain-specific challenges, such as overhead viewpoints, hinder its progress. We introduce GeoSeg, a zero-shot, training-free framework that addresses these bottlenecks by coupling the reasoning power of multimodal large language models (MLLMs) with precise promptable segmenters.

To ensure reliable localization, GeoSeg incorporates Bias-Aware Coordinate Refinement to correct systematic grounding shifts inherent in MLLMs under rotation-invariant RS visual statistics. Furthermore, it utilizes a Dual-Route Prompting mechanism that synergizes fine-grained visual keypoints (Route A) with semantic intent (Route B), finalized through a consensus-driven fusion strategy. To benchmark this task, we present GeoSeg-Bench, a dedicated diagnostic dataset of 810 image-query pairs featuring hierarchical difficulty levels—ranging from explicit visual attributes to implicit reasoning tasks. Experimental results on GeoSeg-Bench and SegEarth-R2 demonstrate that GeoSeg significantly outperforms established baselines, achieving state-of-the-art performance in instruction faithfulness and boundary precision without any domain-specific fine-tuning.

GeoSeg Teaser
Figure: GeoSeg Teaser

Abstract

Recent advances in MLLMs are reframing segmentation from fixed-category prediction to instruction-grounded localization. While reasoning-based segmentation has progressed rapidly in natural scenes, remote sensing lacks a generalizable solution due to the prohibitive cost of reasoning-oriented data and domain-specific challenges like overhead viewpoints. We present GeoSeg, a zero-shot, training-free framework that bypasses the supervision bottleneck for reasoning-driven remote sensing segmentation. GeoSeg couples MLLM reasoning with precise localization via: (i) bias-aware coordinate refinement to correct systematic grounding shifts and (ii) a dual-route prompting mechanism to fuse semantic intent with fine-grained spatial cues. We also introduce GeoSeg-Bench, a diagnostic benchmark of 810 image–query pairs with hierarchical difficulty levels. Experiments show that GeoSeg consistently outperforms all baselines, with extensive ablations confirming the effectiveness and necessity of each component.

GeoSeg paper thumbnail

Paper

Submitted to ECCV 2026. arXiv preprint arXiv:2603.03983, 2026.

Citation

Lifan Jiang, Yuhang Pei, Boxi Wu, Yan Zhao, Tianrun Wu, Shulong Yu, Lihui Zhang, Deng Cai. "GeoSeg: Training-Free Reasoning-Driven Segmentation in Remote Sensing Imagery". 2026.
Bibtex

Method / Pipeline

GeoSeg couples the reasoning power of multimodal large language models (MLLMs) with precise promptable segmenters in a zero-shot, training-free pipeline, organized in two phases:
Phase 1 – Bias-Aware Coordinate Refinement. GeoSeg incorporates Bias-Aware Coordinate Refinement to correct systematic grounding shifts inherent in MLLMs under rotation-invariant RS visual statistics.
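As a minimal sketch, the refinement can be viewed as an asymmetric expansion of the MLLM's coarse box before it is handed to the segmenter. The expansion fractions below (`alpha` for the left/top edges, `beta` for the right/bottom edges) and their exact roles are illustrative assumptions, not the paper's calibrated values:

```python
def refine_box(box, alpha, beta, img_w, img_h):
    """Asymmetrically expand a coarse box (x1, y1, x2, y2) to counteract
    a systematic grounding shift, then clip to the image bounds.

    alpha expands the left/top edges and beta the right/bottom edges,
    each as a fraction of the box's width/height (assumed convention).
    """
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    x1n = max(0, x1 - alpha * w)   # push left edge outward
    y1n = max(0, y1 - alpha * h)   # push top edge outward
    x2n = min(img_w, x2 + beta * w)  # push right edge outward
    y2n = min(img_h, y2 + beta * h)  # push bottom edge outward
    return (x1n, y1n, x2n, y2n)
```

A larger `beta` than `alpha` would encode a belief that boxes are systematically shifted toward the top-left; the actual bias direction would be estimated empirically.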
Phase 2 – Dual-Route Prompting and Fusion. GeoSeg employs a Dual-Route Prompting mechanism that synergizes fine-grained visual keypoints (Route A) with semantic intent (Route B), finalized through a consensus-driven fusion strategy. To benchmark this task, we construct GeoSeg-Bench, a diagnostic dataset of 810 image–query pairs with hierarchical difficulty levels, on which GeoSeg achieves state-of-the-art instruction faithfulness and boundary precision without any domain-specific fine-tuning.
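One way to realize a consensus-driven fusion of the two routes is an intersection-first rule: trust the region where both routes agree, and fall back to the union when agreement is too weak. The IoU threshold `min_overlap` and the plain-list mask representation are assumptions for illustration:

```python
def intersection_first_fuse(mask_a, mask_b, min_overlap=0.25):
    """Fuse two binary masks (nested lists of 0/1 of equal shape).

    If the routes overlap enough (IoU >= min_overlap), keep their
    intersection as the consensus; otherwise fall back to the union
    so that neither route's evidence is discarded outright.
    """
    inter = [[a & b for a, b in zip(ra, rb)] for ra, rb in zip(mask_a, mask_b)]
    union = [[a | b for a, b in zip(ra, rb)] for ra, rb in zip(mask_a, mask_b)]
    inter_area = sum(map(sum, inter))
    union_area = sum(map(sum, union))
    iou = inter_area / union_area if union_area else 0.0
    return inter if iou >= min_overlap else union
```

In practice the masks would be NumPy or torch tensors and the fallback policy (union, or preferring one route) is a design choice the paper's fusion strategy would pin down.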

Overview of the GeoSeg pipeline. Given a remote sensing image \(I\) and a natural language query \(q\), the pipeline operates in three stages: (1) Reasoning-Driven Grounding: the MLLM \(\mathcal{L}\) generates a coarse bounding box \(b\) and extracts the object prompt \(p\). (2) Bias-Aware Coordinate Refinement: to mitigate grounding bias, the box is adjusted via asymmetric expansion \((\alpha, \beta)\) to yield a refined RoI \(I_{b'}\). (3) Dual-Route Segmentation & Fusion: within the RoI, we perform parallel segmentation using Route A (visual cues via CLIP Surgery) and Route B (semantic cues via SAM3 with prompt \(p\)). The final prediction \(\hat{M}\) is obtained by integrating both paths via Intersection-First Fusion.
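The three stages in the caption can be read as a short orchestration routine. All component signatures below are assumptions; the models (MLLM grounding, Route A/B segmenters, fusion) are injected as callables rather than named library calls:

```python
def geoseg_pipeline(image, query, mllm_ground, refine_box,
                    segment_route_a, segment_route_b, fuse):
    """Sketch of the three-stage GeoSeg flow (signatures assumed).

    Stage 1: the MLLM grounds the query to a coarse box b and an
             object prompt p.
    Stage 2: the box is refined into an RoI via bias-aware expansion.
    Stage 3: Route A (visual cues) and Route B (semantic cues via p)
             segment the RoI in parallel, then their masks are fused.
    """
    box, prompt = mllm_ground(image, query)          # stage 1: grounding
    roi_box = refine_box(box)                        # stage 2: refinement
    mask_a = segment_route_a(image, roi_box)         # route A: visual keypoints
    mask_b = segment_route_b(image, roi_box, prompt) # route B: semantic prompt
    return fuse(mask_a, mask_b)                      # stage 3: consensus fusion
```

Any concrete MLLM and promptable segmenter with these roles could be plugged in; the paper instantiates them with an MLLM, CLIP Surgery, and SAM3.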

GeoSeg Pipeline
Figure: GeoSeg Pipeline

Results

Qualitative comparison with multiple baselines. This figure demonstrates the superiority of our approach over three major categories of baseline models: generalist segmentation, reasoning segmentation, and open-source MLLMs. Most baseline methods struggle to comprehend the query intent, resulting in segmentation failures or excessive noise, whereas our method successfully generates accurate masks. More examples are provided in the Appendix.

Baseline Comparison
Figure: Baseline Comparison

Ablation study on component effectiveness. We validate the contribution of the Bias-Aware Coordinate Refinement (Box Refine) and the Dual-Route strategy. Route A represents the Point-Prompt path (Visual Cues), and Route B denotes the Text-Prompt path (Semantic Cues). Removing any module significantly degrades performance, confirming the necessity of our full pipeline. More examples are provided in the Appendix.

Ablation Study
Figure: Ablation Study

BibTeX

@misc{jiang2026geosegtrainingfreereasoningdrivensegmentation,
  title={GeoSeg: Training-Free Reasoning-Driven Segmentation in Remote Sensing Imagery},
  author={Lifan Jiang and Yuhang Pei and Boxi Wu and Yan Zhao and Tianrun Wu and Shulong Yu and Lihui Zhang and Deng Cai},
  year={2026},
  eprint={2603.03983},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2603.03983},
}