EditThinker
Instruction-based image editing has emerged as a prominent research area. Built on image generation foundation models, such systems already achieve high aesthetic quality, leaving instruction-following capability as the primary challenge. Existing approaches improve instruction adherence via supervised or reinforcement learning, yet single-turn success rates remain limited due to the inherent stochasticity of generation and a lack of deliberation.
In this work, we propose a deliberative editing framework that enables editing models to "think" while they edit. It simulates the human cognitive loop by iteratively executing a Think-while-Edit cycle: critiquing results and refining instructions, then repeating generation until the result is satisfactory. Specifically, we train a single MLLM, EditThinker, to act as the reasoning engine of this framework; it jointly produces a critique score, a reasoning process, and a refined instruction. We employ reinforcement learning to align EditThinker's thinking with its editing, thereby generating more targeted instruction refinements.
Extensive experiments on four benchmarks demonstrate that our approach substantially improves the instruction-following capability of any image editing model.
At iteration $t$, EditThinker receives the state $(I_{src}, I_{edit}^{t-1}, T_{t-1}, T_s)$, comprising the source image, the previous edit, the previous instruction, and the original source instruction, and jointly evaluates the previous edit, generating critique score $S_t$ and reasoning process $R_t$.
Based on the critique, EditThinker generates a refined instruction $T_t$ that addresses identified deficiencies, creating an explicit chain of thought that grounds instruction refinement in visual critique.
The Editor executes the refined instruction $T_t$ on $I_{src}$ to produce $I_{edit}^t$. This Critique-Refine-Repeat cycle continues until the score $S_t$ indicates success.
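The Critique-Refine-Repeat cycle described above can be sketched as a simple control loop. This is a minimal illustration, not the authors' implementation: `thinker` and `editor` are hypothetical callables standing in for EditThinker and the underlying editing model, and the success threshold and iteration budget are assumed parameters.

```python
from dataclasses import dataclass


@dataclass
class ThinkerOutput:
    """Joint output of EditThinker at one iteration."""
    score: float        # critique score S_t
    reasoning: str      # reasoning process R_t
    instruction: str    # refined instruction T_t


def think_while_edit(src_image, source_instruction, thinker, editor,
                     threshold=0.9, max_iters=5):
    """Run the Critique-Refine-Repeat cycle until the critique score
    indicates success or the iteration budget is exhausted.

    thinker(I_src, I_edit, T_prev, T_s) -> ThinkerOutput
    editor(I_src, T) -> edited image
    """
    instruction = source_instruction
    edited = editor(src_image, instruction)  # initial edit attempt
    for _ in range(max_iters):
        # Critique the previous edit and propose a refined instruction.
        out = thinker(src_image, edited, instruction, source_instruction)
        if out.score >= threshold:           # S_t signals success
            break
        instruction = out.instruction        # refined T_t
        # Re-execute on the original source image, not the previous edit.
        edited = editor(src_image, instruction)
    return edited, instruction
```

Note that each refined instruction is applied to $I_{src}$ rather than to the previous output, matching the description above.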
To train EditThinker, we constructed ThinkEdit-140k, a dataset of editing trajectories. We employ reinforcement learning (RL) on difficult, high-variance trajectories to align EditThinker's reasoning with actual editing outcomes, while using supervised fine-tuning (SFT) on stable trajectories to build foundational capabilities.
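One way to realize the RL/SFT split is to route each group of sampled trajectories by the spread of its critique scores. The variance criterion and threshold below are assumptions for illustration; the paper only states that high-variance trajectories go to RL and stable ones to SFT.

```python
from statistics import pstdev


def split_trajectories(trajectory_groups, var_threshold=0.2):
    """Route each group of sampled trajectories to the RL or SFT pool.

    Hypothetical criterion: groups whose critique scores vary widely
    (difficult, high-variance cases) are sent to RL, where outcome
    feedback is informative; groups with stable scores are sent to
    SFT to teach foundational critique-and-refine behavior.
    """
    rl_pool, sft_pool = [], []
    for group in trajectory_groups:
        scores = [t["score"] for t in group]
        if pstdev(scores) > var_threshold:
            rl_pool.append(group)   # high-variance -> RL
        else:
            sft_pool.append(group)  # stable -> SFT
    return rl_pool, sft_pool
```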
@misc{li2025editthinkerunlockingiterativereasoning,
  title={EditThinker: Unlocking Iterative Reasoning for Any Image Editor},
  author={Hongyu Li and Manyuan Zhang and Dian Zheng and Ziyu Guo and Yimeng Jia and Kaituo Feng and Hao Yu and Yexin Liu and Yan Feng and Peng Pei and Xunliang Cai and Linjiang Huang and Hongsheng Li and Si Liu},
  year={2025},
  eprint={2512.05965},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2512.05965},
}