Foundation segmentation models, while powerful, pose a significant risk: they enable users to effortlessly extract any objects from any digital content with a single click, potentially leading to copyright infringement or malicious misuse. To mitigate this risk, we introduce a new task, "Anything Unsegmentable", to grant any image
"the right to be unsegmented". The ambitious goal of this task is to achieve a highly transferable adversarial attack against all prompt-based segmentation models, regardless of model parameterization and prompts. Through observation and analysis, we find that prompt-specific adversarial attacks generate highly divergent perturbations that transfer narrowly, owing to the heterogeneous nature of prompts. To achieve prompt-agnostic attacks, we instead focus on manipulating the features of the image encoder. Surprisingly, we find that targeted feature perturbations lead to more transferable attacks, suggesting that the optimal optimization direction lies along the image distribution. Based on these observations, we design a novel attack named Unsegment Anything by Simulating Deformation (UAD). Our attack optimizes a differentiable deformation function to create a target deformed image that alters structural information while keeping its features within reach of a bounded adversarial perturbation. The optimization objective trades off the degree of structural deformation against the fidelity with which the adversarial noise can simulate that deformation. Extensive experiments verify the effectiveness of our approach, which compromises a variety of promptable segmentation models with different architectures and prompt interfaces.
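To make the two-stage recipe concrete, below is a minimal PyTorch-style sketch of the idea, not the authors' released implementation: the `encoder` (e.g., a SAM-style image encoder returning a feature map), the flow-field parameterization of the deformation, the step counts, and the loss weight `lam` are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def uad_attack(image, encoder, deform_steps=100, noise_steps=200,
               eps=8 / 255, alpha=2 / 255, lam=1.0):
    """Two-stage sketch: (1) optimize a flow-field deformation whose
    encoder features drift away from the clean image's, then (2) craft
    an L-inf-bounded perturbation whose features match the deformed
    target (a targeted feature attack on the image encoder)."""
    with torch.no_grad():
        feat_clean = encoder(image)

    # Stage 1: differentiable deformation, parameterized as a residual
    # flow field on top of the identity sampling grid.
    n = image.size(0)
    identity = torch.eye(2, 3, device=image.device).unsqueeze(0).repeat(n, 1, 1)
    base_grid = F.affine_grid(identity, list(image.shape), align_corners=False)
    flow = torch.zeros_like(base_grid, requires_grad=True)
    opt = torch.optim.Adam([flow], lr=1e-2)
    for _ in range(deform_steps):
        warped = F.grid_sample(image, base_grid + flow, align_corners=False)
        # Push deformed features away from the clean ones, but penalize
        # large flows so the target stays near the natural image manifold.
        loss = -F.mse_loss(encoder(warped), feat_clean) + lam * flow.pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        target = F.grid_sample(image, base_grid + flow, align_corners=False)
        feat_target = encoder(target)

    # Stage 2: PGD-style *targeted* feature attack -- minimize the distance
    # between the adversarial features and the deformed target's features.
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(noise_steps):
        feat_adv = encoder((image + delta).clamp(0, 1))
        F.mse_loss(feat_adv, feat_target).backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad = None
    return (image + delta).clamp(0, 1).detach()
```

In this sketch the (assumed) weight `lam` stands in for the trade-off in the objective: larger values keep the deformation mild enough that the eps-bounded noise of stage 2 can faithfully simulate it, while smaller values allow stronger structural deformation at the cost of a harder-to-reach feature target.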