State-of-the-art personalized text-to-image generation systems are typically fine-tuned on a few reference images to learn novel visual representations. However, when these images are personal and publicly available, such training can infringe the copyright of their owners. Recent progress has been made in protecting such images from unauthorized use by adding protective noise. Yet current protection methods assume that the protected images remain unchanged, which contradicts the reality that most public platforms routinely modify user-uploaded content, e.g., through image compression. This paper introduces a robust watermarking method, InMark, to protect images from unauthorized learning. Inspired by influence functions, the proposed method embeds protective watermarks into the more influential pixels of the reference images, identified from both heuristic and statistical perspectives. In this way, the personal semantics of the images remain protected even if the images are modified to some extent. Extensive experiments demonstrate that InMark outperforms previous state-of-the-art methods in both protective performance and robustness.
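To make the influence-inspired idea concrete, the following is a minimal sketch, not the paper's implementation: it uses a first-order, gradient-based proxy for pixel influence (an assumption on our part), ranking pixels by the gradient magnitude of a personalization training loss with respect to the input image so that the protective watermark budget can be concentrated on the highest-ranked pixels. The function name `pixel_importance` and the stand-in loss are hypothetical.

```python
import torch

def pixel_importance(loss_fn, image: torch.Tensor) -> torch.Tensor:
    """Return a per-pixel importance map for `image`.

    loss_fn: callable mapping an image tensor to a scalar training loss
             (e.g., the denoising loss used for personalization).
    image:   (C, H, W) reference image with values in [0, 1].
    """
    image = image.clone().requires_grad_(True)
    loss = loss_fn(image)
    # First-order influence proxy: pixels whose perturbation changes the
    # training loss the most receive the largest gradient magnitudes.
    (grad,) = torch.autograd.grad(loss, image)
    importance = grad.abs().sum(dim=0)          # aggregate over channels
    return importance / (importance.max() + 1e-8)  # normalize to [0, 1]

if __name__ == "__main__":
    img = torch.rand(3, 64, 64)
    toy_loss = lambda x: (x ** 2).mean()  # stand-in for a real diffusion loss
    imp = pixel_importance(toy_loss, img)
    print(imp.shape)  # torch.Size([64, 64]); use as a mask to place the watermark
```

In a real pipeline, `loss_fn` would be the fine-tuning objective of the personalization model, and the importance map would weight where (and how strongly) the protective perturbation is applied, so that semantically influential pixels stay protected even after platform-side edits such as compression.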