Abstract:
The widespread use of cloud-based face recognition technology raises privacy concerns, as unauthorized access to face images can expose personal information or be exploited for fraud. In response, privacy-preserving face recognition (PPFR) schemes have emerged to hide visual information and thwart unauthorized access. However, the validation methods employed by these schemes often rely on unrealistic assumptions, leaving doubts about their true effectiveness in safeguarding facial privacy. In this paper, we introduce a new privacy validation approach called Minimum Assumption Privacy Protection Validation (Map$^2$V). This is the first work to formulate a privacy validation method using deep image priors and zeroth-order gradient estimation, and it has the potential to serve as a general framework for PPFR evaluation. Building upon Map$^2$V, we comprehensively validate the privacy-preserving capability of PPFRs through a combination of human and machine vision. Experimental results and analysis demonstrate the effectiveness and generalizability of the proposed Map$^2$V, showing its superiority over the privacy validation methods native to the PPFR literature. This work also exposes privacy vulnerabilities in the evaluated state-of-the-art PPFR schemes, laying the groundwork for effective countermeasures. The source code is available at https://github.com/Beauty9882/MAP2V.
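To make the zeroth-order gradient estimation mentioned above concrete, the sketch below shows the standard two-point Gaussian-smoothing estimator, which approximates the gradient of a black-box objective using only function queries (no backpropagation). This is a generic illustration of the technique, not the authors' Map$^2$V implementation; the function names and parameters are assumptions for the example.

```python
import numpy as np

def zeroth_order_gradient(f, x, num_samples=5000, mu=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate of a black-box loss f at x.

    Averages (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over random
    Gaussian directions u; in expectation this recovers grad f(x)
    for smooth f as mu -> 0. Only queries f, never its gradient.
    """
    rng = np.random.default_rng(rng)
    grad = np.zeros_like(x)
    for _ in range(num_samples):
        u = rng.standard_normal(x.shape)          # random probe direction
        delta = f(x + mu * u) - f(x - mu * u)     # finite-difference along u
        grad += (delta / (2.0 * mu)) * u
    return grad / num_samples

# Usage: for f(x) = sum(x^2), the true gradient is 2x, so the
# estimate should be close to [2.0, -4.0, 1.0].
f = lambda x: np.sum(x ** 2)
x = np.array([1.0, -2.0, 0.5])
g = zeroth_order_gradient(f, x, num_samples=5000, rng=0)
```

In a PPFR attack setting, `f` would be a similarity score returned by the (black-box) recognition service, so an adversary can follow the estimated gradient without any access to model internals; this is what makes zeroth-order estimation a minimal-assumption validation tool.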