Output list
Conference proceeding
Published 22/11/2025
Proceedings of the 2025 ACM SIGSAC Conference on Computer and Communications Security, 4833 - 4835
CCS '25: ACM SIGSAC Conference on Computer and Communications Security, 13/10/2025–17/10/2025, Taipei, Taiwan
With the exponential surge in media data volumes and their growing intrinsic value, the landscape has become increasingly susceptible to persistent and strategically designed data poisoning attacks targeting these valuable assets. In this work, we propose a novel approach that leverages generative AI techniques to craft covert and robust poisonous data samples, referred to as Chimera Images. These images seamlessly blend visual features from two target classes to generate hybrid objects that preserve appearance fidelity. These "normal" samples with correct labels can subtly distort the model's decision boundary without raising suspicion. Extensive experimental results on the CIFAR-10 and Flowers datasets demonstrate that the proposed method i) reduces the accuracy of the targeted class, ii) maintains the performance of other classes, and iii) exhibits immunity to state-of-the-art defence strategies. We also explore the use of generative AI content detection as a defence mechanism, demonstrating that the recently discovered snapshot technique is ineffective against the AI-generated poisonous Chimera samples.
Conference proceeding
Mitigating Over-Unlearning in Machine Unlearning with Synthetic Data Augmentation
Published 02/2025
Algorithms and Architectures for Parallel Processing: 24th International Conference, ICA3PP 2024, Macau, China, October 29–31, 2024, Proceedings, Part IV, 300 - 314
24th International Conference, ICA3PP 2024, 29/10/2024–31/10/2024, Macau, China
In machine learning, data privacy and security have become increasingly pressing concerns. Machine unlearning addresses this issue by removing personal and sensitive data from trained models to comply with laws, regulations, and user privacy requirements. However, despite the significant benefits of this technique, performing unlearning operations is not always smooth in practice. When we attempt to remove specific data, the model may over-adjust, degrading its performance beyond the data being removed. This phenomenon not only reduces the accuracy of the model on known data but also affects its ability to generalize to new data, thus weakening the model's overall performance. In this paper, we analyze the phenomenon of over-unlearning. First, we explore how to mitigate its effects by generating synthetic data to fill in the forgotten parts, using data synthesis-based techniques. Second, we combine the data synthesis-based compensation strategy with model fine-tuning, further training the model to accommodate the synthetic data and thereby improving its adaptability and generalization capability. Through comprehensive experiments, we verify the effectiveness of the proposed data synthesis-based compensation strategy. The experimental results show that it effectively mitigates the effects of over-unlearning while maintaining the stability and accuracy of the model after specific data are removed.
Conference proceeding
Deceptive Waves: Embedding Malicious Backdoors in PPG Authentication
Published 12/2024
Web Information Systems Engineering – WISE 2024: 25th International Conference, Doha, Qatar, December 2–5, 2024, Proceedings, Part II, 258 - 272
Web Information Systems Engineering – WISE 2024, 02/12/2024–05/12/2024, Doha, Qatar
Recently, research interest has increasingly focused on using unobservable physiological signals as distinctive identifiers in biometric systems, which enhances biometric authentication. Photoplethysmography (PPG) signals, favored for their ease of acquisition and integration with machine learning, generally offer robust protection against remote adversaries during authentication. However, the robustness of PPG signal models to backdoor attacks remains unexplored, and this powerful attack may pose a security threat to PPG-based biometric authentication systems due to its stealthiness. To the best of our knowledge, this paper is the first to propose a backdoor attack targeting PPG-based biometric authentication, which uses carefully crafted waveform variations embedded in PPG signals as the backdoor. The compromised PPG-based authentication behaves maliciously only on attacker-chosen inputs, while behaving normally on clean inputs. We evaluate this backdoor attack on three popular datasets, showing that it successfully embeds and activates the backdoor in PPG-based authentication. Experimental results on state-of-the-art PPG-based authentication systems indicate that this first backdoor embedded in PPG signals poses a severe threat to PPG-based biometric authentication.
Conference proceeding
Hiding Your Signals: A Security Analysis of PPG-Based Biometric Authentication
Published 01/01/2024
Computer Security - ESORICS 2023, Part III, 14346, 183 - 202
European Symposium on Research in Computer Security, 25/09/2023–29/09/2023, The Hague, The Netherlands
Recently, physiological signal-based biometric systems have received wide attention. The Photoplethysmogram (PPG) signal is easy to measure, making it more attractive than many other physiological signals for biometric authentication. However, with the advent of remote PPG (rPPG), unobservability has been challenged: an attacker can remotely steal PPG signals by monitoring the victim's face, posing a threat to PPG-based biometrics. In this paper, we first analyze the security of PPG-based biometrics, including user authentication and communication protocols. We evaluate the signal waveforms and inter-pulse-interval information extracted by five rPPG methods. Our empirical studies on five datasets show that rPPG poses a serious threat to the authentication system: the success rate of rPPG signal spoofing attacks against user authentication reaches 35%, and the bit hit rate in inter-pulse-interval-based security protocols reaches 60%. Further, we propose an active defence strategy that hides the physiological signals of the face to resist the attack. It reduces the success rate of rPPG spoofing attacks in user authentication to 5% and the bit hit rate to 50%, the level of a random guess. Our strategy effectively prevents the exposure of PPG signals, protecting users' sensitive physiological data.
Conference proceeding
SigA: rPPG-based Authentication for Virtual Reality Head-mounted Display
Published 16/10/2023
RAID '23: Proceedings of the 26th International Symposium on Research in Attacks, Intrusions and Defenses, 686 - 699
The 26th International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2023), 16/10/2023–18/10/2023, Hong Kong, China
Consumer-grade virtual reality head-mounted displays (VR-HMDs) are becoming increasingly popular. Despite VR's convenience and booming applications, VR-based authentication schemes remain underdeveloped. Recently proposed authentication methods (Electrooculogram-based, Electrical Muscle Stimulation-based, and the like) require active user involvement, which is disruptive in scenarios such as drone flight and telemedicine. This paper proposes SigA, an effective and efficient user authentication method for VR environments that is resilient to impersonation attacks and based on a physiological signal, the Photoplethysmogram (PPG). SigA exploits the fact that PPG is a physiological signal invisible to the naked eye. Because VR-HMDs cover the eye area completely, SigA reduces the risk of signal leakage during PPG acquisition. We conducted a comprehensive analysis of SigA's feasibility on five publicly available datasets, nine different pre-trained models, three facial regions, various lengths of the video clips required for training, four different signal time intervals, and continuous authentication with different sliding window sizes. The results demonstrate that SigA achieves an average F1-score of more than 95% with a one-second signal, long enough to accommodate a complete cardiac cycle for most adults, implying its applicability in real-world scenarios. Furthermore, experiments show that SigA is resistant to zero-effort attacks, statistical attacks, impersonation attacks (with a detection accuracy of over 95%), and session hijacking attacks.
Conference proceeding
SigD: A Cross-Session Dataset for PPG-based User Authentication in Different Demographic Groups
Published 02/08/2023
2023 International Joint Conference on Neural Networks (IJCNN), 1 - 8
International Joint Conference on Neural Networks (IJCNN), 18/06/2023–23/06/2023, Gold Coast, Australia
Recently, unobservable physiological signals have received widespread attention from researchers as unique identifiers of users in biometrics. However, due to the lack of suitable datasets, existing methods are limited in evaluating cross-session scenarios, in which signals are collected at different sessions (times). In real scenarios, authentication is almost always cross-session. The datasets commonly used for Photoplethysmogram (PPG) signal authentication span around one month, which is insufficient for evaluating long-term authentication. Moreover, different demographic groups have different hemodynamic characteristics, but existing methods lack an assessment of these aspects. This paper introduces a dataset that provides insights into PPG signal-based authentication across different time spans and user groups (age, gender). As physiological signals offer unique advantages for user authentication, the potential of PPG signals is gradually being explored. Furthermore, our comparative analysis of recent publications on data-driven user authentication using PPG identifies similarities and differences in the performance of the proposed authentication models. Our findings may help future research towards a consensus on an appropriate set of performance metrics.
Conference proceeding
Video is All You Need: Attacking PPG-based Biometric Authentication
Published 07/11/2022
Proceedings of the 15th ACM Workshop on Artificial Intelligence and Security, 57 - 66
CCS '22: 2022 ACM SIGSAC Conference on Computer and Communications Security, Los Angeles, USA
Unobservable physiological signals enhance biometric authentication systems. Photoplethysmography (PPG) signals are convenient owing to their ease of measurement and are usually well protected against remote adversaries during authentication. Any leaked PPG signals help adversaries compromise biometric authentication systems, and the advent of remote PPG (rPPG) enables adversaries to acquire PPG signals through restoration. While potentially dangerous, rPPG-based attacks have been overlooked because existing methods require the victim's PPG signals. This paper proposes a novel spoofing attack that uses the waveforms of rPPG signals extracted from video clips to fool PPG-based biometric authentication. We develop a new PPG restoration model to mount the attack without any leaked PPG signals. Empirical results on state-of-the-art PPG-based biometric authentication show that the signals recovered through rPPG pose a severe threat to PPG-based biometric authentication.