
Awesome-ML-Security-and-Privacy-Papers

Awesome PRs Welcome

A curated list of Machine Learning Security & Privacy papers published in the top-4 security conferences (IEEE S&P, ACM CCS, USENIX Security, and NDSS).

Contents:

1. Security Papers

1.1 Adversarial Attack & Defense

1.1.1 Image

  1. Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited Queries. USENIX Security 2020. Transferability + Query. Black-box Attack [pdf] [code]

  2. Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning. USENIX Security 2020. Defense of Image Scaling Attack [pdf] [code]

  3. HopSkipJumpAttack: A Query-Efficient Decision-Based Attack. IEEE S&P 2020. Query-based Black-box Attack [pdf] [code]

  4. PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking. USENIX Security 2021. Adversarial Patch Defense [pdf] [code]

  5. Gotta Catch'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks. ACM CCS 2020. Builds a trap in the model to induce specific adversarial perturbations [pdf] [code]

  6. A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models. ACM CCS 2020. Perturbs both the input and the model [pdf] [code]

  7. Feature-Indistinguishable Attack to Circumvent Trapdoor-Enabled Defense. ACM CCS 2021. A new attack method can break TeD defense mechanism [pdf] [code]

  8. DetectorGuard: Provably Securing Object Detectors against Localized Patch Hiding Attacks. ACM CCS 2021. Provable robustness for patch hiding in object detection [pdf] [code]

  9. RamBoAttack: A Robust and Query Efficient Deep Neural Network Decision Exploit. NDSS 2022. Query-based black box attack [pdf] [code]

  10. What You See is Not What the Network Infers: Detecting Adversarial Examples Based on Semantic Contradiction. NDSS 2022. Generative-based AE detection [pdf] [code]

  11. AutoDA: Automated Decision-based Iterative Adversarial Attacks. USENIX Security 2022. Program Synthesis for Adversarial Attack [pdf]

  12. Blacklight: Scalable Defense for Neural Networks against Query-Based Black-Box Attacks. USENIX Security 2022. AE Detection using probabilistic fingerprints based on hash of input similarity [pdf] [code]

  13. Physical Hijacking Attacks against Object Trackers. ACM CCS 2022. Adversarial Attacks on Object Trackers [pdf] [code]

  14. Post-breach Recovery: Protection against White-box Adversarial Examples for Leaked DNN Models. ACM CCS 2022. Post-breach recovery of leaked DNN models [pdf]
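Several of the entries above (e.g., HopSkipJumpAttack, RamBoAttack, AutoDA) are decision-based attacks that see only the model's predicted label. Below is a minimal sketch, not any single paper's method, of the boundary binary search such attacks share; `model_predict` is a hypothetical hard-label query interface.

```python
import numpy as np

def boundary_binary_search(x_orig, x_adv, model_predict, orig_label, steps=25):
    """Move a known-adversarial point as close to the original input as
    possible while the model still misclassifies it."""
    low, high = 0.0, 1.0  # fraction of the way from x_adv toward x_orig
    for _ in range(steps):
        mid = (low + high) / 2.0
        x_mid = (1.0 - mid) * x_adv + mid * x_orig
        if model_predict(x_mid) != orig_label:
            low = mid   # still adversarial: move closer to the original
        else:
            high = mid  # crossed the decision boundary: back off
    return (1.0 - low) * x_adv + low * x_orig
```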

1.1.2 Text

  1. TextShield: Robust Text Classification Based on Multimodal Embedding and Neural Machine Translation. USENIX Security 2020. Defense in preprocessing [pdf]

  2. Bad Characters: Imperceptible NLP Attacks. IEEE S&P 2022. Use unicode to conduct human imperceptible attack [pdf] [code]

  3. Order-Disorder: Imitation Adversarial Attacks for Black-box Neural Ranking Models. ACM CCS 2022. Attack Neural Ranking Models [pdf]

1.1.3 Audio

  1. WaveGuard: Understanding and Mitigating Audio Adversarial Examples. USENIX Security 2021. Defense in preprocessing [pdf] [code]

  2. Dompteur: Taming Audio Adversarial Examples. USENIX Security 2021. Defense in preprocessing. Preprocesses the audio to make the adversarial noise noticeable to humans [pdf] [code]

  3. Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems. IEEE S&P 2021. Attack [pdf] [code]

  4. Hear "No Evil", See "Kenansville": Efficient and Transferable Black-Box Attacks on Speech Recognition and Voice Identification Systems. IEEE S&P 2021. Black-box Attack [pdf]

  5. SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems. IEEE S&P 2021. Survey [pdf]

  6. AdvPulse: Universal, Synchronization-free, and Targeted Audio Adversarial Attacks via Subsecond Perturbations. ACM CCS 2020. Attack [pdf]

  7. Black-box Adversarial Attacks on Commercial Speech Platforms with Minimal Information. ACM CCS 2021. Black-box Attack. Physical World [pdf]

  8. Perception-Aware Attack: Creating Adversarial Music via Reverse-Engineering Human Perception. ACM CCS 2022. Adversarial Audio with human-aware noise [pdf]

  9. SpecPatch: Human-in-the-Loop Adversarial Audio Spectrogram Patch Attack on Speech Recognition. ACM CCS 2022. Adversarial Patch for audio [pdf]

1.1.4 Video

  1. Universal 3-Dimensional Perturbations for Black-Box Attacks on Video Recognition Systems. IEEE S&P 2022. Adversarial attack in video recognition [pdf]

1.1.5 Graph

  1. A Hard Label Black-box Adversarial Attack Against Graph Neural Networks. ACM CCS 2021. Graph Classification [pdf]

1.1.6 Software

  1. Evading Classifiers by Morphing in the Dark. ACM CCS 2017. Morpher and search to generate adversarial PDF [pdf]

  2. Misleading Authorship Attribution of Source Code using Adversarial Learning. USENIX Security 2019. Adversarial attack in source code, MCTS [pdf] [code]

  3. Intriguing Properties of Adversarial ML Attacks in the Problem Space. IEEE S&P 2020. Attack Malware Classification [pdf]

  4. Structural Attack against Graph Based Android Malware Detection. IEEE S&P 2020. Perturbs the function call graph [pdf]

1.1.7 Hardware

  1. ATTRITION: Attacking Static Hardware Trojan Detection Techniques Using Reinforcement Learning. ACM CCS 2022. Attack Hardware Trojan Detection [pdf]

1.1.8 Interpretation Methods

  1. Interpretable Deep Learning under Fire. USENIX Security 2020. Attacks both image classification and interpretation methods [pdf]

  2. “Is your explanation stable?”: A Robustness Evaluation Framework for Feature Attribution. ACM CCS 2022. Hypothesis testing to increase the robustness of explanation methods [pdf]

1.1.9 Physical World

  1. SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations. USENIX Security 2021. Projector light causes misclassification [pdf] [code]

  2. Understanding Real-world Threats to Deep Learning Models in Android Apps. ACM CCS 2022. Adversarial Attack in real-world models [pdf]

1.1.10 Reinforcement Learning

  1. Adversarial Policy Training against Deep Reinforcement Learning. USENIX Security 2021. Adversarial policy whose unusual behavior triggers abnormal actions in the opponent. Two-agent competitive game [pdf] [code]

1.1.11 Robust Defense

  1. Cost-Aware Robust Tree Ensembles for Security Applications. USENIX Security 2021. Proposes feature-level costs to certify model robustness [pdf] [code]

  2. CADE: Detecting and Explaining Concept Drift Samples for Security Applications. USENIX Security 2021. Detects concept drift [pdf] [code]

  3. Learning Security Classifiers with Verified Global Robustness Properties. ACM CCS 2021. Train a classifier with global robustness [pdf] [code]

  4. On the Robustness of Domain Constraints. ACM CCS 2021. Domain constraints. Input space robustness [pdf]

  5. Cert-RNN: Towards Certifying the Robustness of Recurrent Neural Networks. ACM CCS 2021. Certifies the robustness of RNNs [pdf]

  6. TSS: Transformation-Specific Smoothing for Robustness Certification. ACM CCS 2021. Certifies robustness against semantic transformations [pdf][code]

  7. Transcend: Detecting Concept Drift in Malware Classification Models. USENIX Security 2017. Conformal evaluators [pdf][code]

  8. Transcending Transcend: Revisiting Malware Classification in the Presence of Concept Drift. IEEE S&P 2022. New conformal evaluators [pdf][code]

  9. Transferring Adversarial Robustness Through Robust Representation Matching. USENIX Security 2022. Robust Transfer Learning [pdf]
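Certification entries such as TSS build on randomized smoothing. Below is a minimal sketch of the smoothing prediction step, assuming a hypothetical `model_predict` base classifier; sigma and n are illustrative, and real certification additionally derives a robustness radius from the vote margin.

```python
import numpy as np
from collections import Counter

def smoothed_predict(model_predict, x, sigma=0.25, n=1000, seed=0):
    """Majority vote of the base classifier over Gaussian-noised copies of x."""
    rng = np.random.default_rng(seed)
    votes = Counter(
        model_predict(x + rng.normal(0.0, sigma, size=x.shape)) for _ in range(n)
    )
    return votes.most_common(1)[0][0]
```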

1.1.12 Network Traffic

  1. Defeating DNN-Based Traffic Analysis Systems in Real-Time With Blind Adversarial Perturbations. USENIX Security 2021. Adversarial attack to defeat DNN-based traffic analysis [pdf][code]

1.1.13 Wireless Communication System

  1. Robust Adversarial Attacks Against DNN-Based Wireless Communication Systems. ACM CCS 2021. Attack [pdf]

1.2 Distributed Machine Learning

1.2.1 Federated Learning

  1. Local Model Poisoning Attacks to Byzantine-Robust Federated Learning. USENIX Security 2020. Poisoning Attack [pdf]

  2. Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning. NDSS 2021. Poisoning Attack [pdf]

  3. DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection. NDSS 2022. Backdoor defense [pdf]

  4. FLAME: Taming Backdoors in Federated Learning. USENIX Security 2022. Backdoor defense [pdf]

  5. EIFFeL: Ensuring Integrity for Federated Learning. ACM CCS 2022. New FL protocol to guarantee integrity [pdf]

  6. Eluding Secure Aggregation in Federated Learning via Model Inconsistency. ACM CCS 2022. Model inconsistency to break the secure aggregation [pdf]

  7. FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information. IEEE S&P 2023. Poisoned Model Recovery Algorithm [pdf]
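To make the poisoning entries above concrete, here is a minimal sketch with synthetic updates of why plain FedAvg is fragile while coordinate-wise median, one of the Byzantine-robust aggregators these attacks target, resists a single malicious client.

```python
import numpy as np

rng = np.random.default_rng(0)
benign = [rng.normal(0.0, 0.1, size=4) for _ in range(9)]  # honest client updates
poisoned = np.full(4, 100.0)                               # one malicious update
updates = np.stack(benign + [poisoned])

print("FedAvg :", updates.mean(axis=0))        # dragged far off by the outlier
print("Median :", np.median(updates, axis=0))  # barely affected
```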

1.2.2 Conventional Distributed Learning

  1. Justinian's GAAvernor: Robust Distributed Learning with Gradient Aggregation Agent. USENIX Security 2020. Defense in Gradient Aggregation. Reinforcement learning [pdf]

1.3 Data Poisoning

1.3.1 Hijack Embedding

  1. Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning. IEEE S&P 2020. Hijack Word Embedding [pdf]

1.3.2 Hijack Autocomplete Code

  1. You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion. USENIX Security 2021. Hijack Code Autocomplete [pdf]

1.3.3 Semi-Supervised Learning

  1. Poisoning the Unlabeled Dataset of Semi-Supervised Learning. USENIX Security 2021. Poisoning semi-supervised learning [pdf]

1.3.4 Recommender Systems

  1. Data Poisoning Attacks to Deep Learning Based Recommender Systems. NDSS 2021. Makes attacker-chosen items be recommended as often as possible [pdf]

  2. Reverse Attack: Black-box Attacks on Collaborative Recommendation. ACM CCS 2021. Black-box setting. Surrogate model. Collaborative Filtering. Demoting and Promoting [pdf]

1.3.5 Classification

  1. Subpopulation Data Poisoning Attacks. ACM CCS 2021. Poisoning attack that flips predictions on a targeted subpopulation of samples [pdf]

  2. Get a Model! Model Hijacking Attack Against Machine Learning Models. NDSS 2022. Fuses datasets to hijack the model [pdf] [code]

1.3.6 Contrastive Learning

  1. PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning. USENIX Security 2022. Poisoning attack in contrastive learning [pdf]

1.3.7 Privacy

  1. Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets. ACM CCS 2022. Poisoning attack to reveal sensitive information [pdf]

1.3.8 Defense

  1. Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks. USENIX Security 2022. Identifies the poisoned subset by clustering and pruning the benign set [pdf]

1.4 Backdoor

1.4.1 Image

  1. Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection. USENIX Security 2021. Class-specific Backdoor. Defense by decomposition [pdf]

  2. Double-Cross Attacks: Subverting Active Learning Systems. USENIX Security 2021. Active Learning System. Backdoor Attack [pdf]

  3. Detecting AI Trojans Using Meta Neural Analysis. IEEE S&P 2021. Meta Neural Classifier [pdf] [code]

  4. BadEncoder: Backdoor Attacks to Pre-trained Encoders in Self-Supervised Learning. IEEE S&P 2022. Backdoor attack in image-text pretrained model [pdf] [code]

  5. Composite Backdoor Attack for Deep Neural Network by Mixing Existing Benign Features. ACM CCS 2020. Composite backdoor. Image & text tasks [pdf] [code]

  6. AI-Lancet: Locating Error-inducing Neurons to Optimize Neural Networks. ACM CCS 2021. Locates error-inducing neurons and fine-tunes them [pdf]

  7. LoneNeuron: a Highly-Effective Feature-Domain Neural Trojan Using Invisible and Polymorphic Watermarks. ACM CCS 2022. Backdoor attack by modifying neurons [pdf]

  8. ATTEQ-NN: Attention-based QoE-aware Evasive Backdoor Attacks. NDSS 2022. Backdoor attack by attention techniques [pdf]

  9. RAB: Provable Robustness Against Backdoor Attacks. IEEE S&P 2023. Backdoor certification [pdf]
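For readers new to backdoors, here is a minimal BadNets-style sketch (not any specific entry above) of planting a trigger by data poisoning, assuming grayscale images of shape (N, H, W) scaled to [0, 1]; the 3x3 corner patch and 5% poisoning rate are illustrative assumptions.

```python
import numpy as np

def poison(images, labels, target_class, rate=0.05, seed=0):
    """Stamp a white 3x3 trigger on a random fraction of images and
    relabel them so the model learns trigger -> target_class."""
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images[idx, -3:, -3:] = 1.0   # trigger patch in the bottom-right corner
    labels[idx] = target_class    # attacker-chosen label
    return images, labels
```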

1.4.2 Text

  1. T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification. USENIX Security 2021. Backdoor Defense. GAN to recover trigger [pdf] [code]

  2. Hidden Backdoors in Human-Centric Language Models. ACM CCS 2021. Novel trigger [pdf] [code]

  3. Backdoor Pre-trained Models Can Transfer to All. ACM CCS 2021. Backdoor in a pre-trained model to poison downstream tasks [pdf] [code]

  4. Hidden Trigger Backdoor Attack on NLP Models via Linguistic Style Manipulation. USENIX Security 2022. Backdoor via linguistic style manipulation [pdf]

1.4.3 Graph

  1. Graph Backdoor. USENIX Security 2021. Classification [pdf] [code]

1.4.4 Software

  1. Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers. USENIX Security 2021. Explanation Method. Evade Classification [pdf] [code]

1.5 ML Library Security

1.5.1 Loss

  1. Blind Backdoors in Deep Learning Models. USENIX Security 2021. Loss Manipulation. Backdoor [pdf] [code]

1.6 AI4Security

1.6.1 Cyberbullying

  1. Towards Understanding and Detecting Cyberbullying in Real-world Images. NDSS 2021. Detects image-based cyberbullying [pdf]

1.6.2 Label Completion

  1. FARE: Enabling Fine-grained Attack Categorization under Low-quality Labeled Data. NDSS 2021. Clustering method to complete dataset labels [pdf] [code]

1.6.3 Advertisement detection

  1. WtaGraph: Web Tracking and Advertising Detection using Graph Neural Networks. IEEE S&P 2022. GNN [pdf]

1.6.4 CAPTCHA

  1. Text Captcha Is Dead? A Large Scale Deployment and Empirical Study. ACM CCS 2020. Adversarial CAPTCHA [pdf]

1.6.5 Code embedding

  1. PalmTree: Learning an Assembly Language Model for Instruction Embedding. ACM CCS 2021. Pre-trained model to generate code embedding [pdf] [code]

1.6.6 Chatbot

  1. Why So Toxic? Measuring and Triggering Toxic Behavior in Open-Domain Chatbots. ACM CCS 2022. Measures and triggers toxic behavior in chatbots [pdf]

1.6.7 Survey

  1. Dos and Don'ts of Machine Learning in Computer Security. USENIX Security 2022. Surveys pitfalls in ML4Security [pdf]

1.6.8 Security Event

  1. CERBERUS: Exploring Federated Prediction of Security Events. ACM CCS 2022. Federated learning to predict security events [pdf]

1.7 AutoML Security

1.7.1 Security Analysis

  1. On the Security Risks of AutoML. USENIX Security 2022. Adversarial evasion. Model poisoning. Backdoor. Functionality stealing. Membership Inference [pdf]

1.8 Hardware Related Security

1.8.1 Verification

  1. DeepDyve: Dynamic Verification for Deep Neural Networks. ACM CCS 2020. [pdf]

1.9 Security-Related Interpretation Methods

1.9.1 Unsupervised Learning

  1. DeepAID: Interpreting and Improving Deep Learning-based Anomaly Detection in Security Applications. ACM CCS 2021. Anomaly detection [pdf] [code]

1.10 Deepfake

1.10.1 Deepfake Detection

  1. Who Are You (I Really Wanna Know)? Detecting Audio DeepFakes Through Vocal Tract Reconstruction. USENIX Security 2022. Deepfake detection using vocal tract reconstruction [pdf]

2. Privacy Papers

2.1 Training Data

2.1.1 Data Recovery

  1. Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning. USENIX Security 2020. Online Learning. Model updates [pdf]

  2. Extracting Training Data from Large Language Models. USENIX Security 2021. Membership inference attack. GPT-2 [pdf]

  3. Analyzing Information Leakage of Updates to Natural Language Models. ACM CCS 2020. Data leakage from model updates [pdf]

  4. TableGAN-MCA: Evaluating Membership Collisions of GAN-Synthesized Tabular Data Releasing. ACM CCS 2021. Membership collision in GAN [pdf]

  5. DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation. ACM CCS 2021. DP to train a privacy-preserving GAN [pdf]

  6. Property Inference Attacks Against GANs. NDSS 2022. Property Inference Attacks Against GAN [pdf] [code]

  7. MIRROR: Model Inversion for Deep Learning Network with High Fidelity. NDSS 2022. Model inversion attack using GAN [pdf] [code]

2.1.2 Membership Inference Attack

  1. Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference. USENIX Security 2020. White-box Setting [pdf]

  2. Systematic Evaluation of Privacy Risks of Machine Learning Models. USENIX Security 2020. Metric-based Membership inference Attack Method. Define Privacy Risk Score [pdf] [code]

  3. Practical Blind Membership Inference Attack via Differential Comparisons. NDSS 2021. Uses non-member data to replace the shadow model [pdf] [code]

  4. GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models. ACM CCS 2020. Membership inference attack in Generative model. Member has small reconstruction error [pdf]

  5. Quantifying and Mitigating Privacy Risks of Contrastive Learning. ACM CCS 2021. Membership inference attack. Property inference attack. Contrastive learning in classification task [pdf] [code]

  6. Membership Inference Attacks Against Recommender Systems. ACM CCS 2021. Recommender System [pdf] [code]

  7. EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning. ACM CCS 2021. Contrastive learning in pre-trained models. Augmented views of members have higher similarity [pdf] [code]

  8. Auditing Membership Leakages of Multi-Exit Networks. ACM CCS 2022. Membership inference attack in multi-exit networks [pdf]

  9. Membership Inference Attacks by Exploiting Loss Trajectory. ACM CCS 2022. Membership inference attack, knowledge distillation [pdf]

  10. On the Privacy Risks of Cell-Based NAS Architectures. ACM CCS 2022. Membership inference attack in NAS [pdf]

  11. Membership Inference Attacks and Defenses in Neural Network Pruning. USENIX Security 2022. Membership inference attack in Neural Network Pruning [pdf]

  12. Mitigating Membership Inference Attacks by Self-Distillation Through a Novel Ensemble Architecture. USENIX Security 2022. Membership inference defense by ensemble [pdf]

  13. Enhanced Membership Inference Attacks against Machine Learning Models. USENIX Security 2022. Membership inference attack with hypothesis testing [pdf] [code]

  14. Membership Inference Attacks and Generalization: A Causal Perspective. ACM CCS 2022. Membership inference attack with causal reasoning [pdf]
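Entry 2 above defines metric-based attacks; below is a minimal sketch of the simplest one, thresholding the per-sample loss (members tend to have lower loss because the model fit them during training). Calibrating the threshold on shadow non-member losses is an illustrative assumption.

```python
import numpy as np

def loss_threshold_mia(losses, threshold):
    """Guess 'member' (1) for samples whose loss is below the threshold."""
    return (np.asarray(losses) < threshold).astype(int)

# Calibrate the threshold on losses of known non-members (shadow data).
shadow_nonmember_losses = np.array([2.1, 1.8, 2.5, 1.9, 2.2])
threshold = np.percentile(shadow_nonmember_losses, 10)
print(loss_threshold_mia([0.05, 2.3, 0.4], threshold))  # -> [1 0 1]
```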

2.1.3 Information Leakage in Distributed ML System

  1. Label Inference Attacks Against Vertical Federated Learning. USENIX Security 2022. Label Leakage. Federated Learning [pdf] [code]

  2. The Value of Collaboration in Convex Machine Learning with Differential Privacy. IEEE S&P 2020. DP as Defense [pdf]

  3. Leakage of Dataset Properties in Multi-Party Machine Learning. USENIX Security 2021. Dataset Properties Leakage [pdf]

  4. Unleashing the Tiger: Inference Attacks on Split Learning. ACM CCS 2021. Split learning. Feature-space hijacking attack [pdf] [code]

  5. Local and Central Differential Privacy for Robustness and Privacy in Federated Learning. NDSS 2022. DP in federated learning [pdf]

2.1.4 Information Leakage in Embedding

  1. Privacy Risks of General-Purpose Language Models. IEEE S&P 2020. Pretrained Language Model [pdf]

  2. Information Leakage in Embedding Models. ACM CCS 2020. Exact Word Recovery. Attribute inference. Membership inference [pdf]

  3. Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs. ACM CCS 2021. Infers sensitive attributes from classification outputs [pdf] [code]

2.1.5 Graph Leakage

  1. Stealing Links from Graph Neural Networks. USENIX Security 2021. Infers graph links [pdf]

  2. Inference Attacks Against Graph Neural Networks. USENIX Security 2022. Property inference: number of nodes. Subgraph inference. Graph reconstruction [pdf] [code]

  3. LinkTeller: Recovering Private Edges from Graph Neural Networks via Influence Analysis. IEEE S&P 2022. Use node connection influence to infer graph edges [pdf]

  4. Locally Private Graph Neural Networks. IEEE S&P 2022. LDP as defense for node privacy [pdf] [code]

  5. Finding MNEMON: Reviving Memories of Node Embeddings. ACM CCS 2022. Graph recovery attack through node embedding [pdf]

  6. Group Property Inference Attacks Against Graph Neural Networks. ACM CCS 2022. Group Property inference attack on GNN [pdf]

  7. LPGNet: Link Private Graph Networks for Node Classification. ACM CCS 2022. DP to build private GNN [pdf]
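The first entry in this subsection exploits a simple signal, sketched below: nodes that are linked in the training graph tend to receive more similar posteriors from the GNN, so high posterior similarity is evidence of an edge. The cosine threshold here is an illustrative assumption.

```python
import numpy as np

def guess_edge(post_u, post_v, threshold=0.9):
    """Predict an edge between u and v if their GNN posteriors are similar."""
    cos = post_u @ post_v / (np.linalg.norm(post_u) * np.linalg.norm(post_v))
    return cos > threshold
```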

2.1.6 Unlearning

  1. Machine Unlearning. IEEE S&P 2020. Shard and isolate the training dataset [pdf] [code]

  2. When Machine Unlearning Jeopardizes Privacy. ACM CCS 2021. Membership inference attack in unlearning setting [pdf] [code]

  3. Graph Unlearning. ACM CCS 2022. Graph Unlearning [pdf] [code]

  4. On the Necessity of Auditable Algorithmic Definitions for Machine Unlearning. ACM CCS 2022. Auditable Unlearning [pdf]
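The "shard and isolate" idea from the first entry above can be sketched in a few lines: train one sub-model per disjoint shard so that forgetting a sample only requires retraining its shard. `train` is a hypothetical fitting routine; prediction would aggregate the shard models' votes.

```python
import numpy as np

def fit_shards(X, y, n_shards, train):
    """Split the data into disjoint shards and train one model per shard."""
    shards = np.array_split(np.arange(len(X)), n_shards)
    models = [train(X[idx], y[idx]) for idx in shards]
    return shards, models

def unlearn(X, y, shards, models, sample_idx, train):
    """Retrain only the shard that contained the sample to be forgotten."""
    for s, idx in enumerate(shards):
        if sample_idx in idx:
            shards[s] = idx[idx != sample_idx]
            models[s] = train(X[shards[s]], y[shards[s]])
    return shards, models
```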

2.1.7 Attribute Inference Attack

  1. Are Attribute Inference Attacks Just Imputation?. ACM CCS 2022. Compares attribute inference attacks with data imputation [pdf] [code]

  2. Feature Inference Attack on Shapley Values. ACM CCS 2022. Attribute Inference Attack using shapley values [pdf]

  3. QuerySnout: Automating the Discovery of Attribute Inference Attacks against Query-Based Systems. ACM CCS 2022. Automated discovery of attribute inference attacks [pdf]

2.2 Model

2.2.1 Model Extraction

  1. Exploring Connections Between Active Learning and Model Extraction. USENIX Security 2020. Active Learning [pdf]

  2. High Accuracy and High Fidelity Extraction of Neural Networks. USENIX Security 2020. Fidelity [pdf]

  3. DRMI: A Dataset Reduction Technology based on Mutual Information for Black-box Attacks. USENIX Security 2021. Query data selection method to reduce the number of queries [pdf]

  4. Entangled Watermarks as a Defense against Model Extraction. USENIX Security 2021. Backdoor as watermark against model extraction [pdf]

  5. CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples. NDSS 2020. Adversarial Example to strengthen model stealing [pdf]

  6. Teacher Model Fingerprinting Attacks Against Transfer Learning. USENIX Security 2022. Teacher model fingerprinting [pdf]

  7. StolenEncoder: Stealing Pre-trained Encoders in Self-supervised Learning. ACM CCS 2022. Model stealing attack on encoders [pdf]
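A minimal sketch of the basic extraction loop behind these entries: query the black-box victim on chosen inputs and fit a local surrogate to its answers. `victim_api`, the random query set, and the logistic surrogate are all illustrative assumptions; real attacks pick queries far more carefully.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract(victim_api, n_queries=1000, dim=20, seed=0):
    """Train a surrogate that mimics the victim's input-output behavior."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n_queries, dim))     # naive random queries
    y = np.array([victim_api(x) for x in X])  # victim's predicted labels
    return LogisticRegression(max_iter=1000).fit(X, y)
```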

2.2.2 Watermarking Model Outputs

  1. Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding. IEEE S&P 2021. Encode secret message into LM [pdf]

2.2.3 Model Ownership

  1. Proof-of-Learning: Definitions and Practice. IEEE S&P 2021. Proves ownership of model parameters [pdf]

  2. SoK: How Robust is Image Classification Deep Neural Network Watermarking?. IEEE S&P 2022. Survey of DNN watermarking [pdf]

  3. Copy, Right? A Testing Framework for Copyright Protection of Deep Learning Models. IEEE S&P 2022. Calculate model similarity by generating test examples [pdf] [code]

  4. SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders. ACM CCS 2022. Watermarking in encoder [pdf]

2.3 User Related Privacy

2.3.1 Image

  1. Fawkes: Protecting Privacy against Unauthorized Deep Learning Models. USENIX Security 2020. Protect Face Privacy [pdf] [code]

  2. Automatically Detecting Bystanders in Photos to Reduce Privacy Risks. IEEE S&P 2020. Detecting bystanders [pdf]

  3. Characterizing and Detecting Non-Consensual Photo Sharing on Social Networks. IEEE S&P 2020. Detects people shared in photos without their consent [pdf]

2.4 MPC ML Protocols

2.4.1 3PC

  1. SWIFT: Super-fast and Robust Privacy-Preserving Machine Learning. USENIX Security 2021. [pdf]

  2. BLAZE: Blazing Fast Privacy-Preserving Machine Learning. NDSS 2020. [pdf]

2.4.2 4PC

  1. Trident: Efficient 4PC Framework for Privacy Preserving Machine Learning. NDSS 2020. [pdf]

2.4.3 SMPC

  1. Cerebro: A Platform for Multi-Party Cryptographic Collaborative Learning. USENIX Security 2021. [pdf] [code]

2.5 Platform

2.5.1 Inference Attack Measurement

  1. ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models. USENIX Security 2022. Membership inference attack. Model inversion. Attribute inference. Model stealing [pdf]

2.6 Differential Privacy

2.6.1 Tree Model

  1. Federated Boosted Decision Trees with Differential Privacy. ACM CCS 2022. Federated Learning with Tree Model in DP [pdf]
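As background for the DP entries, here is a minimal sketch of the Laplace mechanism: noise scaled to sensitivity/epsilon makes a numeric query epsilon-differentially private. The concrete values are illustrative.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value with epsilon-DP by adding Laplace noise."""
    rng = rng or np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

noisy_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
```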

Contributing

This list is mainly maintained by Ping He from NESA Lab.

Contributions to this repository are very welcome!

Markdown format

**Paper Name**. Conference Year. `Keywords` [[pdf](pdf_link)] [[code](code_link)]
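For example, the first entry of Section 1.1.1 would be written as (keeping the placeholder links):

**Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited Queries**. USENIX Security 2020. `Transferability + Query. Black-box Attack` [[pdf](pdf_link)] [[code](code_link)]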

Licenses

CC0

To the extent possible under law, gnipping has waived all copyright and related or neighboring rights to this repository.
