CS 562: Advanced Topics in Security, Privacy and Machine Learning (Fall 2022)

Instructor
Bo Li, lbo@illinois.edu, 4310 Siebel Center

Lectures
1310 Digital Computer Laboratory

Teaching Assistant
Zijian Huang, zijianh4@illinois.edu

Office Hours
Bo Li: After class each day
Zijian Huang: Friday 4-5 pm CDT, Zoom
Forums
Canvas link

Course Overview

This course first introduces topics in machine learning, security, privacy, fairness, robust learning, and game theory. From a research perspective, we will then discuss the fundamental contributions to each topic, potential extensions, and the corresponding open challenges, with a focus on algorithmic and systems perspectives. Students will come to understand different machine learning algorithms, analyze their implementations and security vulnerabilities through a series of readings and projects, and develop the ability to conduct research projects on related topics.
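For a concrete sense of the evasion attacks covered in the first weeks, the sketch below implements the fast gradient sign method (FGSM) from the "Explaining and harnessing adversarial examples" reading. It is a minimal, illustrative example rather than course-provided code; the model, loss function, and epsilon value are hypothetical placeholders.

    # Minimal FGSM sketch (Goodfellow et al., "Explaining and Harnessing Adversarial
    # Examples"). Illustrative only: model, loss_fn, and epsilon are placeholders.
    import torch

    def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
        """Return an adversarial version of input batch x for label batch y."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)            # loss on the clean input
        loss.backward()                            # gradient of the loss w.r.t. the input
        # Take one epsilon-sized step in the direction that increases the loss.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()      # keep pixels in the valid [0, 1] range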

Prerequisites: CS 446 Machine Learning and CS 461 Computer Security I (or CS 463 Computer Security II)

Please contact the instructor if you have questions regarding the material or concerns about whether your background is suitable for the course.

Course Schedule

The following table outlines the schedule for the course. We will update it as the semester progresses. Please refer to the class syllabus for more details.

Date Lecture Readings Slides
8/23 Course Overview Background ideas about general adversarial machine learning, including the fundamental causes of the problem and current research status Slides
8/25 Evasion Attacks Against Machine Learning Models (Against Classifiers) Intriguing properties of neural networks
Explaining and harnessing adversarial examples
Towards Evaluating the Robustness of Neural Networks
Slides
8/30 Evasion Attacks Against Machine Learning Models (Non-traditional Attacks) Generating Adversarial Examples with Adversarial Networks
Spatially Transformed Adversarial Examples
Robust Physical-World Attacks on Deep Learning Models
Wasserstein Adversarial Examples via Projected Sinkhorn Iterations
Slides 1 Slides 2
9/1 Evasion Attacks Against Machine Learning Models (Against Detectors/Generative Models/RL) Houdini: Fooling Deep Structured Prediction Models
Adversarial Examples for Semantic Segmentation and Object Detection
Adversarial Examples for Generative Models
Adversarial Attacks on Neural Network Policies
Slides 1 Slides 2
9/6 Evasion Attacks Against Machine Learning Models (Blackbox Attacks) Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples
Exploring the Space of Black-box Attacks on Deep Neural Networks
Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
Slides 1 Slides 2
9/8 Guest Lecture C3.ai DTI Colloquium on Digital Transformation Science - Fall 2022
9/13 Detection Against Adversarial Attacks Pre-process input: Exploring the Space of Adversarial Images
Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality
SafetyNet: Detecting and Rejecting Adversarial Examples Robustly
Slides 1 Slides 2
9/15 Defenses Against Adversarial Attacks (Empirical) Distillation as a Defense to Adversarial Perturbations against DNNs
Towards Deep Learning Models Resistant to Adversarial Attacks
PeerNets: Exploiting Peer Wisdom Against Adversarial Attacks
Characterizing Adversarial Examples Based on Spatial Consistency Information for Semantic Segmentation
Fortified Networks: Improving the Robustness of Deep Networks by Modeling the Manifold of Hidden Representations
Slides
9/20 Defenses Against Adversarial Attacks (Theoretic) Certified Defenses Against Adversarial Examples
Provable Defenses Against Adversarial Examples via the Convex Outer Adversarial Polytope
Certified Adversarial Robustness via Randomized Smoothing
On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models
Slides
9/22 Poisoning Attacks Against Machine Learning Models Optimization-based poisoning attack methods: Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks Slides
9/27 Proposal Report
9/29 Guest Lecture Adversarial, Backdoor and Unlearnable Speaker Bio and Talk Abstract Zoom link
10/4 Poisoning Attacks Analysis Universal Multi-Party Poisoning Attacks
Trojaning Attack on Neural Networks
Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
Data Poisoning Attack against Knowledge Graph Embedding
Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
Slides 1 Slides 2
10/6 Guest Lecture Towards Understanding End-to-end Learning in the Context of Data: Machine Learning Dancing over Semirings and Codd’s Table Speaker Bio and Talk Abstract Slides
10/11 Defenses Against Poisoning Attacks Certified Defenses for Data Poisoning Attacks
Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
CRFL: Certifiably Robust Federated Learning against Backdoor Attacks
Slides 1 Slides 2
10/13 (Robust) Data Valuation Efficient Task-Specific Data Valuation for Nearest Neighbor Algorithms
Towards Efficient Data Valuation Based on the Shapley Value
Understanding Black-box Predictions via Influence Functions
Slides
10/18 Robustness of Graph Neural Networks Semi-supervised classification with graph convolutional networks
Robust Graph Convolutional Networks Against Adversarial Attacks
Batch Virtual Adversarial Training for Graph Convolutional Networks
Topology Attack and Defense for Graph Neural Networks: An Optimization Perspective
Slides 1 Slides 2
10/20 Guest Lecture How Can We Trust a Black-box? A Quest for Scalable and Powerful Neural Network Verifiers Speaker Bio and Talk Abstract Slides
10/25 Beyond Images: Adversarial Attacks on NLP/Audio/Video/Graphs Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples
Adversarial Examples for Evaluating Reading Comprehension Systems
Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition
Adversarial Attacks on Node Embeddings via Graph Poisoning
Adversarial Attack on Graph Structured Data
Slides 1 Slides 2
10/27 Generative Adversarial Networks (Empirical) Generative Adversarial Nets
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
Conditional Generative Adversarial Nets
Video-to-Video Synthesis
Slides 1 Slides 2
11/1 Generative Adversarial Networks (Theoretic) Generalization and equilibrium in generative adversarial nets (GANs)
Do GANs Actually Learn the Distribution?
Theoretical limitations of Encoder-Decoder GAN architectures
Certifying some distributional robustness with principled adversarial training
Slides 1 Slides 2
11/3 Privacy in Machine Learning Models (Attacks) Membership Inference Attacks Against Machine Learning Models
The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets
Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning
Slides 1 Slides 2
11/8 2022 General Election Day (all-campus holiday) - no class
11/10 Differentially Private Machine Learning Models Deep Learning with Differential Privacy
Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
Scalable Private Learning With PATE
Plausible Deniability for Privacy-Preserving Data Synthesis
Slides 1 Slides 2
11/15 Differential Privacy on Graphs Analyzing Graphs with Node Differential Privacy
Generating Synthetic Decentralized Social Graphs with Local Differential Privacy
Detecting Communities under Differential Privacy
Slides 1 Slides 2
11/17 Fairness of Machine Learning Delayed impact of fair machine learning
Fairness without demographics in repeated loss minimization
On Formalizing Fairness in Prediction with Machine Learning
Avoiding Discrimination through Causal Reasoning
Slides 1 Slides 2
Fall Break
11/29 (Part 1) Game Theoretic Analysis for Adversarial Learning Adversarial Learning
Adversarial Classification
Feature Cross-Substitution in Adversarial Classification
Scalable Optimization of Randomized Operational Decisions in Adversarial Classification Settings
Slides 1 Slides 2
11/29 (Part 2) Robust Reinforcement Learning and Improving Learning Robustness with Unlabeled Data (Part 1) Robust Adversarial Reinforcement Learning
Adversarially Robust Policy Learning: Active Construction of Physically-Plausible Perturbations
Inverse Reward Design
Slides
12/1 (Part 1) Robust Reinforcement Learning and Improving Learning Robustness with Unlabeled Data (Part 2) Unlabeled Data Improves Adversarial Robustness
Are Labels Required for Improving Adversarial Robustness?
Slides
12/1 (Part 2) Robustness In Distributed Learning CRFL: Certifiably Robust Federated Learning against Backdoor Attacks
DBA: Distributed Backdoor Attacks Against Federated Learning
Towards Realistic Byzantine-Robust Federated Learning
Slides
12/6 Final Report

Grading

The course will involve one paper presentation, three reading reviews, and a final project. Unless otherwise noted by the instructor, all work in this course is to be completed independently. If you are ever uncertain about how to complete an assignment, you can go to office hours or engage in high-level discussions about the problem with your classmates on the Canvas forum.

Grades will be assigned as follows:

Course Expectations

The expectations for the course are that students will attend every class, do any readings assigned for class, and actively and constructively participate in class discussions. Class participation will be measured by contributions to the discourse both in class, through discussion and questions, and outside of class, through posting and responding on the Canvas forum.

More information about course requirements will be made available leading up to the start of classes.

Ethics Statement

This course will include topics related to computer security and privacy. As part of this investigation, we may cover technologies whose abuse could infringe on the rights of others. As computer scientists, we rely on the ethical use of these technologies. Unethical use includes circumvention of existing security or privacy mechanisms for any purpose, or the dissemination, promotion, or exploitation of vulnerabilities in these services. Any activity outside the letter or spirit of these guidelines will be reported to the proper authorities and may result in dismissal from the class and possibly more severe academic and legal sanctions.

Academic Integrity Policy

The University of Illinois at Urbana-Champaign Student Code should also be considered as a part of this syllabus. Students should pay particular attention to Article 1, Part 4: Academic Integrity. Read the Code at the following URL: http://studentcode.illinois.edu/.

Academic dishonesty may result in a failing grade. Every student is expected to review and abide by the Academic Integrity Policy: http://studentcode.illinois.edu/. Ignorance is not an excuse for any academic dishonesty. It is your responsibility to read this policy to avoid any misunderstanding. Do not hesitate to ask the instructor(s) if you are ever in doubt about what constitutes plagiarism, cheating, or any other breach of academic integrity.

Students with Disabilities

To obtain disability-related academic adjustments and/or auxiliary aids, students with disabilities must contact both the course instructor and Disability Resources and Educational Services (DRES) as soon as possible. To ensure that disability-related concerns are properly addressed from the beginning, students with disabilities who require assistance to participate in this class should contact DRES and see the instructor as soon as possible. If you need accommodations for any sort of disability, please speak to me after class, make an appointment to see me, or see me during my office hours. DRES provides students with academic accommodations, access, and support services. To contact DRES you may visit 1207 S. Oak St., Champaign, call 333-4603 (V/TDD), or e-mail disability@uiuc.edu. Please refer to http://www.disability.illinois.edu/.

Emergency Response Recommendations

Emergency response recommendations can be found at the following website: http://police.illinois.edu/emergency-preparedness/. I encourage you to review this website and the campus building floor plans website within the first 10 days of class: http://police.illinois.edu/emergency-preparedness/building-emergency-action-plans/.

Family Educational Rights and Privacy Act (FERPA)

Any student who has suppressed their directory information pursuant to the Family Educational Rights and Privacy Act (FERPA) should self-identify to the instructor to ensure protection of the privacy of their attendance in this course. See http://registrar.illinois.edu/ferpa for more information on FERPA.

Statement on CS CARES and CS Values and Code of Conduct

All members of the Illinois Computer Science department - faculty, staff, and students - are expected to adhere to the CS Values and Code of Conduct. The CS CARES Committee is available to serve as a resource to help people who are concerned about or experience a potential violation of the Code. If you experience such issues, please contact the CS CARES Committee. The instructors of this course are also available for issues related to this class.