Katharine Jarmul of DropoutLabs discusses security and privacy concerns as they relate to machine learning. Host Justin Beyer spoke with Jarmul about attacks that can be leveraged against data pipelines and machine learning models; attack types (adversarial examples, model inference, and deanonymization) and how they can be used to manipulate model outcomes; the dangers of Machine Learning as a Service (MLaaS) platforms; privacy concerns surrounding the collection and use of data; securing data and APIs; and privacy-preserving machine learning, including federated learning and encrypted learning through techniques such as homomorphic encryption and secure multi-party computation.
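As a rough illustration of the secure multi-party computation idea mentioned above (a toy sketch for readers, not code from the episode; the prime modulus and party count are arbitrary assumptions), additive secret sharing splits a value into random shares so that no single party learns anything, yet parties can still compute on the shared values:

```python
import random

P = 2**61 - 1  # large prime modulus (arbitrary choice for this sketch)

def share(secret, n_parties=3):
    """Split `secret` into additive shares mod P; any subset of fewer
    than n_parties shares is uniformly random and reveals nothing."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine all shares to recover the secret."""
    return sum(shares) % P

a_shares = share(42)
b_shares = share(100)
# Each party adds its own shares locally; the sum 42 + 100 is computed
# without any party ever seeing either input in the clear.
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 142
```

Real SMPC frameworks add secure multiplication, fixed-point encoding, and communication between parties on top of this same share-and-reconstruct core.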
Show Notes
Related Links
- Proposals for model vulnerability and security – Patrick Hall
- BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
- Mitigating Deep Learning Vulnerabilities from Adversarial Examples Attack in the Cybersecurity Domain
- Privacy Preserving Machine Learning: Threats and Solutions
- Turtle Classified as Rifle
- Wild Patterns
- Google Adversarial Example Attack Toolkit (Cleverhans)
- Florian Tramèr: Stealing Machine Learning Models via Prediction APIs (USENIX)
- Netflix Data De-Anonymization Attack
- Episode 383 – Neil Madden On Securing Your API
- Episode 286 – Katie Malone Intro to Machine Learning
- Episode 342 – István Lam on Privacy by Design with GDPR
- Episode 315 – Jeroen Janssens on Tools for Data Science
SE Radio theme: “Broken Reality” by Kevin MacLeod (incompetech.com — Licensed under Creative Commons: By Attribution 3.0)