Security and Privacy Recap
Occasionally, we'd like to wrap up in one newsletter a topic that we've covered in a mini-series. These collections will help you navigate the articles and fill in any gaps if you missed something.
Security and Privacy in ML Models
In this series of TheSequence Edge, we've covered different topics related to security in machine learning models. Security and privacy are aspects of machine learning solutions that are often ignored until they become a problem. In many contexts, nobody disputes the importance of preserving the privacy of training datasets. However, it is important to realize that introducing privacy methods very often creates friction in the learning process of machine learning models.
The friction between privacy and learning is conceptually easy to understand. We shouldn't expect a model trained on a plain, unprotected dataset to perform identically to a model trained using techniques such as differential privacy or secure multi-party computation. Those techniques require very specific architectures in order to enforce privacy without degrading the performance of the target machine learning model. Furthermore, frameworks for private machine learning are still at a very early stage and require a high degree of expertise to be applied correctly. From a practical standpoint, the only way to build effective private machine learning solutions is to treat privacy as a first-class component of your neural network architecture from day one.
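To make that friction concrete, here is a minimal sketch of the idea behind differentially private training in the spirit of DP-SGD (the approach implemented by frameworks like TensorFlow Privacy): per-example gradients are clipped and noised before each update, so the private model inevitably drifts from its plainly trained counterpart. The toy dataset, constants, and function names below are illustrative assumptions, not code from any framework covered in the series.

```python
# Minimal DP-SGD-style sketch on a one-parameter linear regression.
# All values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 3x + noise
X = rng.normal(size=(256, 1))
y = 3.0 * X[:, 0] + 0.1 * rng.normal(size=256)

def dp_sgd(clip_norm=1.0, noise_multiplier=4.0, lr=0.1, steps=200):
    """Gradient descent with per-example clipping and Gaussian noise."""
    w = 0.0
    n = len(X)
    for _ in range(steps):
        # Per-example gradients of the squared error w.r.t. w
        grads = 2 * (w * X[:, 0] - y) * X[:, 0]
        # Clip each gradient to bound its contribution (sensitivity)
        grads = np.clip(grads, -clip_norm, clip_norm)
        # Add Gaussian noise calibrated to the clipping norm
        noisy_mean = (grads.sum() + rng.normal(scale=noise_multiplier * clip_norm)) / n
        w -= lr * noisy_mean
    return w

# With a huge clip norm and zero noise this reduces to plain gradient descent.
print("plain estimate:", dp_sgd(clip_norm=1e9, noise_multiplier=0.0))
print("DP estimate   :", dp_sgd(clip_norm=1.0, noise_multiplier=4.0))
```

The clipped, noisy updates are exactly where the utility gap comes from: stronger noise buys stronger privacy but pushes the estimate further from what plain training recovers, and the gap shrinks only as the dataset grows.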
Edge#30 (read without subscription): privacy-preserving machine learning; Google's PATE method for scalable private machine learning; the PySyft open-source framework for private deep learning.
Edge#31: the concept of differential privacy; Apple's research on differential privacy at scale; the TensorFlow Privacy framework.
Edge#32: the concept of adversarial attacks; OpenAI's metric for robustness against adversarial attacks; IBM's Adversarial Robustness Toolbox for protecting neural networks against security attacks.
Edge#33: the concept of secure multi-party computation (sMPC); Microsoft's CrypTFlow, an architecture for using sMPC with TensorFlow; Facebook's CrypTen framework for sMPC implementations in PyTorch (see the secret-sharing sketch after this list).
Edge#34: the concept of homomorphic encryption; Intel's nGraph-HE, which shows how neural networks can operate on homomorphically encrypted data; and Microsoft's SEAL homomorphic encryption library.
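The sMPC issue (Edge#33) is easiest to appreciate with a toy example of additive secret sharing, one of the building blocks behind frameworks like CrypTen and CrypTFlow. The sketch below is a simplified illustration under assumed parameters (a prime modulus and three parties), not actual CrypTen or CrypTFlow code: a value is split into random shares that individually reveal nothing, yet shared values can be added share-wise and the result reconstructed.

```python
# Toy additive secret sharing: illustrative only, parameters are assumptions.
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret, n_parties=3):
    """Split `secret` into n random additive shares that sum to it mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine all shares to recover the secret."""
    return sum(shares) % PRIME

a_shares = share(42)
b_shares = share(100)

# Each party adds its local shares; no single party ever sees 42 or 100.
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
print("reconstructed sum:", reconstruct(sum_shares))  # 142
```

Addition works share-wise for free; multiplying shared values requires extra machinery (for example, Beaver triples), which is precisely where dedicated sMPC frameworks earn their keep.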