🔐🔏 Security and Privacy Wrap-Up

TheSequence is the best way to build and reinforce your knowledge about machine learning and AI

Occasionally, we’d like to wrap up in one newsletter a topic that we’ve covered as a mini-series. These collections will help you navigate the articles and fill any gaps if you missed something.

💡 Security and Privacy in ML Models

In this series of TheSequence Edge, we’ve covered different topics related to security in machine learning models. Security and privacy are aspects of machine learning solutions that are often ignored until they become a problem. Few would dispute the importance of preserving the privacy of the training datasets behind machine learning models. However, it is important to realize that, very often, introducing privacy methods creates friction in the learning process of those models.

The friction between privacy and learning is conceptually simple to understand. We shouldn’t expect a model trained on a plaintext dataset to perform identically to a model trained using techniques such as differential privacy or secure multi-party computation. Those techniques require specialized architectures to enforce privacy without degrading the performance of the target machine learning model. Furthermore, frameworks for private machine learning are still at a very early stage and require a high degree of expertise to be applied correctly. From a practical standpoint, the only way to build effective private machine learning solutions is to treat privacy as a first-class component of your neural network architecture from day one.
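That privacy–utility friction can be seen even in the simplest setting. Below is a minimal sketch of the Laplace mechanism, one of the basic building blocks of differential privacy: the noise required for a privacy guarantee ε scales inversely with ε, so stronger privacy (smaller ε) means noisier answers. The function names here are illustrative, not taken from TensorFlow Privacy or any other framework mentioned in this series.

```python
import math
import random

def laplace_noise(scale):
    # Sample a Laplace(0, scale) variate via inverse-CDF sampling.
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_mean(values, epsilon, lower, upper):
    """Differentially private mean via the Laplace mechanism.

    Each value is clipped to [lower, upper], so the sensitivity of the
    mean over n values is (upper - lower) / n. The noise scale is
    sensitivity / epsilon: smaller epsilon (stronger privacy) -> more noise.
    """
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / len(clipped)
    sensitivity = (upper - lower) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon)
```

With 1,000 values and ε = 1, the noise is tiny; with ε = 0.01 the same query becomes visibly distorted, which is exactly the friction a privately trained model faces at every gradient step.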

→ Become Premium to read the following Edges and stay up-to-date with the most relevant developments in the ML world.

Edge#30 (read without subscription): privacy-preserving machine learning; Google’s PATE method for scalable private machine learning; the PySyft open-source framework for private deep learning.

Edge#31: the concept of differential privacy; Apple’s research on differential privacy at scale; the TensorFlow Privacy framework.

Edge#32: the concept of adversarial attacks; OpenAI’s metric for robustness against adversarial attacks; IBM’s Adversarial Robustness Toolbox for protecting neural networks against security attacks.

Edge#33: the concept of secure multi-party computation (sMPC); Microsoft’s CrypTFlow, an architecture for using sMPC in TensorFlow; Facebook’s CrypTen framework for sMPC implementations in PyTorch.

Edge#34: the concept of homomorphic encryption; Intel’s nGraph-HE, which shows how neural networks can operate on homomorphically encrypted data; and Microsoft’s SEAL homomorphic encryption library.

By reading TheSequence Edge regularly, you become smarter about ML and AI. TheSequence is trusted by major AI labs and universities around the world.

Join them