🏆 Edge#10: Feature Selection and Feature Extraction

Best way to build and reinforce your knowledge about machine learning and AI

In this issue:

  • we explain the difference between feature extraction and feature selection;

  • we explore a feature visualization method known as Activation Atlases;

  • we review the Hopsworks feature store platform;

  • we introduce The Quiz – just click the picture and answer two simple questions to test how well you know the topics covered in Edge#10.

Enjoy the learning and check your knowledge!


💡 ML Concept of the Day: Feature Selection and Feature Extraction

Machine learning is the process of creating programs from datasets. To do that, machine learning programs need to model the dataset using an abstraction that enables knowledge representation. This is the role of features. In machine learning, a feature is a measurable property of a given dataset that is relevant to the training process. Plain and simple, the quality of the features in a dataset directly influences the quality of the training process. To arrive at the right features for a dataset, machine learning practitioners rely on two fundamental techniques: feature selection and feature extraction.

The importance of feature extraction and feature selection is such that entire segments of research are dedicated to new techniques in these areas. Both methods fall under the umbrella of dimensionality reduction techniques, which inspect all the possible features in a dataset and keep the ones that are relevant to the learning task. However, the two methods tackle this problem in different ways.

Feature extraction is an omnipresent task in most machine learning scenarios. Conceptually, feature extraction translates a raw dataset into the inputs of a machine learning model. This process has a couple of major challenges. For starters, there is rarely a direct mapping between raw data and relevant features; moreover, the number of features produced from a raw dataset can become unmanageable from a computational standpoint. These challenges make feature extraction an iterative process in which features are aggregated and transformed into new features that capture the essence of the dataset.
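As a concrete illustration, here is a minimal feature extraction sketch using scikit-learn's PCA. The dataset and the number of components are illustrative assumptions; the point is the pattern of deriving a small set of new features from many raw columns.

```python
# Minimal feature extraction sketch (illustrative only).
# PCA transforms raw, possibly correlated columns into a smaller set of
# derived features that capture most of the dataset's variance.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
raw = rng.normal(size=(500, 20))              # stand-in for a raw dataset: 500 rows, 20 columns

scaled = StandardScaler().fit_transform(raw)  # put every column on a comparable scale
pca = PCA(n_components=5)                     # extract 5 new features from 20 raw columns
features = pca.fit_transform(scaled)

print(features.shape)                         # (500, 5)
print(pca.explained_variance_ratio_)          # variance captured by each extracted feature
```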

Compared to feature extraction, feature selection is a much cleaner task. Feature selection takes a set of possible features as input and produces another set with the most relevant features while discarding the rest. Feature selection focuses on eliminating redundancy among the features in a given dataset.
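For contrast, here is a minimal feature selection sketch, again using scikit-learn; the synthetic dataset and the choice of k are illustrative assumptions. Unlike extraction, selection returns a subset of the original columns rather than newly derived ones.

```python
# Minimal feature selection sketch (illustrative only).
# Selection keeps a subset of the original columns and discards the
# redundant or irrelevant ones.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif

# Synthetic dataset: 20 columns, only 5 of which carry real signal.
X, y = make_classification(n_samples=500, n_features=20,
                           n_informative=5, n_redundant=5, random_state=42)

selector = SelectKBest(score_func=f_classif, k=5)  # keep the 5 most relevant columns
X_selected = selector.fit_transform(X, y)

print(X_selected.shape)                    # (500, 5)
print(selector.get_support(indices=True))  # indices of the columns that survived
```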

So now you know the difference between feature extraction and feature selection. In most machine learning problems, feature extraction is almost always needed, while feature selection should be applied when you suspect redundancy or irrelevance among the features.


🔎 ML Research You Should Know: Feature Visualization with Activation Atlases

In Exploring Neural Networks with Activation Atlases, researchers from OpenAI and Google Brain proposed a feature visualization technique for understanding how image classifiers form concepts.

The objective: The OpenAI paper introduces activation atlases, a visualization method for understanding the relevance of different features in a neural network.

Why is it so important: Visualizations are the most effective way of understanding the relevance of different features in a neural network. However, finding effective visualizations for neural networks with millions of neurons is far from an easy endeavor. Activation atlases are one of the techniques that have proven effective for image classification models.

Diving deeper: Feature visualization is a key technique in real-world deep learning scenarios. Visualizing features is the equivalent of “seeing through the eyes of a neural network” as we are trying to understand how neural networks form concepts and make decisions. It’s not a surprise that feature visualization has become an important area of research in the deep learning space.  

Visualizing features is conceptually trivial but incredibly hard to implement from a technical standpoint. The obvious approach relies on visualizing individual neurons, but that becomes incomprehensible in a complex neural network with millions of neurons. Alternative approaches focus on visualizing the hidden layers of a network, which are easier to understand; these visualizations are limited in that they don't provide any information about the activity within the different hidden layers.

Activation atlases were proposed by OpenAI in 2019 as a new visualization method for representing the interactions between neurons in a neural network. The idea behind activation atlases is to visualize the activations within a neural network for a given input. Activation atlases correlate every input image with the features activated in the target neural network. This approach provides a global view of the training dataset, allowing us to draw inferences between the input images and the activated features. It is important to note that, in activation atlases, the visualizations focus on the averaged feature activations in a neural network. The resulting visualization is a 2D map that enables the exploration of activated features without becoming overwhelming, as in the case of individual neurons.

Image credit: the original research paper

From a visualization standpoint, activation atlases rely on a technique known as activation grids, which create a vector of activated features for different parts of an input record. Activation atlases expand the concept of activation grids to training datasets with millions of records, by creating a 2D map of the activated features and their correlation with the input dataset.  
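To make the pipeline concrete, here is a minimal sketch of the atlas construction under stated assumptions: random vectors stand in for a real classifier's activations, and PCA replaces the UMAP layout used in the paper, so the example stays self-contained.

```python
# Sketch of the activation atlas pipeline (illustrative assumptions:
# random activations instead of a trained network, PCA instead of UMAP).
import numpy as np
from sklearn.decomposition import PCA

n_images, n_positions, n_channels = 1000, 49, 512
# Stand-in for one hidden layer's spatially resolved activations:
# one 512-d activation vector per spatial position per input image.
acts = np.random.default_rng(0).normal(size=(n_images, n_positions, n_channels))

# 1. Average the activation grid over spatial positions (one vector per image).
avg_acts = acts.mean(axis=1)                          # shape (1000, 512)

# 2. Project the averaged activations onto a 2D layout.
coords = PCA(n_components=2).fit_transform(avg_acts)

# 3. Bin the layout into a grid and average the activations in each cell,
#    giving one representative activation vector per atlas cell.
grid = 20
span = np.ptp(coords, axis=0) + 1e-9
cells = np.clip(((coords - coords.min(axis=0)) / span * grid).astype(int), 0, grid - 1)

atlas = np.zeros((grid, grid, n_channels))
counts = np.zeros((grid, grid, 1))
for (gx, gy), vec in zip(cells, avg_acts):
    atlas[gx, gy] += vec
    counts[gx, gy] += 1
atlas /= np.maximum(counts, 1)
# In the real method, each cell's averaged vector is then rendered with
# feature visualization (optimizing an image that maximally excites it).
```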

Activation atlases can help us understand how deep learning models form concepts. Using this technique, we can visually determine which features are developed in each hidden layer and correlate them with the input dataset. A side benefit of activation atlases is that they don't only provide a visual interpretation of a model: they can also help reveal errors, misclassifications, and even potential security vulnerabilities in a neural network. As research in feature visualization continues to grow, we are likely to see new ideas based on the concepts of activation atlases. For starters, OpenAI released a series of activation atlas demos in its GitHub repository, allowing data scientists to play with and extend this technique in their own deep learning models.


🤖 ML Technology to Follow: Hopsworks is a Feature Store for Your Deep Learning Solution

Why should I know about this: Hopsworks Feature Store enables the management and maintenance of features in a deep learning infrastructure.

What is it: Managing features for a single machine learning model can be overwhelming, so imagine the challenge in an infrastructure containing dozens or hundreds of models. A feature store is a common building block in modern machine learning architectures. Conceptually, a feature store is a centralized catalog of the features used in machine learning pipelines. Feature stores are a relatively nascent concept in the machine learning space and, consequently, there are not many reliable platforms that have been battle-tested in real-world implementations. Hopsworks' feature store is an open-source platform with a loyal community that has been adopted by several major companies.

Hopsworks' feature store is a key component of its machine learning platform. The feature store enables the management and reusability of features in machine learning pipelines. From a functional standpoint, you can think of Hopsworks' feature store as a centralized catalog of features that can be discovered, used, and maintained across different machine learning models. The current release includes a series of capabilities that are very relevant to machine learning infrastructures:

  • Feature Reusability: Hopsworks' feature store enables the reusability of features across different machine learning models.

  • Feature Discovery: Using the feature store, data scientists can discover and evaluate new features that can be relevant to their machine learning models.

  • Feature Analysis: Hopsworks' feature store allows data scientists to analyze different features as well as evaluate their distributions and correlations over time.

Image credit: GitHub

Integration is one of the key capabilities of the Hopsworks feature store. The platform integrates with different cloud platforms, such as AWS, Azure, and Databricks, so that it can be easily enabled in machine learning programs. The feature store also provides connectors to different data stores so that features can be engineered and reevaluated over time. Finally, the Hopsworks feature store provides a seamless programming model for incorporating it into deep learning programs developed with different frameworks, such as TensorFlow or PyTorch.
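To give a flavor of that programming model, here is a hedged sketch of registering engineered features through the hsfs Python client. The host, project, API key, and feature names are placeholders, and the exact method signatures may vary across hsfs versions, so treat this as an outline rather than the definitive API.

```python
# Hedged sketch: registering engineered features in the Hopsworks feature
# store via the hsfs Python client. Host, project, key, and feature names
# are placeholders; method names may differ across hsfs versions.
import hsfs
import pandas as pd

connection = hsfs.connection(
    host="my.hopsworks.host",     # placeholder Hopsworks instance
    project="my_project",         # placeholder project name
    api_key_value="MY_API_KEY",   # placeholder credential
)
fs = connection.get_feature_store()

# Engineered features for a hypothetical churn model.
features = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "avg_session_minutes": [12.5, 3.2, 44.0],
    "purchases_last_30d": [4, 0, 9],
})

fg = fs.create_feature_group(
    name="churn_features",
    version=1,
    primary_key=["customer_id"],
    description="Session and purchase aggregates per customer",
)
fg.insert(features)  # materialize the group so other pipelines can reuse it
```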

How can I use it: The Hopsworks feature store is included as part of the Hopsworks platform and is open-sourced at https://github.com/logicalclocks/hopsworks


🧠 The Quiz

Now, let’s check your knowledge. Please click the image below or go to this Google Form.

That was fun! Thank you. See you on Sunday 😉


TheSequence is a summary of groundbreaking ML research papers, engaging explanations of ML concepts, and exploration of new ML frameworks and platforms. TheSequence keeps you up to date with the news, trends, and technology developments in the AI field.

5 minutes of your time, 3 times a week – you will steadily become knowledgeable about everything happening in the AI space. Make it a gift for those who can benefit from it.