Transformer Architectures Recap
As requested by many of our readers, before diving deeper into Self-Supervised Learning, we put together a recap of the Transformer Architectures series. As the proverb says, repetition is the mother of learning ;) Let's start with a brief introduction to the category as a whole:
What are Transformers?
Transformer architectures are considered by many to be the most important development in deep learning in recent years. These architectures specialize in processing sequential data, which is central to domains such as natural language processing (NLP) and computer vision. Before the inception of transformers, that space was dominated by recurrent neural network (RNN) models, such as long short-term memory (LSTM) networks. Transformers challenged the conventional wisdom behind RNN architectures by not relying on processing the input in order; instead, they use attention mechanisms that provide context for any position in the sequence. Transformers have been the cornerstone of groundbreaking models such as Google's BERT and OpenAI's GPT-3, which set new milestones in NLP. In recent months, transformers have also been making important inroads into other areas such as computer vision and time-series analysis.
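To make the attention idea concrete, here is a minimal sketch of scaled dot-product self-attention in NumPy. The function name, matrix shapes, and the toy inputs are illustrative assumptions for this recap, not code from any of the models mentioned above.

```python
# Minimal sketch of scaled dot-product self-attention (illustrative only).
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_q, w_k, w_v: (d_model, d_k) projection matrices."""
    q = x @ w_q  # queries
    k = x @ w_k  # keys
    v = x @ w_v  # values
    # Similarity of every position to every other position, scaled by sqrt(d_k).
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Softmax over positions turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output is a context-weighted mix of all positions in the sequence.
    return weights @ v

# Toy usage: 4 tokens with 8-dimensional embeddings (hypothetical sizes).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # (4, 8)
```

Note how no position is privileged: every output row can draw on any other position, which is what frees transformers from the strictly ordered processing of RNNs.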
Edge#109: Transformer Architectures - the technique that enabled several major breakthroughs in deep learning; Google's Attention paper that started the transformer revolution; +Tensor2Tensor.
Edge#111: The concept of Attention; Google Switch Transformer - the biggest transformer model ever built; +Hugging Face.
Edge#112 is a deep dive into how DeepMind's compressive transformer improves long-term memory in transformer architectures.
Edge#113: the architecture of Google BERT; TAPAS - a model that extends BERT's architecture to work with tabular datasets; +AutoNLP.
Edge#114 is a deep dive into AI2's Longformer - a transformer model for long documents (read it without a subscription).
Edge#115: the concept of the most famous transformer ever built - GPT-3; two mechanisms from FAIR for improving the current generation of transformer models; +OpenAI API.
Edge#117: Transformers and Computer Vision; ImageGPT - an adaptation of the GPT model to computer vision scenarios; +Hugging Face library.
Edge#118 is a deep dive into DeepMind's Perceiver and Perceiver IO.
Edge#121: Transformers and Time Series; Google Research's paper about temporal fusion transformers; +GluonTS.
Edge#122 is a deep dive into Unified VLP - a transformer model for visual question answering (VQA).
Next week we will continue with Self-Supervised Learning. Stay tuned!