👾 Edge#228: How Amazon is Improving BERT-Based Models Used in Alexa

Recently, Amazon Research published three papers about BERT-based models

Sep 22, 2022
On Thursdays, we dive deep into one of the freshest research papers or technology frameworks that is worth your attention. Our goal is to keep you up to date with new developments in AI to complement the concepts we debate in other editions of our newsletter.

💥 What’s New in AI: How Amazon is Improving BERT-Based Models Used in Alexa

BERT has become one of the most iconic machine learning (ML) methods of the last decade. Since the publication of the now-famous paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, BERT has inspired a generation of language models that have revolutionized the field of natural language understanding (NLU). Amazon has been one of the main adopters of BERT-based models, particularly in the architecture powering the Alexa digital assistant. As a result, Amazon Research regularly publishes improvements to BERT-based models that address the large-scale scenarios Alexa requires. The publication of three new papers gives us a glimpse of Amazon Research's latest work in this area. Let's take a quick look at them.
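
To make the discussion concrete, here is a minimal sketch of the masked-language-modeling objective that BERT-based models are pre-trained with. This is our illustration using the public Hugging Face transformers library and the bert-base-uncased checkpoint, not Amazon's production code or the models from the papers:

```python
# Minimal sketch of BERT's masked-language-modeling pre-training task:
# the model predicts the token hidden behind [MASK]. Illustrative only;
# Alexa's production models are much larger and fine-tuned for NLU tasks.
from transformers import pipeline

# The fill-mask pipeline wraps tokenization, inference, and decoding.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Ask the model to fill in the masked token and print the top candidates.
for prediction in fill_mask("Alexa, play some [MASK] music."):
    print(f"{prediction['token_str']:>12}  score={prediction['score']:.3f}")
```

Fine-tuning this same bidirectional encoder on labeled utterances is what turns a generic BERT checkpoint into the kind of intent-classification and slot-filling model a digital assistant relies on.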
