TheSequence
☝️ Edge#48: When More Data and Bigger Models can Hurt Performance

A deep dive into the double descent hypothesis, the subject of one of the most original papers in deep learning optimization

Dec 17, 2020

Recently we introduced a new format, What’s New in AI: a deep dive into one of the freshest research papers or technology frameworks worth your attention. Our goal is to keep you up to date with new developments in AI in a way that complements the concepts we discuss in other editions of our newsletter.
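
As a quick illustration of the double descent effect named in the title (test error falling, spiking near the interpolation threshold, then falling again as capacity keeps growing), here is a minimal sketch that fits minimum-norm polynomial regressions of increasing degree with NumPy. The dataset, basis, degrees, and noise level are illustrative assumptions, not details taken from the paper covered in the full edition.

```python
# Minimal double descent sketch: least-squares polynomial fits of growing degree.
# Test error typically falls, spikes near the interpolation threshold
# (degree + 1 ~ number of training points), then falls again for larger degrees
# because np.linalg.lstsq returns the minimum-norm solution.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test = 15, 200
x_train = rng.uniform(-1, 1, n_train)
x_test = np.linspace(-1, 1, n_test)


def true_fn(x):
    # Assumed ground-truth function for this toy example.
    return np.cos(3 * x)


y_train = true_fn(x_train) + 0.1 * rng.normal(size=n_train)
y_test = true_fn(x_test)


def legendre_features(x, degree):
    # Legendre basis keeps the design matrix well scaled on [-1, 1].
    return np.polynomial.legendre.legvander(x, degree)


for degree in [2, 5, 10, 14, 15, 20, 50, 200]:
    phi_train = legendre_features(x_train, degree)
    phi_test = legendre_features(x_test, degree)
    # Minimum-norm least-squares fit; over-parameterized when degree + 1 > n_train.
    coef, *_ = np.linalg.lstsq(phi_train, y_train, rcond=None)
    test_mse = np.mean((phi_test @ coef - y_test) ** 2)
    print(f"degree={degree:>3}  test MSE={test_mse:.3f}")
```

The exact numbers depend on the seed and the noise level, but the error peak near degree ≈ number of training points, followed by improvement as the model grows further, is the characteristic double descent signature.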

This post is for paid subscribers
