TheSequence Scope: Faster, Smaller Machine Learning
Initially published as 'This Week in AI', a newsletter about AI news and the latest developments in ML research and technology
From the Editor: Faster, Smaller Machine Learning
The catchy phrase "bigger is better" definitely applies when it comes to machine learning these days. Bigger models trained on larger datasets have consistently outperformed ensembles of smaller models specialized in specific tasks. The examples are everywhere: Google’s BERT, OpenAI’s GPT-2, and Microsoft’s Turing-NLG all operate at scales whose computational costs are not feasible for most organizations. As a result, we are starting to see efforts to create smaller, more efficient machine learning models.
The idea of optimizing the size of a machine learning model without sacrificing its performance is conceptually trivial but hard to implement in practice. Large-scale machine learning models grow to enormous sizes during training, and it is difficult to determine which sections can be removed (pruned) without hurting performance. This week, MIT researchers published a new pruning algorithm that makes AI applications run faster, and we are likely to see more research in this area.
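To make the idea concrete, here is a minimal sketch of one common approach, magnitude-based pruning, using PyTorch’s built-in torch.nn.utils.prune utilities. This illustrates pruning in general; it is not the specific algorithm from the MIT paper, and the toy network is purely illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small example network; the layer sizes are purely illustrative.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Zero out the 30% of weights with the smallest absolute value
# in each linear layer (L1 magnitude pruning).
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        # Make the pruning permanent by removing the re-parametrization.
        prune.remove(module, "weight")

# Verify the resulting sparsity of the first layer.
sparsity = (model[0].weight == 0).float().mean().item()
print(f"Sparsity of first layer: {sparsity:.0%}")
```

Removing 30% of weights this way often leaves accuracy nearly intact after a short fine-tuning pass, which is exactly why deciding *which* weights to remove, and when, remains an active research question.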
Now let’s take a look at the core developments in AI research and technology this week.
AI Research:
Shrinking Deep Neural Networks
Researchers from MIT published a paper proposing a new pruning method for shrinking deep neural networks.
>Read more in this coverage from MIT News
Jukebox
Researchers from OpenAI unveiled Jukebox, a deep neural network that can generate music and lyrics.
>Read more in this blog post from OpenAI
Better Loss Functions
Google Research published two papers discussing a method to create a single loss function that can optimize for different tasks.
>Read more in this blog post from Google Research
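The blurb above does not spell out the method, so as a purely illustrative sketch of the general idea (a single loss function whose parameters adapt it to different tasks), here is Barron’s general robust loss (CVPR 2019): one formula whose shape parameter alpha recovers L2, Charbonnier, Cauchy, and other classic losses. This is a named stand-in for the concept, not necessarily the method in the Google papers.

```python
import numpy as np

def general_robust_loss(x, alpha, c=1.0):
    """Barron's general robust loss: one function whose shape parameter
    alpha recovers several classic losses.
    alpha=2 -> L2, alpha=1 -> Charbonnier (smooth L1),
    alpha=0 -> Cauchy/Lorentzian, alpha=-2 -> Geman-McClure."""
    sq = (np.asarray(x, dtype=float) / c) ** 2
    if alpha == 2.0:
        return 0.5 * sq
    if alpha == 0.0:          # limit case, handled separately
        return np.log1p(0.5 * sq)
    b = abs(alpha - 2.0)
    return (b / alpha) * ((sq / b + 1.0) ** (alpha / 2.0) - 1.0)

# The same residuals scored under different "tasks" (shape settings).
residuals = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for a in (2.0, 1.0, 0.0, -2.0):
    print(a, np.round(general_robust_loss(residuals, a), 3))
```

The appeal of such a family is that alpha can even be treated as a learnable parameter, letting the training process itself choose how robust the loss should be to outliers.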
Cool AI Tech Releases:
A New TensorFlow Runtime
The TensorFlow team open-sourced TFRT, a new runtime that provides a consistent infrastructure for maximizing performance across diverse hardware.
>Read more in this blog post from the TensorFlow team
Blender
Facebook open-sourced Blender, an open-domain chatbot that outperforms others in terms of engagement and human-like conversational ability.
>Read more in this blog post from Facebook Research
Tecton.ai
The team behind Uber’s Michelangelo has launched a new startup focused on operationalizing machine learning models, and it just raised a $20 million Series A.
>Read more in this coverage from TechCrunch
Querying Tables Using Natural Language
Google open-sourced a BERT-based model that processes natural language queries against tabular datasets.
>Read more in this blog post from Google Research
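This release appears to correspond to the TAPAS model. As a hedged sketch, here is roughly how such a table question-answering model can be queried, assuming the later Hugging Face Transformers port rather than Google’s original TensorFlow release, and a hypothetical toy table.

```python
import pandas as pd
from transformers import TapasTokenizer, TapasForQuestionAnswering

# Hypothetical toy table; the model expects all cells as strings.
table = pd.DataFrame(
    {"City": ["Paris", "London", "Madrid"],
     "Population": ["2141000", "8982000", "3223000"]}
)
queries = ["Which city has the largest population?"]

# Checkpoint fine-tuned on WikiTableQuestions; note that some versions
# of this model class also require the torch-scatter package.
model_name = "google/tapas-base-finetuned-wtq"
tokenizer = TapasTokenizer.from_pretrained(model_name)
model = TapasForQuestionAnswering.from_pretrained(model_name)

inputs = tokenizer(table=table, queries=queries,
                   padding="max_length", return_tensors="pt")
outputs = model(**inputs)

# Map the model's logits back to table cell coordinates.
coords, agg = tokenizer.convert_logits_to_predictions(
    inputs, outputs.logits.detach(), outputs.logits_aggregation.detach()
)
print([table.iat[row, col] for row, col in coords[0]])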
AI in the Real World:
A Fascinating AI Experiment in the Defense Industry
The US Defense Intelligence Agency showcased an AI model that exhibited more risk tolerance than humans in the absence of critical data.
>Read more in this coverage from Defense One
AI for Autocompleting Code
AI startup Codota raised $12 million for its AI-based code autocompletion technology.
>Read more in this coverage from TechCrunch
Protecting AI from Adversarial Attacks
Resistant.AI, a startup developing technology to protect AI models from adversarial attacks, just announced a new $2.75 million round.
>Read more in this coverage from VentureBeat
“This Week in AI” is a newsletter curated by industry insiders and the Invector Labs team. Every week it brings you the latest developments in AI research and technology.
Starting July 14th, the newsletter will change its name and format to focus on systematic AI education.
To stay up-to-date and learn more about TheSequence, please consider subscribing ➡️