TheSequence

💁🏻‍♀️💁🏾 Edge#232: DeepMind’s New Method for Discovering when an Agent is Present in a System

The paper proposes a first method for discovering AI agents in a system based solely on empirical data

Oct 06, 2022

On Thursdays, we dive deep into one of the freshest research papers or technology frameworks that is worth your attention. Our goal is to keep you up to date with new developments in AI to complement the concepts we debate in other editions of our newsletter.

💥 What’s New in AI: DeepMind’s New Method for Discovering when an Agent is Present in a System

The terms “agent” and “model” are often used interchangeably in machine learning, but they are conceptually distinct. A model is a mathematical and architectural representation of a knowledge task: it receives inputs and produces outputs based on knowledge acquired by exploring datasets. An agent is composed of one or more models, but it has the distinct characteristic of interacting with an environment in order to refine its knowledge or policy. From that perspective, an agent is not necessarily a model (although that point is debatable) but is rather composed of one or more models.
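The distinction can be made concrete in a few lines of code. The sketch below is a hypothetical illustration (it is not from the DeepMind paper): the `model` function is a fixed input-to-output mapping, while the `Agent` class wraps a simple policy and adjusts it based on reward feedback from a toy environment.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

def model(observation: float) -> int:
    """A model: a fixed input -> output mapping; it never changes after training."""
    return 1 if observation > 0 else 0

class Agent:
    """An agent: acts in an environment and updates its policy from rewards."""

    def __init__(self) -> None:
        self.threshold = 0.0  # the agent's adjustable policy parameter

    def act(self, observation: float) -> int:
        return 1 if observation > self.threshold else 0

    def update(self, action: int, reward: float) -> None:
        # Crude policy update: shift the decision threshold when penalized.
        if reward < 0:
            self.threshold += 0.1 if action == 1 else -0.1

# Toy environment: the "correct" action is 1 only when observation > 0.5.
agent = Agent()
for _ in range(100):
    obs = random.random()
    act = agent.act(obs)
    reward = 1.0 if act == (1 if obs > 0.5 else 0) else -1.0
    agent.update(act, reward)

# After interacting, the agent's threshold has drifted toward the
# environment's true decision boundary (0.5). The static model, by
# contrast, cannot adapt no matter how much feedback arrives.
print(agent.threshold)
```

The feedback loop is the defining feature here: the agent's behavior after training depends on the environment it was placed in, which is exactly why disentangling agents from mere models in a running system is nontrivial.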

So, in short, agents are groups of models that actively interact with an environment and modify their policy accordingly. Following that definition, one of the most important challenges in agent modeling is understanding how sensitive an agent's behavior is to its interactions with the environment. Recently,

© 2025 Jesus Rodriguez