Edge 322: Inside Generative Agents: How Google and Stanford Researchers Used Generative AI to Learn to Simulate Human Behavior
One of the most groundbreaking papers of the last year showed the emergence of human behaviors, such as social constructs, in a simulated game environment.
Simulating human behavior has long been one of the crown jewels of artificial intelligence (AI). Recent advancements in generative AI in the form of large language models (LLMs) have certainly made a lot of progress toward simulating human behavior at a single point in time. However, doing so across long periods of time and complex sets of interactions remains unexplored territory. The challenge is even more accentuated when the scenarios involve complex interactions with the agent's environment. Recently, researchers from Google and Stanford University collaborated on a paper demonstrating generative agents that are able to simulate human behavior across complex tasks.
Rather than building another LLM that works in isolation, the Google-Stanford team focused their research on agents that actively interact with an environment. Generative agents draw a wide variety of inferences about themselves, other agents, and their environment. Their daily plans reflect their characteristics and experiences, and they act out those plans, react, and re-plan as appropriate. When the end user changes their environment or commands them in natural language, generative agents respond accordingly. For example, they turn off the stove if they see that their breakfast is burning, wait outside the bathroom if it is occupied, and stop to chat when they meet another agent they want to talk to. In a society where generative agents are present, emergent social dynamics can be observed: new relationships form, information diffuses, and coordination arises across agents.
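To make that perceive-react-replan loop concrete, here is a minimal Python sketch of a single agent tick. Everything in it is a hypothetical illustration rather than the paper's actual code: the `llm` helper is a stand-in for whatever model client you use, and the `Agent` fields and prompt wording are simplified assumptions.

```python
from dataclasses import dataclass, field
from typing import List


def llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call; plug in a real client here."""
    raise NotImplementedError("replace with your LLM client")


@dataclass
class Agent:
    name: str
    traits: str                                       # natural-language persona
    memory: List[str] = field(default_factory=list)   # simplified "memory stream"
    plan: List[str] = field(default_factory=list)     # today's plan, one step per entry

    def perceive(self, observation: str) -> None:
        # Every observation is appended to the agent's memory.
        self.memory.append(observation)

    def should_react(self, observation: str) -> bool:
        # Ask the model whether the observation warrants interrupting the plan
        # (e.g. "the stove is burning" or "the bathroom is occupied").
        answer = llm(
            f"{self.traits}\nCurrent plan: {self.plan}\n"
            f"Observation: {observation}\nShould {self.name} react? yes/no"
        )
        return answer.strip().lower().startswith("yes")

    def replan(self, event: str) -> None:
        # Regenerate the rest of the day's plan conditioned on the event.
        revised = llm(
            f"{self.traits}\nOld plan: {self.plan}\nEvent: {event}\n"
            f"Write a revised plan, one step per line."
        )
        self.plan = [line for line in revised.splitlines() if line.strip()]

    def step(self, observation: str) -> str:
        # One simulation tick: perceive, possibly re-plan, then act on the plan.
        self.perceive(observation)
        if self.should_react(observation):
            self.replan(observation)
        return self.plan[0] if self.plan else "idle"
```

The actual architecture described in the paper is richer than this sketch: the memory stream is scored for retrieval and agents periodically synthesize higher-level reflections, which this loop omits for brevity.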