Inside OPRO: Google DeepMind’s New Method that Optimizes Prompts Better than Humans
The technique uses LLMs as prompt optimization agents.
Prompt engineering and optimization are among the most debated topics in the world of large language models (LLMs). The term prompt engineering typically describes the task of refining a natural language instruction so that an LLM performs a specific task well. Today, this work is mostly done by humans, but what if AI could do a better job of optimizing prompts? In a recent paper, researchers from Google DeepMind proposed a technique called Optimization by Prompting (OPRO) that attempts to address precisely this challenge.
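To make the idea concrete, here is a minimal sketch of what an OPRO-style loop could look like: an optimizer LLM is shown previously tried instructions together with their scores, asked to propose a better one, and the new candidate is scored on a small training set before the loop repeats. Everything in this sketch is an assumption for illustration purposes: the `call_llm` helper, the scoring function, and the meta-prompt wording are hypothetical placeholders, not DeepMind's implementation.

```python
# Illustrative OPRO-style loop. `call_llm` and the scorer are hypothetical
# placeholders; swap in a real LLM client and task-specific evaluation.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API client."""
    return "Let's think step by step."  # placeholder response

def score_instruction(instruction: str, train_set: list[tuple[str, str]]) -> float:
    """Toy scorer: fraction of examples answered correctly when the
    candidate instruction is prepended to each question."""
    correct = 0
    for question, answer in train_set:
        prediction = call_llm(f"{instruction}\n\nQ: {question}\nA:")
        correct += int(answer.lower() in prediction.lower())
    return correct / max(len(train_set), 1)

def opro(train_set, n_steps: int = 20, top_k: int = 10):
    # Trajectory of (score, instruction) pairs shown back to the optimizer LLM.
    trajectory = [(score_instruction("Solve the problem.", train_set),
                   "Solve the problem.")]
    for _ in range(n_steps):
        best = sorted(trajectory, reverse=True)[:top_k]
        history = "\n".join(f"score: {s:.2f} | instruction: {i}" for s, i in best)
        meta_prompt = (
            "Here are previous instructions and their accuracy scores:\n"
            f"{history}\n"
            "Write a new instruction that is different from the ones above "
            "and achieves a higher score. Return only the instruction."
        )
        candidate = call_llm(meta_prompt).strip()
        trajectory.append((score_instruction(candidate, train_set), candidate))
    return max(trajectory)  # (best_score, best_instruction)

if __name__ == "__main__":
    toy_train = [("What is 2 + 2?", "4"), ("What is 3 * 5?", "15")]
    print(opro(toy_train))
```

The design choice worth noting is that the optimizer never sees gradients or model weights; it only sees a natural-language history of candidates and their scores, which is what lets an off-the-shelf LLM act as the optimization agent.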