How OpenAI Uses GPT-4 to Interpret the Functions of Neurons in Other Language Models
A new interpretability method based on GPT-4 can generate natural language explanations of what specific neurons in LLMs are doing.
As language models have grown more capable and more widely deployed, a significant gap remains in our understanding of their internal workings. Judging from outputs alone whether a model relies on biased heuristics or engages in deception is difficult. Interpretability research aims to close this gap by examining a model's internal mechanisms directly. A straightforward starting point is to understand what the model's individual components, such as neurons and attention heads, represent. Traditionally, this has required human experts to manually inspect each component and decipher which features of the data it responds to, an approach that does not scale to neural networks with tens or hundreds of billions of parameters.

OpenAI recently proposed an automated alternative: GPT-4 generates natural language explanations of a neuron's behavior and then scores how well each explanation predicts the neuron's actual activations. The process is applied to the neurons of another, smaller language model (GPT-2).
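To make that pipeline concrete, the sketch below outlines an explain-simulate-score loop in Python. It is a minimal illustration under stated assumptions, not OpenAI's implementation: `query_explainer` is a hypothetical stand-in for a call to GPT-4, the prompts are simplified, and the score is assumed to be the correlation between simulated and real activations on held-out text.

```python
from typing import List, Tuple
import numpy as np

# Hypothetical helper: stands in for a call to an explainer model such as GPT-4.
# A real implementation would route this through an LLM client of your choice.
def query_explainer(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def format_examples(examples: List[Tuple[List[str], List[float]]]) -> str:
    """Render (tokens, activations) pairs as text the explainer can read."""
    lines = []
    for tokens, acts in examples:
        lines.append(" ".join(f"{t}({a:.1f})" for t, a in zip(tokens, acts)))
    return "\n".join(lines)

def explain_neuron(examples: List[Tuple[List[str], List[float]]]) -> str:
    """Step 1: ask the explainer model for a short description of the neuron."""
    prompt = (
        "Below are text excerpts with a neuron's activation shown after each token.\n"
        "Summarize, in one sentence, what pattern this neuron responds to.\n\n"
        + format_examples(examples)
    )
    return query_explainer(prompt)

def simulate_activations(explanation: str, tokens: List[str]) -> List[float]:
    """Step 2: ask the explainer model to predict activations from the explanation alone."""
    prompt = (
        f"A neuron is described as: '{explanation}'.\n"
        "For each token below, output a predicted activation from 0 to 10, one per line.\n\n"
        + "\n".join(tokens)
    )
    reply = query_explainer(prompt)
    return [float(line) for line in reply.splitlines() if line.strip()]

def score_explanation(
    explanation: str, held_out: List[Tuple[List[str], List[float]]]
) -> float:
    """Step 3: score by correlating simulated activations with the real ones."""
    simulated, actual = [], []
    for tokens, acts in held_out:
        simulated.extend(simulate_activations(explanation, tokens))
        actual.extend(acts)
    return float(np.corrcoef(simulated, actual)[0, 1])
```

In this sketch, a high score means the one-sentence explanation is good enough that the explainer model can reproduce the neuron's activation pattern on text it has not seen, which is the intuition behind scoring explanations automatically rather than by human judgment.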