TheSequence

Edge 307: Learning About Program-Aided Language Models
LLMs that generate code for intermediate steps in a target task and the NLP Test framework.

Jul 11, 2023
In this Issue:

  1. The concept of program-aided language models.

  2. The original PAL paper from Carnegie Mellon University.

  3. The NLP Test framework.

💡 ML Concept of the Day: Program-Aided Language Models

Reasoning is one of the next frontiers for large language models (LLMs). These models have recently shown a remarkable aptitude for arithmetic and symbolic reasoning tasks in few-shot prompting scenarios. Much of this progress comes from prompting techniques such as chain-of-thought (covered earlier in this series), in which an LLM not only comprehends a problem description by breaking it down into sequential steps but also resolves each individual step. Yet despite their proficiency at stepwise decomposition, LLMs frequently make logical and arithmetic errors when producing the final solution, even in cases where the decomposition itself is executed flawlessly.
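PAL addresses exactly this gap: the LLM still decomposes the problem, but it expresses each intermediate step as code, and a Python interpreter computes the final answer so the model never performs the arithmetic itself. The sketch below illustrates the idea under stated assumptions: `call_llm` is a hypothetical stand-in for any LLM API, and it returns a canned completion here so the example runs offline; the few-shot prompt format mirrors the style used in the PAL paper, but the exact wording is illustrative.

```python
# Sketch of the PAL idea: ask the LLM for Python code rather than a final
# number, then offload the computation to the interpreter.

# A few-shot exemplar showing the model the expected "reasoning as code" format.
PAL_PROMPT = """Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each.
How many tennis balls does he have now?

# solution in Python:
tennis_balls = 5
bought_balls = 2 * 3
answer = tennis_balls + bought_balls
"""

def call_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM call. A real system would send
    # `prompt` to a model API; here we return a canned completion for the
    # question used below so the sketch is self-contained and runnable.
    return (
        "olivia_dollars = 23\n"
        "bagels_cost = 5 * 3\n"
        "answer = olivia_dollars - bagels_cost\n"
    )

def pal_solve(question: str) -> int:
    # The LLM decomposes the problem into a small Python program; the
    # interpreter executes it, avoiding the arithmetic slips described above.
    program = call_llm(PAL_PROMPT + "\nQ: " + question + "\n\n# solution in Python:\n")
    namespace: dict = {}
    # Note: executing model-generated code requires sandboxing in practice.
    exec(program, namespace)
    return namespace["answer"]

print(pal_solve("Olivia has $23. She buys 5 bagels at $3 each. "
                "How much money does she have left?"))
# → 8
```

The key design choice is the division of labor: natural-language understanding and decomposition stay with the model, while exact computation is delegated to a deterministic runtime.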

© 2025 Jesus Rodriguez