Edge 364: About COSP and USP: Two New LLM Reasoning Methods Built by Google Research
Using adaptive prompting, the two new methods enhance commonsense reasoning capabilities in LLMs.
Prompt generation is one of the key building blocks of LLM-based applications. Tasks such as reasoning or fine-tuning depend heavily on strong prompt datasets. Techniques such as the few-shot setup have significantly reduced the amount of data needed to adapt models to specific tasks. Nevertheless, crafting sample prompts remains challenging, especially when a general-purpose model must cover a broad array of tasks. Even producing a modest number of demonstrations can be a formidable effort. This is particularly true for tasks such as summarizing lengthy articles or answering questions that demand specialized domain knowledge, like medical question answering.
In such situations, models with strong zero-shot performance come to the rescue, eliminating the need for manual prompt generation. However, zero-shot performance tends to be weaker, since the model operates without demonstrations to guide it, leaving room for occasional erroneous outputs.
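To make the contrast concrete, here is a minimal sketch of the two prompting regimes, assuming a generic completion API. The call_llm function is a hypothetical placeholder (not a real library call or anything from COSP/USP), and the demonstrations are invented purely for illustration.

```python
# Minimal sketch of zero-shot vs. few-shot prompting.
# `call_llm` is a hypothetical placeholder for whatever completion API
# you use; it is not part of COSP/USP or any specific provider's SDK.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its completion."""
    raise NotImplementedError("Wire this to your model provider of choice.")

QUESTION = (
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)

# Zero-shot: the model sees only the task itself, with no guidance.
zero_shot_prompt = f"Q: {QUESTION}\nA:"

# Few-shot: handcrafted demonstrations are prepended to steer the model.
# Writing such exemplars is exactly the manual effort that becomes hard
# for long-form or domain-specific tasks.
demonstrations = [
    ("What is 15% of 60?",
     "15% of 60 is 0.15 * 60 = 9. The answer is 9."),
    ("If 3 pens cost $4.50, how much do 7 pens cost?",
     "One pen costs $4.50 / 3 = $1.50, so 7 pens cost 7 * $1.50 = $10.50."),
]
few_shot_prompt = (
    "".join(f"Q: {q}\nA: {a}\n\n" for q, a in demonstrations)
    + f"Q: {QUESTION}\nA:"
)

# answer_zero_shot = call_llm(zero_shot_prompt)  # no guidance, more error-prone
# answer_few_shot = call_llm(few_shot_prompt)    # guided by demonstrations
```

The design point is simply that the few-shot prompt requires someone to author the demonstrations; COSP and USP, discussed next, aim to remove that manual step by generating the guidance adaptively.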