LLaMA is Meta AI's New LLM that Matches GPT-3.5 Across Many Tasks Despite Being Much Smaller
The model is significantly smaller than GPT-3.5 but matches its performance on many important LLM benchmarks.
Large Language Models (LLMs) have recently taken the world by storm with their remarkable ability to perform new tasks from textual instructions or a few examples. This ability, known as few-shot learning, was first observed once models were scaled up to a sufficient size. As a result, researchers have focused on scaling these models even further, under the general assumption that more parameters lead to better performance. However, recent research has shown that, for a given compute budget, the best performance is not achieved by the largest models. Instead, smaller models trained on more data outperform their larger counterparts. In that context, Meta AI recently published a paper detailing LLaMA, a 65B-parameter LLM that outperforms GPT-3 across many tasks despite being significantly smaller.
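The compute-budget tradeoff behind this result can be made concrete with a rough back-of-the-envelope calculation. A minimal sketch, assuming the commonly used approximation that training a transformer costs about C ≈ 6·N·D FLOPs (N parameters, D training tokens) — an approximation from the scaling-laws literature, not stated in this article — shows that for the same compute, a smaller model can be trained on far more tokens:

```python
def tokens_for_budget(compute_flops: float, n_params: float) -> float:
    """Tokens trainable under a fixed compute budget, using C ≈ 6 * N * D."""
    return compute_flops / (6 * n_params)

# Hypothetical budget: the compute needed to train a 175B-parameter model
# (GPT-3 scale) on 300B tokens.
budget = 6 * 175e9 * 300e9

large_tokens = tokens_for_budget(budget, 175e9)  # 300B tokens
small_tokens = tokens_for_budget(budget, 65e9)   # roughly 808B tokens

print(f"175B model: {large_tokens / 1e9:.0f}B tokens")
print(f"65B model:  {small_tokens / 1e9:.0f}B tokens")
```

Under the same budget, the 65B-parameter model sees well over twice as many tokens, which is the intuition behind training smaller models on more data.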