Edge 344: LLMs and Memory is All You Need. Inside One of the Most Shocking Papers of the Year
Can memory-augmented LLMs simulate any algorithm?
Large language models (LLMs) continue to push the limits of computational models one breakthrough at a time. How far could this go? Well, a recent research paper from AI researchers at Google Brain and the University of Alberta shows that it can go VERY FAR. Could we possibly simulate any algorithm using LLMs and memory? Could the combination of an LLM and external memory be Turing complete?
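To make the intuition concrete, here is a minimal sketch in Python of the kind of loop such a construction relies on: the LLM plays the role of a finite processor, while an external, unbounded memory holds the tape of a Turing machine. The `llm_step` function below is a hypothetical stand-in for a real model call, hard-coded here with the transition rules of a toy machine so the sketch runs on its own; it is an illustration of the idea, not the paper's actual prompting protocol.

```python
def llm_step(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call. It acts as the finite
    'processor': given the current state and the symbol under the head,
    it emits the next state, a symbol to write, and a head move.
    These hard-coded rules flip bits until a blank is reached."""
    state, symbol = prompt.split(",")
    rules = {
        ("flip", "0"): "flip,1,R",  # write 1, move right
        ("flip", "1"): "flip,0,R",  # write 0, move right
        ("flip", "_"): "halt,_,R",  # blank cell: halt
    }
    return rules[(state, symbol)]

def run(tape_input: str, max_steps: int = 100) -> str:
    # External memory: an unbounded dict mapping tape positions to symbols.
    memory = {i: s for i, s in enumerate(tape_input)}
    state, head = "flip", 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = memory.get(head, "_")         # read from memory
        reply = llm_step(f"{state},{symbol}")  # "LLM" decides the next action
        state, write, move = reply.split(",")  # parse its answer
        memory[head] = write                   # write back to memory
        head += 1 if move == "R" else -1       # move the head
    cells = sorted(k for k in memory if memory[k] != "_")
    return "".join(memory[k] for k in cells)

print(run("1011"))  # -> "0100"
```

The key point of the sketch is the division of labor: the model itself only ever sees a bounded prompt and produces a bounded answer, while the external read/write memory supplies the unbounded state that Turing completeness requires.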