Edge 332: Inside FlashAttention: The Method Powering LLM Scalability to Whole New Levels
FlashAttention and FlashAttention-2 have been adopted by some of the major LLM platforms on the market.
Scaling the context of large language models (LLMs) remains one of the biggest challenges to expanding their universe of use cases. In recent months, we have seen vendors such as Anthropic and OpenAI pushing the context lengths of their models to new heights. This trend is likely to continue, but it will require further research breakthroughs. One of the most interesting works in this area was recently published by researchers at Stanford University. Dubbed FlashAttention, this new technique has been rapidly adopted as one of the main mechanisms for increasing the context length of LLMs. The second iteration, FlashAttention-2, was published recently. In this post, I would like to review the fundamentals of both versions.
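Before diving into the details, it helps to see how broadly FlashAttention has landed in everyday tooling. As a minimal sketch (not the paper's reference implementation), PyTorch 2.x exposes it behind `torch.nn.functional.scaled_dot_product_attention`, which dispatches to a FlashAttention kernel on supported GPUs; the shapes and dtypes below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: consuming FlashAttention through PyTorch 2.x's
# scaled_dot_product_attention. On CUDA with fp16/bf16 inputs, PyTorch
# selects a FlashAttention backend when eligible; on CPU it silently
# falls back to the standard math implementation.
import torch
import torch.nn.functional as F

# Illustrative dimensions: a long sequence where memory-efficient
# attention matters (these values are assumptions for the example).
batch, heads, seq_len, head_dim = 2, 8, 4096, 64
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

q = torch.randn(batch, heads, seq_len, head_dim, device=device, dtype=dtype)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Causal attention over the full sequence, without ever materializing
# the seq_len x seq_len attention matrix in GPU memory.
out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 4096, 64])
```

The point of the sketch is that callers never touch the fused kernel directly: the same one-line attention call works whether or not FlashAttention is available, which is a big part of why it spread so quickly.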