In this guest post, Vladimir Malinovskii discusses the intense competition between research teams at Yandex, IST Austria, KAUST, and Cornell University in developing cutting-edge neural network compression techniques. Highlighting methods like AQLM, QuIP, and their evolutions, the post explores the progress in compressing LLMs to 2-bit formats, enabling significant reductions in model size while maintaining quality, ultimately making large models accessible on standard hardware.
We live in the era of Large Language Models (LLMs), with companies increasingly deploying models with billions of parameters. These expansive models unlock new possibilities for users but come with high deployment costs and complexity, especially on personal computers. As a result, researchers and engineers often train these large models first and then compress them, while trying to minimize quality loss.
Models are typically released in the float16 format, where 16 bits are allocated for each weight. Two years ago, advances in quantization made it possible to reliably compress neural networks down to 4 bits per weight using techniques like GPTQ. However, progress doesn't stop there, as the research community is now actively seeking methods to reduce model size eightfold, down to 2 bits per weight.
A little while ago, the Yandex Research team joined forces with their colleagues from IST Austria and KAUST to introduce a new method that achieves 8x compression through the combined use of AQLM and PV-Tuning. This method has been made publicly available to software developers and researchers, with the code published in a GitHub repository. Examples of popular open-source models compressed with this method can be downloaded here. For fine-tuning a compressed neural network, take a look at these training materials.
In the meantime, we'd like to shed some light on how the method was developed thanks to a brief (albeit very exciting!) "rivalry" between two research teams and their state-of-the-art compression algorithms: QuIP and AQLM. Competition fuels innovation, and this rapidly evolving story is a prime example. Every other month brings new breakthroughs, optimizations, and ingenious solutions. Sit back and enjoy!
Methodology
In our example, we'll be using Llama2 13b. Along with it, Meta released two more models, sized 7b and 70b. In academic research, perplexity is typically used to assess the quality of compressed models. This metric measures how perplexed (hence the name) a neural network is when it sees correctly written text. Lower perplexity means a better model.
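As a rough sketch of how this metric is computed (a toy example, not tied to any particular model or library): perplexity is the exponential of the average negative log-likelihood the model assigns to each token of a reference text.

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the mean negative log-likelihood
    over the tokens of a reference text."""
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# Toy example: a model that assigns every reference token
# probability 0.5 has a perplexity of exactly 2.
print(perplexity([math.log(0.5)] * 4))  # 2.0
```

A lower value means the model is, on average, less "surprised" by each token of the text.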
For this article, we've chosen a different primary metric: effective model size, i.e., the size of an uncompressed Llama2 model that achieves the same perplexity. All three Llama2 models fall on roughly the same line when perplexity is plotted against model size, so any perplexity value can be translated into this more intuitive number.
Quantization of LLMs With Guarantees
In mid-2023, a group of scientists from Cornell University published a paper titled QuIP, in which the authors seriously examined, for the first time, the possibility of compressing neural networks by a factor of eight. The idea is that 99% of the weights in large language models behave similarly and can be compressed with minimal loss. However, the remaining 1%, the outliers, deviate significantly from the norm and cause large errors during compression.
The QuIP authors devised a technique to transform these weights, effectively solving the outlier problem. The algorithm compresses Llama2 13b to a perplexity of 13.5. At first glance, this may seem like a poor result: it corresponds to a model with an effective size of a mere 0.43B parameters. But this was the first time the perplexity of a 2-bit model landed in the double digits. For reference, GPTQ compresses the same model to a perplexity of 123. So, what QuIP managed to achieve was nothing short of a breakthrough.
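The exact procedure is more involved, but the core trick can be illustrated with a sketch (purely illustrative, not the authors' implementation): multiplying the weight matrix by random orthogonal matrices spreads any single outlier across all entries, and the rotation can be undone exactly after quantization.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(n):
    # QR decomposition of a random Gaussian matrix gives a random orthogonal matrix.
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    return q

def rotate(w):
    """Rotate the weight matrix on both sides so no single entry dominates.
    The rotation is lossless: it can be inverted after quantization."""
    u, v = random_orthogonal(w.shape[0]), random_orthogonal(w.shape[1])
    return u @ w @ v.T, (u, v)

def unrotate(w_rot, uv):
    u, v = uv
    return u.T @ w_rot @ v

w = np.zeros((64, 64))
w[3, 7] = 10.0                               # one extreme outlier weight
w_rot, uv = rotate(w)
print(np.abs(w).max(), np.abs(w_rot).max())  # the rotated matrix has no huge entry
assert np.allclose(unrotate(w_rot, uv), w)   # the transform is exactly invertible
```

After the rotation, all entries are of comparable magnitude, so a uniform low-bit quantizer no longer has to waste its range on a handful of extreme values.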
Extreme Compression of LLMs via Additive Quantization
Six months later, in January 2024, researchers from Yandex and the Institute of Science and Technology Austria (ISTA) published a paper on additive quantization for language models (AQLM). The method relies on additive quantization, proposed in 2014 by Artem Babenko, Head of Yandex Research; back then, it was designed for the needs of the company's search engine. Ten years later, this technology was repurposed to compress large language models while maintaining maximum quality.
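In broad strokes, additive quantization represents each small group of weights as a sum of codewords, one drawn from each of several codebooks, so only the short codeword indices need to be stored. Below is a toy greedy version with random, untrained codebooks, purely to illustrate the encoding and decoding; real implementations learn the codebooks and use a more thorough search over codes.

```python
import numpy as np

rng = np.random.default_rng(1)

def aq_encode(vectors, codebooks):
    """Greedily approximate each vector by a SUM of codewords,
    one from each codebook; only the indices are stored."""
    residual = vectors.copy()
    codes = []
    for cb in codebooks:
        dists = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        idx = dists.argmin(1)          # nearest codeword to the current residual
        codes.append(idx)
        residual = residual - cb[idx]  # quantize what's left with the next codebook
    return np.stack(codes, axis=1)

def aq_decode(codes, codebooks):
    return sum(cb[codes[:, m]] for m, cb in enumerate(codebooks))

# 100 groups of 8 weights; two codebooks of 256 codewords each mean that
# every 8-weight group is stored as just 2 one-byte indices (2 bits/weight).
vecs = rng.normal(size=(100, 8))
codebooks = [rng.normal(size=(256, 8)) for _ in range(2)]
codes = aq_encode(vecs, codebooks)
approx = aq_decode(codes, codebooks)
```

The sum-of-codewords representation is what distinguishes additive quantization from plain vector quantization: with M codebooks of 256 entries each, a group of weights can take 256^M distinct values while still costing only M bytes to store.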
AQLM compresses Llama2 13b to a perplexity corresponding to an effective size of 3.7B parameters. This was a significant improvement over QuIP: a model like that was now actually usable.
A Close Competition: Upgrading From QuIP to QuIP#
Just a month later, researchers from Cornell struck back with a new study titled "QuIP#." The authors refined their method with fine-tuning and other optimizations, showing that you can fine-tune a compressed neural network on a calibrated dataset to significantly reduce compression error without changing the model's representation format.
QuIP# performed better than AQLM, improving the state-of-the-art 2-bit quantization of Llama2 13b to an effective size of 5.3B parameters.
A New Chapter for AQLM
In late May 2024, a joint study from the creators of AQLM and researchers from King Abdullah University of Science and Technology (KAUST) was published. The authors propose an advanced fine-tuning algorithm, PV-Tuning, to improve on the original AQLM. While the QuIP# paper suggests fine-tuning only continuous parameters with gradients, PV-Tuning modifies all model parameters, including the discrete ones.
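The alternating scheme can be illustrated on a toy least-squares problem (a loose analogy for intuition only, not the actual PV-Tuning algorithm, which operates on the full model loss): discrete code assignments and continuous codebook values are updated in turn, and each kind of update reduces the error the other kind cannot reach.

```python
import numpy as np

rng = np.random.default_rng(2)
target = rng.normal(size=1000)                # stand-in for the original weights
codebook = np.array([-2.0, -0.5, 0.5, 2.0])   # continuous parameters ("V")
codes = rng.integers(0, 4, size=target.size)  # discrete parameters ("P")

def mse():
    return ((codebook[codes] - target) ** 2).mean()

before = mse()
lr = 0.5
for _ in range(20):
    # P-step: discrete update, re-assign each weight to its best codeword
    codes = np.abs(target[:, None] - codebook[None, :]).argmin(axis=1)
    # V-step: continuous update, a gradient step on the codebook values
    grad = np.zeros_like(codebook)
    np.add.at(grad, codes, 2 * (codebook[codes] - target))
    codebook -= lr * grad / target.size
print(before, mse())  # the alternating updates steadily shrink the error
```

A gradient-only method is stuck with whatever discrete assignments it starts from; allowing the discrete parameters to move as well unlocks the remaining error, which is the intuition behind PV-Tuning's gains.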
In terms of quality, this is the best algorithm for 2-bit quantization to date. It compresses Llama2 13b to an effective size of 6.9B parameters, falling just a bit short of the 7b model.
Every year, neural network quantization becomes more and more efficient: inference costs are going down exponentially, and large models with open weights can now be launched on regular hardware. A few years ago, one could only dream of launching a large model like Llama3 70b on an RTX 3090. In 2024, that's already a reality.
Healthy competition among researchers is a catalyst for technological advancement, driving innovation and the development of creative approaches. How will the story unfold from here? Time will tell! And we have a feeling it won't be long.