A Visual Guide to LLM Quantization

  • This is a really awesome introduction to quantization! One small comment about the GPTQ section:

    It uses asymmetric quantization and does so layer by layer such that each layer is processed independently before continuing to the next

    GPTQ also supports symmetric quantization, and almost everyone uses it. The problem with GPTQ's asymmetric quantization is that all popular implementations have a bug [1] where zero-point ("bias") values of 0 are reset to 1 during packing (out of the 16 possible zero-points in 4-bit quantization), leading to quite a large loss in quality. Interestingly, it seems people initially observed that symmetric quantization worked better than asymmetric quantization (which is very counter-intuitive, but made GPTQ symmetric quantization far more popular) and only discovered later that this was due to the bug. A small sketch after the link below illustrates the effect.

    [1] https://notes.danieldk.eu/ML/Formats/GPTQ#Packing+integers
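
    To make the distinction concrete, here is a minimal NumPy sketch of plain asymmetric (zero-point) 4-bit rounding, with the packing bug from [1] simulated by forcing a zero-point of 0 to 1. It is only a toy illustration of the rounding scheme (the data and function names are made up), not GPTQ's actual Hessian-based algorithm or any real packing code:

    ```python
    import numpy as np

    def quantize_asymmetric(w, bits=4):
        """Map [w.min(), w.max()] onto the integer range [0, 2**bits - 1]."""
        qmax = 2**bits - 1
        scale = (w.max() - w.min()) / qmax
        zero_point = np.round(-w.min() / scale)   # the "bias"; one of 16 values in 4-bit
        q = np.clip(np.round(w / scale) + zero_point, 0, qmax)
        return q, scale, zero_point

    def dequantize(q, scale, zero_point):
        return (q - zero_point) * scale

    rng = np.random.default_rng(0)
    w = np.abs(rng.normal(0.0, 0.02, size=4096))  # all-positive weights -> zero-point of 0

    q, scale, zp = quantize_asymmetric(w)
    print("mean abs error, correct zero-point:", np.abs(w - dequantize(q, scale, zp)).mean())

    # Simulate the packing bug: a zero-point of 0 silently becomes 1,
    # shifting every dequantized weight by one full quantization step.
    zp_buggy = 1 if zp == 0 else zp
    print("mean abs error, buggy zero-point:  ", np.abs(w - dequantize(q, scale, zp_buggy)).mean())
    ```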

  • Fairly helpful overview. One thing that probably has a good answer: why use floats at all, even at 32 bits? Is there an advantage relative to using just 32-bit ints? Integer math seems a lot easier to do in hardware. Back when I was young, you had to pay extra to get floating-point hardware support in your PC; it required a co-processor. I'm assuming that's still somewhat true in terms of the number of transistors needed on a chip.

    Intuitively, I like the idea of asymmetric scales as well. Treating all values as equal seems like it's probably wasteful in terms of memory. It would be interesting to see where typical values fall statistically in an LLM. I bet it's nowhere near a random distribution of values.
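
    If you want to check that intuition yourself, something like the sketch below works (it assumes `torch` and `transformers` are installed; `gpt2` is just a small stand-in checkpoint). It pools all the weights and prints a few quantiles; in practice the values tend to be tightly clustered around zero in a roughly bell-shaped distribution with a handful of outliers, which is part of why clipping and per-group scales help:

    ```python
    import torch
    from transformers import AutoModel

    # "gpt2" is only an example checkpoint; any model would do.
    model = AutoModel.from_pretrained("gpt2")
    weights = torch.cat([p.detach().flatten() for p in model.parameters()])

    # Subsample so torch.quantile stays cheap on ~100M+ parameters.
    sample = weights[torch.randint(0, weights.numel(), (1_000_000,))]

    print("min / max:", weights.min().item(), weights.max().item())
    qs = torch.tensor([0.001, 0.25, 0.50, 0.75, 0.999])
    print("quantiles:", torch.quantile(sample.float(), qs))
    ```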

  • I've read the Hugging Face blog on quantization and a plethora of papers such as the `bitsandbytes` one. This was an approachable agglomeration of a lot of activity in this space, with just the right references at the end. Bookmarked!

  • It’s a shame that the article didn’t mention AWQ 4-bit quantization, which is quite widely supported in libraries and deployment tools (e.g. vLLM).

  • I've long held the assumption that neurons in networks are just logic functions: you can write out their truth tables by taking all the combinations of their input activations and design a logic network that matches them 100%. Thus 1-bit 'quantization' should be enough to perfectly recreate any neural network for inference.
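
    Under the assumption that the inputs really are binary and the activation is a hard threshold, the idea can be sketched like this (the weights and bias here are made up; real networks pass around continuous activations, which is where the assumption gets strained):

    ```python
    from itertools import product
    import numpy as np

    # Hypothetical 3-input neuron; in a real network these would be trained values.
    weights = np.array([0.7, -1.2, 0.4])
    bias = -0.1

    def neuron(x):
        # Hard-threshold "activation": fires (1) iff the weighted sum is positive.
        return int(weights @ x + bias > 0)

    # Enumerate the full truth table over all 2**3 binary input combinations.
    for x in product([0, 1], repeat=3):
        print(x, "->", neuron(np.array(x)))
    ```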

  • This is a very misleading article.

    Floats are not distributed evenly across the number line: the spacing doubles with every power of two, so there are as many floats between 1 and 2 as between 2 and 4, between 4 and 8, and so on. Quantising well to integers means taking this varying sensitivity into account, since the spacing between integers is always the same.
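
    You can see that non-uniform spacing directly with `math.ulp` (Python 3.9+), which gives the gap to the next representable float. Python floats are 64-bit, but the same doubling pattern holds for float32 and float16:

    ```python
    import math

    # The gap between adjacent float values ("ulp") doubles with every power of
    # two, while the gap between adjacent integers is always exactly 1.
    for x in [0.001, 0.5, 1.0, 2.0, 1024.0, 1e6]:
        print(f"{x:>10}: gap to next float = {math.ulp(x):.3e}")
    ```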

  • What an awesome collection of visual mappings between process and output: immediately gripping, visually striking and thoughtfully laid out. I'd love to hear more about the process behind them, a hallmark of exploratory visualisation.

  • I wonder why AWQ is not mentioned. It’s pretty popular, and I’ve always been curious how it differs from GPTQ.