From 1d13f2cb015349016aa3053d4d6b245095485ce0 Mon Sep 17 00:00:00 2001
From: Vasiliy Kuznetsov
Date: Mon, 8 Dec 2025 15:06:47 -0500
Subject: [PATCH] fix table formatting in quantization readme

missed this in https://github.com/pytorch/ao/pull/3449
---
 torchao/quantization/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/torchao/quantization/README.md b/torchao/quantization/README.md
index b9a62c99fa..0f386ba994 100644
--- a/torchao/quantization/README.md
+++ b/torchao/quantization/README.md
@@ -6,7 +6,7 @@ Typically quantization algorithms will have different schemes for how the activa
 All the following benchmarks are for `meta-llama/Llama-3-8.1B` using `lm-eval` measured on an H100 GPU.
 
 | weight | activation | wikitext-perplexity | winogrande | checkpoint size (GB) |
-| --------- | ------------------- | ---------- | -------------------- |
+| --------- | ------------------- | ---------- | -------------------- | -------- |
 | bfloat16 | bfloat16 | 7.3315 | 0.7380 | 16.1 |
 | float8_rowwise | float8_rowwise | 7.4197 | 0.7388 | 9.1 |
 | int8_rowwise | bfloat16 | 7.3451 | 0.7340 | 9.1 |
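
Editor's note, not part of the patch: the table being fixed compares float8_rowwise and int8_rowwise schemes against the bfloat16 baseline. As a minimal, hedged sketch of how such schemes are typically applied with torchao's `quantize_` API — the specific config names (`Float8DynamicActivationFloat8WeightConfig`, `Int8WeightOnlyConfig`, `PerRow`) are taken from recent torchao releases and are assumptions here, not something this patch defines:

```python
# Illustrative sketch only; config names/availability depend on the installed
# torchao version and are not defined by this patch.
import torch
from torchao.quantization import (
    quantize_,
    Float8DynamicActivationFloat8WeightConfig,
    Int8WeightOnlyConfig,
    PerRow,
)

# Toy bfloat16 model standing in for an LLM's linear layers.
model = torch.nn.Sequential(torch.nn.Linear(4096, 4096)).to(torch.bfloat16).cuda()

# float8 weights + float8 dynamic activations with rowwise scales
# (roughly the "float8_rowwise / float8_rowwise" row of the table).
quantize_(model, Float8DynamicActivationFloat8WeightConfig(granularity=PerRow()))

# Alternative: int8 weight-only quantization, activations left in bfloat16
# (roughly the "int8_rowwise / bfloat16" row). Apply to a fresh model instead:
# quantize_(model, Int8WeightOnlyConfig())
```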