Hi, I see that the GPTQModel library supports GPTAQ/GPTQ v2. The GPTAQ paper reports results for configurations such as W8A8 and W4A4; however, so far I have only been able to test weight-only quantization with GPTAQ/GPTQ v2 in the library. Are there any plans to add support for activation quantization as well? Thanks.
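For context, here is a minimal sketch of what the "A" in W8A8/W4A4 refers to: on top of quantizing weights, activations are also mapped to low-bit integers at runtime. This is not GPTQModel or GPTAQ code, just a generic simulated ("fake") symmetric per-tensor activation quantizer in plain Python; the function name and the round-to-nearest scheme are my own illustrative choices.

```python
def quantize_activations(x, bits=8):
    """Simulated activation quantization: map floats to signed `bits`-bit
    integers with one symmetric per-tensor scale, then dequantize back.
    Returns (dequantized values, scale)."""
    qmax = 2 ** (bits - 1) - 1               # e.g. 127 for A8, 7 for A4
    scale = max(abs(v) for v in x) / qmax or 1.0  # guard all-zero input
    # Round-to-nearest with clipping to the signed integer range.
    q = [max(-qmax - 1, min(qmax, round(v / scale))) for v in x]
    return [v * scale for v in q], scale
```

With the same input, A4 introduces visibly more round-trip error than A8, which is why W4A4 results in the paper are the harder setting to match.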