QAT

#42
by rezzie-rich - opened

Were any of the Phi-3.5 models quantization-aware trained (QAT) for FP8 like Mistral-NeMo-12B?

Microsoft org
•
edited 7 days ago

Thanks for your interest!
We haven't done any QAT, but based on our previous experiments, quantizing the expert weights only (not the gating weights) shouldn't hurt performance much.
Also, since expert weights make up most of an MoE model's parameters by design, quantizing them alone already covers most of the memory savings.
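For anyone wanting to try this, here is a minimal PyTorch sketch of expert-only FP8 weight quantization (quantize-dequantize, which is enough to measure the accuracy impact). The `"experts"` / `"gate"` / `"router"` name matching is an assumption: the actual module paths vary by architecture, so check your model's `named_modules()` output first.

```python
# Minimal sketch: fake-quantize only MoE expert weights to FP8 (E4M3),
# skipping gating/router weights as advised above.
import torch
import torch.nn as nn

def quantize_expert_weights_fp8(model: nn.Module) -> nn.Module:
    for name, module in model.named_modules():
        if not isinstance(module, nn.Linear):
            continue
        # Skip gating/router weights (assumed naming; verify for your model).
        if "gate" in name or "router" in name:
            continue
        # Only touch expert layers (assumed naming).
        if "experts" not in name:
            continue
        w = module.weight.data
        # Per-tensor scale so values fit FP8 E4M3's ~±448 range.
        scale = w.abs().max().clamp(min=1e-12) / 448.0
        w_fp8 = (w / scale).to(torch.float8_e4m3fn)
        # Dequantize back for simplicity; a real deployment would keep the
        # FP8 tensor plus its scale and use FP8 matmul kernels instead.
        module.weight.data = w_fp8.to(w.dtype) * scale
    return model
```

This only simulates FP8 rounding error in higher precision, so it's for evaluating quality, not for actual speed/memory gains.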

Thank you for your quick response, and I'm sorry for being late lol. I will try as you instructed.

It would be highly appreciated and beneficial if future models were QAT. Llama 3.1 405B FP8 has the same score as FP16 on Chatbot Arena, a huge computational saving.
