
MWA4 (mixed weight bit widths, 4-bit activations) quantization problem with resnet18 #3

Open
maibaodexiaohangjiaya opened this issue Jan 8, 2024 · 2 comments

Comments

@maibaodexiaohangjiaya

I ran MWA4 quantization using the model (resnet18) provided by the project and the code's default parameters. The results differ substantially from those reported in the paper. Are any other specific settings required to quantize the activations to 4 bits?
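For context, quantizing activations to 4 bits usually means restricting each activation tensor to at most 2^4 = 16 distinct levels during the forward pass. Below is a minimal sketch of uniform min-max fake quantization (quantize then dequantize), which is the common simulation approach; this is an illustration only, not this repository's actual implementation, and the function name is hypothetical.

```python
def fake_quant_uniform(values, n_bits=4):
    """Uniform min-max fake quantization: snap each value to one of
    2**n_bits evenly spaced levels over [min, max], then dequantize."""
    qmax = 2 ** n_bits - 1
    lo, hi = min(values), max(values)
    scale = (hi - lo) / qmax or 1.0  # avoid div-by-zero for constant input
    return [round((v - lo) / scale) * scale + lo for v in values]

# 17 evenly spaced activations in [-1, 1]; 4-bit quantization can
# represent at most 16 of them distinctly.
x = [i / 8 - 1.0 for i in range(17)]
xq = fake_quant_uniform(x, n_bits=4)
assert len(set(xq)) <= 16
```

If results diverge far from the paper at 4 bits, common culprits are the calibration of `lo`/`hi` (per-tensor vs. per-channel, min-max vs. percentile) and whether the first and last layers are exempted from activation quantization.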

@liuyiming199721

Hello, how did you run the code?

@liuyiming199721

> I ran MWA4 quantization using the model (resnet18) provided by the project and the code's default parameters. The results differ substantially from those in the paper. Are any other special settings required to quantize the activations to 4 bits?

Hello, did you manage to run it successfully?
