RBM Weight Updates Issue #9
@Darwin2011 I think `(ph_mean[i] * input[j] - nh_means[i] * nv_samples[j])` is right. The weight gradient should be the data term minus the model term.
@liulhdarks, thanks!
@Darwin2011 https://github.com/echen/restricted-boltzmann-machines/blob/master/rbm.py doesn't use the sampled visible layer when updating the weights. You can refer to the following ML frameworks:

```java
// snippet truncated in the original comment
INDArray wGradient = input.transpose().mmul(probHidden.getSecond()).sub(
```

```java
DoubleMatrix wGradient = input.transpose().mmul(hProp1.prob)
```

Hope this helps!
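Expressed as a minimal NumPy sketch (not code from either framework; names like `v0` and `ph_mean` are illustrative), the "data minus model" weight gradient that both snippets compute looks like this:

```python
import numpy as np

def cd1_weight_gradient(v0, ph_mean, nv_samples, nh_means):
    """CD-1 weight gradient: data statistics minus model statistics.

    v0         : (N, n_visible) training batch
    ph_mean    : (N, n_hidden)  p(h=1 | v0)          -- positive phase
    nv_samples : (N, n_visible) reconstructed visibles
    nh_means   : (N, n_hidden)  p(h=1 | nv_samples)  -- negative phase
    """
    n = v0.shape[0]
    # Positive phase pairs the raw input with hidden probabilities;
    # negative phase pairs the reconstruction with hidden probabilities.
    return (v0.T @ ph_mean - nv_samples.T @ nh_means) / n
```

Note that the positive phase uses the *input* data directly, matching `input.transpose().mmul(...)` in both framework snippets, rather than a sampled visible layer.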
@liulhdarks, thanks for your kind help. I will follow your guidance. Best Regards
@liulhdarks @yusugomori In the original Python RBM.py, the updates to self.W and self.hbias are not correct. The same problem exists in the C/C++ files, although W has since been fixed there. These inconsistencies appear across your different RBM implementations.
[Python version]

```python
self.W += lr * (numpy.dot(self.input.T, ph_sample)
                - numpy.dot(nv_samples.T, nh_means))
```

[C++ version]

```cpp
W[i][j] += lr * (ph_mean[i] * input[j] - nh_means[i] * nv_samples[j]) / N;
```

The weight-update rules in these two versions are inconsistent: the Python version uses the sampled hidden states (`ph_sample`) in the positive phase, while the C++ version uses the hidden probabilities (`ph_mean`). I actually think the right version should be

```python
self.W += lr * (numpy.dot(self.input.T, ph_means)
                - numpy.dot(nv_means.T, nh_means))
```
Could you please help me confirm this? I am not sure whether it is actually a bug; I am new to deep learning.
Best Regards
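To make the question above concrete, here is a self-contained sketch of one full CD-1 step that applies the fix being proposed: hidden *probabilities* (not samples) in the positive phase, and sampled visibles paired with hidden probabilities in the negative phase. Function and variable names are illustrative, not the repo's actual code, and the bias updates follow the usual CD-1 convention as an assumption.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, hbias, vbias, v0, lr=0.1, rng=None):
    """One contrastive-divergence (CD-1) update for a Bernoulli RBM.

    W  : (n_visible, n_hidden) weights
    v0 : (N, n_visible) training batch
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = v0.shape[0]
    ph_mean = sigmoid(v0 @ W + hbias)                  # p(h=1 | v0)
    h0 = (rng.random(ph_mean.shape) < ph_mean) * 1.0   # sample h ~ p(h | v0)
    nv_mean = sigmoid(h0 @ W.T + vbias)                # p(v=1 | h0)
    v1 = (rng.random(nv_mean.shape) < nv_mean) * 1.0   # sample v ~ p(v | h0)
    nh_mean = sigmoid(v1 @ W + hbias)                  # p(h=1 | v1)
    # Positive phase uses probabilities (ph_mean), not samples (h0):
    W = W + lr * (v0.T @ ph_mean - v1.T @ nh_mean) / n
    hbias = hbias + lr * (ph_mean - nh_mean).mean(axis=0)
    vbias = vbias + lr * (v0 - v1).mean(axis=0)
    return W, hbias, vbias
```

Using probabilities instead of samples in the gradient reduces the variance of the update without changing its expectation, which is why most reference implementations prefer `ph_mean` over `ph_sample`.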