add AverageLearner1D and AverageLearner2D #143
base: main
Conversation
@anton I have implemented what you suggested in chat:
The problem now is that once a point has many "seeds", increasing the number of seeds by 10% gives a big loss improvement, probably the biggest, so the number of values at that point grows very large. Conceptually this shouldn't happen, so I probably made a mistake in the following method:

```python
from math import sqrt

def loss_per_existing_point(self):
    """Increase the number of seeds by 10%."""
    if len(self.data) < 4:
        return [], []
    scale = self.value_scale()
    points = []
    loss_improvements = []
    neighbors = self._get_neighbor_mapping_existing_points()
    mean_values_per_neighbor = self._mean_values_per_neighbor(neighbors)
    for p, sem in self.data_sem.items():
        n_neighbors = mean_values_per_neighbor[p]
        N = self.n_values(p)
        n_more = int(1.1 * N)  # increase the number of points by 10%
        points.append((p, n_more))
        # This is the improvement considering we will add
        # n_more seeds to the stack.
        sem_improvement = (1 / sqrt(N) - 1 / sqrt(N + n_more)) * sem
        loss_improvement = self.weight * sem_improvement / scale  # XXX: do I need to divide by the scale?
        loss_improvements.append(loss_improvement)
    return points, loss_improvements
```
If you increase the number of points by 10%, the rms at the point drops by 5%; why would this be the biggest loss improvement?
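A quick check of that back-of-the-envelope number: the standard error of the mean scales as 1/√N, so the relative reduction from adding 10% more samples at a point is a constant, independent of N (a small self-contained sketch, not part of the learner code):

```python
from math import sqrt

def sem_reduction(extra_fraction):
    """Fractional drop in the standard error of the mean when the
    number of samples N grows to (1 + extra_fraction) * N.

    sem ~ 1/sqrt(N), so the drop is 1 - 1/sqrt(1 + extra_fraction),
    which does not depend on N itself.
    """
    return 1 - 1 / sqrt(1 + extra_fraction)

print(f"{sem_reduction(0.10):.3%}")  # about 4.7%, i.e. roughly the quoted 5%
```

Since the reduction factor is the same for every point, a point with many seeds should not be favoured over one with few; the growing `n_more` in the loop above is therefore a plausible culprit.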
(original merge request on GitLab)
opened by Bas Nijholt (@basnijholt) at 2018-06-05T21:40:00.078Z
This merge request implements a Learner2D that can learn averages on the points, the `AverageLearner2D`. When choosing points, the learner can either add a new point or take another sample of an existing point. The learner compares the loss of potential new triangles with the standard error of an existing point. The relative importance of both can be adjusted by a hyperparameter `learner.weight`. From the doc-string:
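To illustrate how such a weighted comparison could work, here is a minimal sketch (the function and argument names are hypothetical, not the actual `adaptive` API): candidate new triangles and existing noisy points are merged into one ranked list, with `weight` scaling how strongly the standard error competes with the geometric loss.

```python
def rank_candidates(triangle_losses, point_sems, weight):
    """Merge interpolation losses of candidate new points with the
    weighted standard errors of existing points into one ranking.

    triangle_losses: {candidate_point: loss of the triangle it refines}
    point_sems:      {existing_point: standard error of its mean}
    weight:          importance of noise vs. geometry; weight=0 means
                     existing points are never re-sampled.
    """
    candidates = dict(triangle_losses)
    for p, sem in point_sems.items():
        # An existing point competes with its weighted standard error.
        candidates[p] = max(candidates.get(p, 0.0), weight * sem)
    # Highest combined loss first: that candidate is chosen next.
    return sorted(candidates, key=candidates.get, reverse=True)
```

With a large `weight`, noisy existing points win and get re-sampled; with a small one, the learner behaves like a plain `Learner2D` and keeps refining triangles.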
All tests that pass for the `Learner2D` currently pass for the `AverageLearner2D` too.

The `δy` between neighbouring points is comparable to `std(y)`. This is best to do in the 1D learner. See reference/adaptive.learner.average1D.html.
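That criterion suggests a simple decision rule: only place a new point when the jump between neighbouring means exceeds the sampling noise, and otherwise re-sample to reduce the noise. A hedged sketch of that heuristic (names are illustrative, not the learner's actual methods):

```python
import statistics

def prefer_new_point(y_left, y_right, samples):
    """Return True when the difference between neighbouring mean values
    (delta-y) exceeds the spread of the samples at a point, i.e. when
    refining with a new point is more useful than re-sampling.

    samples: the raw function values collected at one of the points.
    """
    std_y = statistics.stdev(samples)  # noise level at the point
    return abs(y_right - y_left) > std_y
```

If `prefer_new_point` is False, the apparent structure between the neighbours is within the noise, so taking more samples of the existing points is the better use of a function evaluation.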