Add TopK downselection for initial batch generation. #2636
Conversation
Couple of small nits, otherwise this looks great.
@Balandat has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Other initializers use Boltzmann sampling through `initialize_q_batch`.
That would make sense. I'd be interested in seeing a benchmark of how that affects the performance of SEBO.
I would be entirely happy not to merge this if it will end up as dead code, should the decision be to swap the initial-condition default for SEBO to Boltzmann sampling. The motivation here is that I want to use the polytope sampling from botorch so that I can add inequality constraints to SEBO; adding a bunch of switching logic there is bad for maintenance, so the minimal change that keeps the current behavior is to add top-k downselection here. The longer-term idea I had for the cleanest way to do it was based on #2610, where the SEBO default could just pass the …
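For context, a minimal sketch of the two downselection strategies under discussion, written in plain torch rather than against botorch's actual API (`downselect_topk` and `downselect_boltzmann` are illustrative names): top-k keeps the best `n` raw candidates deterministically, while Boltzmann sampling draws `n` candidates with probabilities that favor, but are not restricted to, high acquisition values.

```python
import torch

def downselect_topk(X: torch.Tensor, acq_vals: torch.Tensor, n: int) -> torch.Tensor:
    # Deterministically keep the n candidates with the highest acquisition values.
    idx = torch.topk(acq_vals, n).indices
    return X[idx]

def downselect_boltzmann(
    X: torch.Tensor, acq_vals: torch.Tensor, n: int, eta: float = 1.0
) -> torch.Tensor:
    # Sample n candidates without replacement, weighted by a softmax-style
    # transform of the standardized acquisition values: high-value candidates
    # are favored, but low-value ones can still be picked.
    z = (acq_vals - acq_vals.mean()) / acq_vals.std().clamp_min(1e-9)
    weights = torch.exp(eta * z)
    idx = torch.multinomial(weights, n, replacement=False)
    return X[idx]
```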
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@           Coverage Diff            @@
##             main    #2636    +/-   ##
=========================================
  Coverage   99.98%   99.98%
=========================================
  Files         196      198      +2
  Lines       17410    17910    +500
=========================================
+ Hits        17408    17908    +500
  Misses          2        2
```

☔ View full report in Codecov by Sentry.
test/optim/test_initializers.py (outdated)
```python
for dtype in (torch.float, torch.double):
    bounds = bounds.to(device=self.device, dtype=dtype)
    mock_acqf.X_baseline = bounds  # for testing sample_around_best
    mock_acqf.model = MockModel(MockPosterior(mean=bounds[:, :1]))
```
If you pull this into the `product()` below, you can save one indent.
Happy to change this, but it matches the other test cases, perhaps to avoid reassigning lines 289-291 in every iteration of the product. If I change it here, I'll change all the equivalent tests for consistency.
Fair enough. I think we can go either way here. FWIW the overhead of reassigning lines 289-291 in every iteration should be negligible.
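For readers outside the thread, the indentation question looks roughly like this (a sketch with made-up test values, not the actual test): the nested form runs the per-dtype setup once per dtype at the cost of one indent level, while folding `dtype` into `itertools.product` saves the indent but reruns the setup on every iteration.

```python
from itertools import product

import torch

# Nested form: per-dtype setup runs once, at the cost of an extra indent.
for dtype in (torch.float, torch.double):
    bounds = torch.zeros(2, 3, dtype=dtype)
    for nonnegative, seed in product([True, False], [None, 1234]):
        ...  # test body

# Flattened form: one indent less, but the setup reruns every iteration.
for dtype, nonnegative, seed in product(
    (torch.float, torch.double), [True, False], [None, 1234]
):
    bounds = torch.zeros(2, 3, dtype=dtype)
    ...  # test body
```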
Thank you! The tutorial failure is unrelated.
Could you update the documentation for the …?
Co-authored-by: Elizabeth Santorella <[email protected]>
@saitcakmak is there anything blocking this currently, or can it be merged so that facebook/Ax#2938 can hopefully be merged in the short term?
@saitcakmak has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Shouldn't be any blockers. I'll work with @Balandat to get this landed.
We have a test failure when running on GPU. It looks like CUDA and CPU tensors are being mixed together somewhere. `X` and `bounds` must be on different devices here. Since it is a single test failure, it could just be specific to that test.
```python
)
with warnings.catch_warnings(record=True) as ws:
    warnings.simplefilter("ignore", category=BadInitialCandidatesWarning)
    batch_initial_conditions = gen_batch_initial_conditions(
```
The failure is on this line. `X` is generated internally and `bounds` is on the correct device above. So, the code generating `X` must be keeping it on CPU.
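A minimal sketch of the class of fix implied by this diagnosis, assuming the internal sampler creates `X` with `torch.rand` (`sample_in_bounds` is a hypothetical helper, not the PR's code): any tensor-creating call should inherit `bounds.device` and `bounds.dtype` instead of defaulting to CPU.

```python
import torch

def sample_in_bounds(bounds: torch.Tensor, n: int) -> torch.Tensor:
    # bounds has shape 2 x d: bounds[0] are lower limits, bounds[1] upper.
    d = bounds.shape[-1]
    # Create raw samples directly on bounds' device/dtype so that downstream
    # ops mixing X and bounds never see a cpu/cuda mismatch.
    u = torch.rand(n, d, device=bounds.device, dtype=bounds.dtype)
    return bounds[0] + (bounds[1] - bounds[0]) * u
```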
Co-authored-by: Sait Cakmak <[email protected]>
@saitcakmak has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
@saitcakmak merged this pull request in c1eb255.
Motivation
In order to get facebook/Ax#2938 over the line with initial candidate generation that obeys the constraints, we want to use the existing tooling within botorch. The hard-coded logic currently in Ax uses top-k to downselect the Sobol samples. To make a change there that will not impact existing users, we therefore need to implement top-k downselection in botorch.
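As a rough end-to-end illustration of that pipeline (a sketch, not the PR's implementation; `acq_function` and `topk_initial_conditions` are placeholder names): draw quasi-random Sobol candidates within the bounds, score them with the acquisition function, and keep the top `n`.

```python
import torch
from torch.quasirandom import SobolEngine

def topk_initial_conditions(acq_function, bounds: torch.Tensor, n: int, raw_samples: int):
    d = bounds.shape[-1]
    u = SobolEngine(dimension=d, scramble=True).draw(raw_samples)
    u = u.to(device=bounds.device, dtype=bounds.dtype)
    X = bounds[0] + (bounds[1] - bounds[0]) * u  # scale to the box bounds
    with torch.no_grad():
        acq_vals = acq_function(X.unsqueeze(1))  # evaluate as q=1 batches
    idx = torch.topk(acq_vals, n).indices  # keep the n best candidates
    return X[idx].unsqueeze(1)  # shape: n x 1 x d
```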
Test Plan
TODO:
Related PRs
facebook/Ax#2938. (#2610 was initially intended to play a part in this solution, but then I realized that the pattern I wanted to use conflated repeats and the batch dimension.)