Question: Is it possible to utilize multiple cores when training (adding measurements)? #344
Comments
Hi @brandon-holt, as always, thanks for reporting the issue. The fact that merely adding measurements (i.e., without even recommending) causes delays is clearly suboptimal and needs to be fixed. Ideally, this should not be noticeable at all, but the current overhead stems from a design choice that we might need to rethink: it is probably caused by the process of "marking" the measured parameter configurations in the search space metadata. This process is currently by no means optimized for speed, and I see several potential ways around it that we'd need to discuss in our team (a sketch of the kind of bookkeeping involved follows below).
I suspect your search space is quite big, which is causing the delays? Can you give me a rough estimate of your dimensions so that I have something to work with?
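[Editorial note: a minimal sketch of the kind of bookkeeping described in the comment above. All names here are hypothetical illustrations, not BayBE internals. The point is that each new measurement must be located in the (potentially huge) discrete search space so its "measured" flag can be set; a naive row-by-row scan is O(n_measurements × n_searchspace), while a vectorized merge on the parameter columns is one possible way around it.]

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Stand-in for the experimental representation of a discrete search space.
searchspace = pd.DataFrame(
    rng.integers(0, 1000, size=(1_000_000, 5)),
    columns=[f"p{i}" for i in range(5)],
).drop_duplicates().reset_index(drop=True)

# Metadata flags: which configurations have already been measured.
measured = np.zeros(len(searchspace), dtype=bool)

# A batch of new measurements (parameter columns only).
measurements = searchspace.sample(8, random_state=0)

# Vectorized lookup: find each measurement's row in the search space via an
# inner merge on all parameter columns, then set the flags in one shot.
hits = searchspace.reset_index().merge(
    measurements, on=list(searchspace.columns)
)["index"]
measured[hits.to_numpy()] = True
print(measured.sum())  # 8
```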
@AdrianSosic I see, this is an interesting insight! Here is the size of a typical campaign search space I am working with: campaign.searchspace.discrete.comp_rep has shape (37324800, 191).
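[Editorial note: for scale, a back-of-the-envelope estimate of that representation's memory footprint, assuming float64 entries; the actual dtype used by BayBE may differ.]

```python
# Rough memory footprint of the reported comp_rep, assuming float64 entries.
rows, cols = 37_324_800, 191
bytes_total = rows * cols * 8          # 8 bytes per float64 value
print(f"~{bytes_total / 1e9:.0f} GB")  # ~57 GB
```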
Thanks for sharing. That is indeed already quite a bit. I'll take this into our team meeting and see what we can do about it. Perhaps we can find a quick fix for you... But priority-wise, a full fix could take a while since my focus is currently still on the surrogate / SHAP issue 😋
@AdrianSosic this issue needs to be updated to properly describe the cause of the computational bottleneck; otherwise I will convert it to a discussion.
Hi, I noticed that when adding measurements to a campaign object, only one core is being utilized. Is there a way to parallelize this process to decrease runtime? This is currently a very slow process for me.
By contrast, I noticed that when running the simulate_experiment module, all cores are in use. I know these are different processes, but I was just curious why that module can utilize multiple cores. Thanks!
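[Editorial note: one way to see what that single core spends its time on is to profile the call. This is a hypothetical sketch: `campaign` (a BayBE Campaign) and `df` (a DataFrame with one column per parameter plus the target columns) are assumed to already exist at module level.]

```python
# Profile the measurement-adding step to locate the bottleneck.
import cProfile

cProfile.run("campaign.add_measurements(df)", sort="cumtime")
```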