Pin optuna to latest version 3.6.1 #62

Open

wants to merge 1 commit into base: master
Conversation

pyup-bot (Collaborator) commented Apr 1, 2024

This PR pins optuna to the latest release 3.6.1.

Changelog

3.6.0

This is the release note of [v3.6.0](https://github.com/optuna/optuna/milestone/60?closed=1).

Highlights

Optuna 3.6 adds the following new features; a brief usage sketch follows the list below. See [our release blog](TBD) for more detailed information.
- Wilcoxon Pruner: New Pruner Based on Wilcoxon Signed-Rank Test
- Lightweight Gaussian Process (GP)-Based Sampler
- Speeding up Importance Evaluation with PED-ANOVA
- Stricter Verification Logic for FrozenTrial
- Refactoring the Optuna Dashboard
- Migration to Optuna Integration
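
As a quick illustration of two of the new APIs, here is a minimal sketch (not part of the changelog); the toy objective is an assumption, and `GPSampler` additionally requires `torch` and `scipy` to be installed.

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    y = trial.suggest_float("y", -10, 10)
    return x**2 + y**2  # toy objective, for illustration only


# GPSampler (lightweight GP-based sampler) and PedAnovaImportanceEvaluator are new in v3.6.
study = optuna.create_study(sampler=optuna.samplers.GPSampler(seed=0))
study.optimize(objective, n_trials=30)

importances = optuna.importance.get_param_importances(
    study, evaluator=optuna.importance.PedAnovaImportanceEvaluator()
)
print(importances)
```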

Breaking Changes

- Implement `optuna.terminator` using `optuna._gp` (5241)

These migration-related PRs do not break backward compatibility as long as optuna-integration v3.6.0 or later is installed in your environment; a minimal import sketch follows the list below.

- Move TensorBoard Integration (https://github.com/optuna/optuna-integration/pull/56, thanks dheemantha-bhat!)
- Delete TensorBoard integration for migration to `optuna-integration` (5161, thanks dheemantha-bhat!)
- Remove CatBoost integration for isolation (5198)
- Remove PyTorch integration (5213)
- Remove Dask integration (5222)
- Migrate the `sklearn` integration (5225)
- Remove BoTorch integration (5230)
- Remove `SkoptSampler` (5234)
- Remove the `cma` integration (5236)
- Remove the `wandb` integration (5237)
- Remove XGBoost Integration (5239)
- Remove MLflow integration (5246)
- Migrate LightGBM integration (5249)
- Add CatBoost integration (https://github.com/optuna/optuna-integration/pull/61)
- Add PyTorch integration (https://github.com/optuna/optuna-integration/pull/62)
- Add XGBoost integration (https://github.com/optuna/optuna-integration/pull/65, thanks buruzaemon!)
- Add `sklearn` integration (https://github.com/optuna/optuna-integration/pull/66)
- Move Dask integration (https://github.com/optuna/optuna-integration/pull/67)
- Migrate BoTorch integration (https://github.com/optuna/optuna-integration/pull/72)
- Move `SkoptSampler` (https://github.com/optuna/optuna-integration/pull/74)
- Migrate `pycma` integration (https://github.com/optuna/optuna-integration/pull/77)
- Migrate the Weights & Biases integration (https://github.com/optuna/optuna-integration/pull/79)
- Add LightGBM integration (https://github.com/optuna/optuna-integration/pull/81, thanks DanielAvdar!)
- Migrate `MLflow` integration (https://github.com/optuna/optuna-integration/pull/84)
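
For illustration, here is a hedged sketch of what the migration means in user code; `LightGBMPruningCallback` is just one example of a migrated class, and the exact top-level exports may differ between modules.

```python
# Integration modules now live in the separate optuna-integration package:
#     pip install optuna-integration
#
# With optuna-integration (v3.6.0+) installed, existing `optuna.integration`
# imports keep working; the same class can also be imported from the new
# package directly.
from optuna.integration import LightGBMPruningCallback      # old import path still works
# from optuna_integration import LightGBMPruningCallback    # new, direct import path
```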

New Features

- Backport the change of the timeline plot in Optuna Dashboard (5168)
- Wilcoxon pruner (5181)
- Add `GPSampler` (5185)
- Add a super quick f-ANOVA algorithm named PED-ANOVA (5212)

Enhancements

- Add `formats.sh` based on `optuna/master` (https://github.com/optuna/optuna-integration/pull/75)
- Use vectorization for categorical distance (5147)
- Unify implementation of fast non-dominated sort (5160)
- Raise `TypeError` if `params` is not a `dict` in `enqueue_trial` (5164, thanks adjeiv!)
- Upgrade `FrozenTrial._validate()` (5211)
- Import SQLAlchemy lazily (5215)
- Add UCB for `optuna._gp` (5224)
- Enhance performance of `GPSampler` (5274)
- Fix inconsistencies between terminator and its visualization (5276, thanks SimonPop!)
- Enhance `GPSampler` performance other than introducing local search (5279)

Bug Fixes

- Fix import path (https://github.com/optuna/optuna-integration/pull/83)
- Fix `README.md` (https://github.com/optuna/optuna-integration/pull/88)
- Fix `LightGBMTuner` test (https://github.com/optuna/optuna-integration/pull/89)
- Fix `JSONDecodeError` in `JournalStorage` (5195)
- Fix trial validation (5229)
- Make `gp.fit_kernel_params` more robust (5247)
- Fix checking value in `study.tell`  (5269, thanks ryota717!)
- Fix `_split_trials` of `TPESampler` for constrained optimization with constant liar (5298)
- Make each importance evaluator compatible with doc (5311)

Documentation

- Remove `study optimize` from CLI tutorial page (5152)
- Clarify the `GridSampler` with ask-and-tell interface (5153)
- Clean-up `faq.rst` (5170)
- Make Methods section hidden from Artifact Docs (5188)
- Enhance README (5189)
- Add a new section explaining how to customize figures (5194)
- Replace legacy `plotly.graph_objs` with `plotly.graph_objects` (5223)
- Add a note section to explain that reseed affects reproducibility (5233)
- Update links to papers (5235)
- Add a link to the module's example in the documentation for the `optuna.terminator` module (5243, thanks HarshitNagpal29!)
- Replace the old example directory (5244)
- Add Optuna Dashboard section to docs (5250, thanks porink0424!)
- Add a safety guard to Wilcoxon pruner, and modify the docstring (5256)
- Replace LightGBM with PyTorch-based example to remove `lightgbm` dependency in visualization tutorial (5257)
- Remove unnecessary comment in `Specify Hyperparameters Manually` tutorial page (5258)
- Add a tutorial of Wilcoxon pruner (5266)
- Clarify that pruners module does not support multi-objective optimization (5270)
- Minor fixes (5275)
- Add a guide to PED-ANOVA for `n_trials>10000` (5310)
- Minor fixes of docs and code comments for `PedAnovaImportanceEvaluator` (5312)
- Fix doc for `WilcoxonPruner` (5313)
- Fix doc example in `WilcoxonPruner` (5315)

Examples

- Remove Python 3.7 and 3.8 from tensorboard CI (https://github.com/optuna/optuna-examples/pull/231)
- Specify black version in the CI (https://github.com/optuna/optuna-examples/pull/232)
- Apply Black 2024 to codebase (https://github.com/optuna/optuna-examples/pull/236)
- Remove MXNet examples (https://github.com/optuna/optuna-examples/pull/237)
- Add an example of Wilcoxon pruner (https://github.com/optuna/optuna-examples/pull/238)
- Make Keras examples Keras 3 friendly (https://github.com/optuna/optuna-examples/pull/239)
- Remove a comment for keras that is not used anymore in this file (https://github.com/optuna/optuna-examples/pull/240)
- Use Keras 3 friendly syntax in MLflow example (https://github.com/optuna/optuna-examples/pull/242)
- Remove `-pre` option in the `rl` integration (https://github.com/optuna/optuna-examples/pull/243)
- Hotfix CI by adding version constraints to `dask` and `tensorflow` (https://github.com/optuna/optuna-examples/pull/245)

Tests

- Unify the implementation of `_create_frozen_trial()` under `testing` module (5157)
- Remove the Python version constraint for PyTorch (5278)

Code Fixes

- Fix unused (and unintended) import (https://github.com/optuna/optuna-integration/pull/68)
- Add Dask to `__init__.py` and fix its documentation generation (https://github.com/optuna/optuna-integration/pull/71)
- Replace `optuna.integration` with `optuna_integration` in the doc and the issue template (https://github.com/optuna/optuna-integration/pull/73)
- Fix the doc for TensorFlow (https://github.com/optuna/optuna-integration/pull/76)
- Add skopt dependency (https://github.com/optuna/optuna-integration/pull/78)
- Fastai readme fix (https://github.com/optuna/optuna-integration/pull/82, thanks DanielAvdar!)
- Fix `__init__.py` (https://github.com/optuna/optuna-integration/pull/86)
- Apply Black 2024 to codebase (https://github.com/optuna/optuna-integration/pull/87)
- Change the order of dependencies by name (https://github.com/optuna/optuna-integration/pull/92)
- Remove the deprecated decorator of `KerasPruningCallback` (https://github.com/optuna/optuna-integration/pull/93)
- Remove `UserWarning` by `tests/test_keras.py` (https://github.com/optuna/optuna-integration/pull/94)
- Refactor `TPESampler` for more clarity before c-TPE integration (5117)
- Fix `Checks(integration)` failure (5167)
- Fix type annotation of logging (5176)
- Update NamedTuple in `_ParzenEstimatorParameters` to more modern style (5193)
- Apply Black 2024 to codebase (5252)
- Simplify annotations in `optuna/study/_optimize.py` (5261, thanks shahpratham!)
- Unify and refactor `plot_timeline` test (5281)

Continuous Integration

- Remove non oldest and latest Python versions from tests (https://github.com/optuna/optuna-integration/pull/44)
- Fix flake8 failure in CI (https://github.com/optuna/optuna-integration/pull/55)
- Delete workflow dispatch input (https://github.com/optuna/optuna-integration/pull/57)
- Fix default branch (https://github.com/optuna/optuna-integration/pull/58)
- Fix coverage source path (https://github.com/optuna/optuna-integration/pull/60)
- Not use `black 24.*` (https://github.com/optuna/optuna-integration/pull/64)
- Simplify integration test (https://github.com/optuna/optuna-integration/pull/95)
- Hotfix the version of `botorch<0.10.` for CI failures (https://github.com/optuna/optuna-integration/pull/96)
- Hotfix the CI error by adding version constraint to dask (https://github.com/optuna/optuna-integration/pull/99)
- Fix tests with MPI (5166)
- Fix Checks (Integration) CI for NumPy 1.23.5 (5177)
- Add version constraint for black (5210)
- Skip the reproducibility tests for lightgbm (5214)
- Fix the errors in mypy for the `Checks (Integration)` CI (5217)
- Add a version constraint for Torch (5221)
- Hotfix mypy error in integration (5232)
- Skip `test_reproducible_in_other_process` for `GPSampler` with Python 3.12 (5251)
- Add CI settings to test Matplotlib without Plotly (5263, thanks DanielAvdar!)
- Unify indent size, two in toml file (5271)
- Follow up for split integrations (5277)
- Add a version constraint to `fakeredis` (5307)

Other

- Bump up version number to 3.6.0.dev (https://github.com/optuna/optuna-integration/pull/53)
- Bump up version number to 3.6.0 (https://github.com/optuna/optuna-integration/pull/100)
- Bump the version up to v3.6.0.dev (5143)
- Ignore auto generated files by Sphinx (5192)
- Delete `labeler.yml` to disable the `triage` action (5240)
- Bump up to version number 3.6.0 (5318)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

Alnusjaponica, DanielAvdar, HarshitNagpal29, HideakiImamura, SimonPop, adjeiv, buruzaemon, c-bata, contramundum53, dheemantha-bhat, eukaryo, gen740, hrntsm, knshnb, nabenabe0928, not522, nzw0301, porink0424, ryota717, shahpratham, toshihikoyanase, y0z

3.5.0

This is the release note of [v3.5.0](https://github.com/optuna/optuna/milestone/59?closed=1).

Highlights

This is a maintenance release with various bug fixes and improvements to the documentation and more.

Breaking Changes

- Isolate the fast.ai module from optuna (https://github.com/optuna/optuna-integration/pull/49, thanks sousu4!)
- Change `n_objectives` condition to be greater than 4 in candidates functions (5121, thanks adjeiv!)

New Features

- Support constraints in plot contour (4975, thanks y-kamiya!)
- Support infeasible coloring for plot_timeline (5014)
- Support `constant_liar` in multi-objective `TPESampler` (5021)
- Add `optuna study-names` cli (5029)
- Use `ExpectedHypervolumeImprovement` candidates function for `BotorchSampler` (5065, thanks adjeiv!)
- Fix logei_candidates_func in `botorch.py` (5094, thanks sousu4!)
- Report CV scores from within `OptunaSearchCV` (5098, thanks adjeiv!)

Enhancements

- Support `constant_liar` in multi-objective `TPESampler` (5021)
- Make positional args to kwargs in suggest_int (5044)
- Ensure n_below is never negative in TPESampler (5074, thanks p1kit!)
- Improve visibility of infeasible trials in `plot_contour` (5107)

Bug Fixes

- Fix random number generator of `NSGAIIChildGenerationStrategy` (5003)
- Return `trials` for above in MO split when `n_below=0` (5079)
- Enable loading of read-only files (5103, thanks Guillaume227!)
- Fix `logpdf` for scaled `truncnorm` (5110)
- Fix the bug of matplotlib's plot_rank function (5133)

Documentation

- Add the table of dependencies in each integration module (5005)
- Enhance the documentation of `LightGBM` tuner and separate `train()` from `__init__.py` (5010)
- Update link to reference (5064)
- Update the FAQ on reproducible optimization results to remove note on `HyperbandPruner` (5075, thanks felix-cw!)
- Remove `MOTPESampler` from `index.rst` file (5084, thanks Ashhar-24!)
- Add a note about the deprecation of `MOTPESampler` to the doc (5086)
- Add the TPE tutorial paper to the doc-string (5096)
- Update `README.md` to fix the installation and integration (5126)
- Clarify that `Recommended budgets` include `n_startup_trials` (5137)

Examples

- Update version syntax for PyTorch and PyTorch Lightning examples (https://github.com/optuna/optuna-examples/pull/205, thanks JustinGoheen!)
- Update import path (https://github.com/optuna/optuna-examples/pull/213)
- Bump up python versions (https://github.com/optuna/optuna-examples/pull/214)
- Add the simplest example directly to README (https://github.com/optuna/optuna-examples/pull/215)
- Add simple examples for multi-objective and constrained optimizations (https://github.com/optuna/optuna-examples/pull/216)
- Revise the comment to describe the problem (https://github.com/optuna/optuna-examples/pull/217)
- Modify simple examples based on the Optuna code conventions (https://github.com/optuna/optuna-examples/pull/218)
- Remove version specification of `jax` and `jaxlib` (https://github.com/optuna/optuna-examples/pull/223)
- Import examples from `optuna/optuna-dashboard` (https://github.com/optuna/optuna-examples/pull/224)
- Add `OptunaSearchCV` with terminator (https://github.com/optuna/optuna-examples/pull/225)
- Drop python 3.8 from haiku test (https://github.com/optuna/optuna-examples/pull/227)
- Run MXNet in Python 3.11 (https://github.com/optuna/optuna-examples/pull/228)

Tests

- Remove tests for allennlp and chainer (https://github.com/optuna/optuna-integration/pull/47)
- Reduce the warning in `tests/study_tests/test_study.py` (5070, thanks sousu4!)

Code Fixes

- Implement NSGA-III elite population selection strategy (5027)
- Fix import path of `PyTorchLightning` (5028)
- Fix `Any` with `float` in `_TreeNode.children` (5040, thanks aanghelidi!)
- Fix future annotation in `typing.py` (5054, thanks jot-s-bindra!)
- Add future annotations to callback and terminator files inside terminator folder (5055, thanks jot-s-bindra!)
- Fix future annotations to edf python file (5056, thanks Vaibhav101203!)
- Fix future annotations in _hypervolume_history.py (5057, thanks Vaibhav101203!)
- Reduce the warning in `tests/storages_tests/test_heartbeat.py` (5066, thanks sousu4!)
- Fix future annotation to `frozen.py` (5080, thanks Vaibhav101203!)
- Fix annotation for `dataframe.py` (5081, thanks Vaibhav101203!)
- Fix future annotation (5083, thanks Vaibhav101203!)
- Fix type annotation (5105)
- Fix mypy error in CI (5106)
- Isolate the fast.ai module (5120, thanks sousu4!)
- Clean up workflow file (5122)

Continuous Integration

- Run `test_tensorflow` in Python 3.11 (https://github.com/optuna/optuna-integration/pull/46)
- Exclude mypy checks for chainer (https://github.com/optuna/optuna-integration/pull/48)
- Support Python 3.12 on tests for core modules (5018)
- Fix the issue where formats.sh does not handle tutorial/ (5023, thanks sousu4!)
- Skip slow integration tests (5033)
- Install PyTorch for CPU on CIs (5042)
- Remove unused `type: ignore` (5047)
- Reduce `tests-mpi` to the oldest and latest Python versions (5067)
- Add workflow matrices for the tests to reduce GitHub check runtime (5093)
- Remove the skip of Python 3.11 in `tests-mpi` (5100)
- Downgrade kaleido to 0.1.0post1 for fixing Windows CI (5101)
- Rename `should-skip` to `test-trigger-type` for more clarity (5134)
- Pin the version of PyQt6-Qt6 (5135)
- Revert `Pin the version of PyQt6-Qt6` (5140)

Other

- Bump up version to v3.5.0.dev (https://github.com/optuna/optuna-integration/pull/43)
- Bump up version number to 3.5.0 (https://github.com/optuna/optuna-integration/pull/52)
- Bump the version up to v3.5.0.dev (5032)
- Remove email of authors (5078)
- Update the integration sections in `README.md` (5108)
- Pin mypy version to 1.6.* (5123)
- Remove `!examples` from `.dockerignore` (5129)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

Alnusjaponica, Ashhar-24, Guillaume227, HideakiImamura, JustinGoheen, Vaibhav101203, aanghelidi, adjeiv, c-bata, contramundum53, eukaryo, felix-cw, gen740, jot-s-bindra, keisuke-umezawa, knshnb, nabenabe0928, not522, nzw0301, p1kit, sousu4, toshihikoyanase, y-kamiya

3.4.0

This is the release note of [v3.4.0](https://github.com/optuna/optuna/milestone/58?closed=1).

Highlights

Optuna 3.4 adds the following new features; a brief usage sketch follows the list below. See [our release blog](https://medium.com/optuna/announcing-optuna-3-4-0087644c92fa) for more detailed information.

* Preferential Optimization (Optuna Dashboard)
* Optuna Artifact
* Jupyter Lab Extension
* VS Code Extension
* User-defined Distance for Categorical Parameters in TPE
* Constrained Optimization Support for Visualization Functions
* User-Defined Plotly’s Figure Support (Optuna Dashboard)
* 3D Model Viewer Support (Optuna Dashboard)
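
As a small illustration of the user-defined categorical distance, here is a minimal sketch; the objective and the distance function are assumptions made for the example.

```python
import optuna


def objective(trial):
    c = trial.suggest_categorical("c", [1, 2, 4, 8])
    return (c - 3) ** 2  # toy objective


# New in v3.4: tell TPESampler how close categorical choices are to each other.
sampler = optuna.samplers.TPESampler(
    categorical_distance_func={"c": lambda a, b: float(abs(a - b))},
    seed=0,
)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=30)
```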

Breaking Changes

- Remove deprecated arguments with regard to `LightGBM>=4.0` (4844)
- Deprecate `SkoptSampler` (4913)

New Features

- Support constraints for intermediate values plot (4851, thanks adjeiv!)
- Display all objectives on hyperparameter importances plot (4871)
- Implement `get_all_study_names()` (4898)
- Support constraints `plot_rank` (4899, thanks ryota717!)
- Support Study Artifacts (4905)
- Support specifying distance between categorical choices in `TPESampler` (4926)
- Add `metric_names` getter to study (4930)
- Add artifact middleware for exponential backoff retries (4956)
- Add `GCSArtifactStore` (4967, thanks semiexp!)
- Add `BestValueStagnationEvaluator` (4974, thanks smygw72!)
- Allow user-defined objective names in hyperparameter importance plots (4986)

Enhancements

- CHG constrained param displayed in cccccc (4877, thanks louis-she!)
- Faster implementation of fANOVA (4897)
- Support constraint in plot slice (4906, thanks hrntsm!)
- Add mimetype input (4910, thanks hrntsm!)
- Show all ticks in `_parallel_coordinate.py` when log scale (4911)
- Speed up multi-objective TPE (5017)

Bug Fixes

- Fix numpy indexing bugs and named tuple comparing (4874, thanks ryota717!)
- Fix `fail_stale_trials` with race condition (4886)
- Fix alias handler (4887)
- Add lazy random state and use it in `RandomSampler` (4970, thanks shu65!)
- Fix TensorBoard error on categorical choices of mixed types (4973, thanks ciffelia!)
- Use lazy random state in samplers (4976, thanks shu65!)
- Fix an error that does not consider `min_child_samples` (5007)
- Fix `BruteForceSampler` in parallel optimization (5022)

Documentation

- Fix typo in `_filesystem.py` (4909)
- Mention a pruner instance is not stored in a storage in resuming tutorial (4927)
- Add introduction of `optuna-fast-fanova` in documents (4943)
- Add artifact tutorial (4954)
- Fix an example code in `Boto3ArtifactStore`'s docstring (4957)
- Add tutorial for `JournalStorage` (4980, thanks semiexp!)
- Fix document regarding `ArtifactNotFound` (4982, thanks smygw72!)
- Add the workaround for duplicated samples to FAQ (5006)

Examples

- Add huggingface's link to external projects (https://github.com/optuna/optuna-examples/pull/201)
- Fix samplers CI (https://github.com/optuna/optuna-examples/pull/202)
- Set version constraint on aim (https://github.com/optuna/optuna-examples/pull/206)
- Add an example of Optuna Terminator for LightGBM (https://github.com/optuna/optuna-examples/pull/210, thanks hamster-86!)

Tests

- Reduce `n_trials` in `test_combination_of_different_distributions_objective` (4950)
- Replace the California housing dataset with the iris dataset (4953)
- Fix numpy duplication warning (4978, thanks torotoki!)
- Make test order deterministic for `pytest-xdist` (4999)

Code Fixes

- Move shap (https://github.com/optuna/optuna-integration/pull/32)
- Remove shap (4791)
- Use `isinstance` instead of `if type() is ...` (4896)
- Make `cmaes` dependency optional (4901)
- Call internal sampler's `before_trial` (4914)
- Refactor `_grid.py` (4918)
- Fix the `checks-integration` errors on LightGBMTuner (4923)
- Replace deprecated `botorch` method to remove warning (4940)
- Fix type annotation (4941)
- Add `_split_trials` instead of `_get_observation_pairs` and `_split_observation_pairs` (4947)
- Use `__future__.annotations` in `optuna/visualization/_optimization_history.py` (4964, thanks YuigaWada!)
- Fix 4508 for `optuna/visualization/_hypervolume_history.py` (4965, thanks RuTiO2le!)
- Use future annotation in `optuna/_convert_positional_args.py` (4966, thanks hamster-86!)
- Fix type annotation of `SQLAlchemy` (4968)
- Use `collections.abc` in `optuna/visualization/_edf.py` (4969, thanks g-tamaki!)
- Use `collections.abc` in plot pareto front (4971)
- Remove `experimental_func` from `metric_names` property (4983, thanks semiexp!)
- Add `__future__.annotations` to `progress_bar.py` (4992)
- Fix annotations in `optuna/optuna/visualization/matplotlib/_optimization_history.py` (5015, thanks sousu4!)

Continuous Integration

- Fix checks integration (4869)
- Remove fakeredis version constraint (4873)
- Support `asv` 0.6.0 (4882)
- Fix speed-benchmarks CI (4903)
- Fix Tests (MPI) CI (4904)
- Fix xgboost pruning callback (4921)
- Enhance speed benchmark (4981, thanks g-tamaki!)
- Drop Python 3.7 on `tests-mpi` (4998)
- Remove Python 3.7 from the development docker image build (5009)
- Use CPU version of PyTorch in Docker image (5019)

Other

- Bump up version number to v3.4.0.dev (https://github.com/optuna/optuna-integration/pull/37)
- Update python shield in `README.md` (https://github.com/optuna/optuna-integration/pull/39)
- Replace deprecated mypy option (https://github.com/optuna/optuna-integration/pull/40)
- Bump up version to v3.4.0 (https://github.com/optuna/optuna-integration/pull/42)
- Bump the version up to v3.4.0.dev (4861)
- Use OIDC (4867)
- Add `FUNDING.yml` (4912)
- Update `optional-dependencies` and document deselecting integration tests in `CONTRIBUTING.md` (4962)
- Bump the version up to v3.4.0 (5031)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

Alnusjaponica, HideakiImamura, RuTiO2le, YuigaWada, adjeiv, c-bata, ciffelia, contramundum53, cross32768, eukaryo, g-tamaki, g-votte, gen740, hamster-86, hrntsm, hvy, keisuke-umezawa, knshnb, lucasmrdt, louis-she, moririn2528, nabenabe0928, not522, nzw0301, ryota717, semiexp, shu65, smygw72, sousu4, torotoki, toshihikoyanase, xadrianzetx

3.3.0

This is the release note of [v3.3.0](https://github.com/optuna/optuna/milestone/57?closed=1).

Highlights

CMA-ES with Learning Rate Adaptation

A new variant of CMA-ES has been added. You can use it by setting the `lr_adapt` argument of `CmaEsSampler` to `True`. For multimodal and/or noisy problems, adapting the learning rate can help avoid getting trapped in local optima; a minimal sketch follows the figure below. For more details, please refer to 4817. We want to thank nomuramasahir0, one of the authors of LRA-CMA-ES, for his great work and the development of the [cmaes](https://github.com/CyberAgentAILab/cmaes) library.


<img width="513" alt="256118903-6796d0c4-3278-4d99-bdb2-00b6fe0fa13b" src="https://github.com/optuna/optuna/assets/5564044/50ed3200-2e02-4b10-8ad1-1f237cb3f3ea">
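
A minimal sketch of enabling the new algorithm (the toy objective is an assumption; in practice this is meant for multimodal or noisy problems, and the `cmaes` package must be installed):

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    y = trial.suggest_float("y", -10, 10)
    return x**2 + y**2  # replace with a multimodal / noisy objective in practice


# lr_adapt=True enables CMA-ES with Learning Rate Adaptation (LRA-CMA-ES).
sampler = optuna.samplers.CmaEsSampler(lr_adapt=True)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=200)
```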


Hypervolume History Plot for Multiobjective Optimization

In multiobjective optimization, the history of hypervolume is commonly used as an indicator of performance. Optuna now supports this feature in the visualization module. Thanks to y0z for your great work!

![246094447-f17d5961-216a-44b3-b9ce-715c105445a7](https://github.com/optuna/optuna/assets/5564044/36350c77-87e1-44e2-83e4-4a4a9480bfde)
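
A minimal sketch of plotting the hypervolume history for a two-objective study; the objectives and the reference point are assumptions, and the reference point should be worse than all obtained objective vectors.

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", 0, 5)
    y = trial.suggest_float("y", 0, 3)
    return 4 * x**2 + 4 * y**2, (x - 5) ** 2 + (y - 5) ** 2


study = optuna.create_study(directions=["minimize", "minimize"])
study.optimize(objective, n_trials=50)

fig = optuna.visualization.plot_hypervolume_history(study, reference_point=[150.0, 60.0])
fig.show()
```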


Constrained Optimization Support for Visualization Functions

| Plotly | matplotlib |
| --- | --- |
| ![constrained-optimization-history-plot (1)](https://github.com/optuna/optuna/assets/5564044/942316ac-0e04-4ff8-97a9-dea02fd45f9c) | <img width="1056" alt="254270811-e85c3c5e-44e5-4a04-ba8a-f6ea2c53611f (1)" src="https://github.com/optuna/optuna/assets/5564044/c043c79b-a6ad-46bc-92f5-fd54ee61f995"> |

Some samplers support constrained optimization; however, many other features cannot handle constraints yet. We are continuously enhancing support for constraints. In this release, `plot_optimization_history` starts to consider constraint violations. Thanks to hrntsm for your great work!

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", -15, 30)
    y = trial.suggest_float("y", -15, 30)
    v0 = 4 * x**2 + 4 * y**2
    trial.set_user_attr("constraint", [1000 - v0])
    return v0


def constraints_func(trial):
    return trial.user_attrs["constraint"]


sampler = optuna.samplers.TPESampler(constraints_func=constraints_func)
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=100)
fig = optuna.visualization.plot_optimization_history(study)
fig.show()
```



Streamlit Integration for Human-in-the-loop Optimization

<img width="1127" alt="streamlit_integration" src="https://github.com/optuna/optuna/assets/5564044/e8ea5d13-c834-4ed3-8c7b-24ab07c37105">

[Optuna Dashboard v0.11.0](https://github.com/optuna/optuna-dashboard/releases/tag/v0.11.0) provides the tight integration with [Streamlit](https://streamlit.io/) framework. By using this feature, you can create your own application for human-in-the-loop optimization. Please check out [the documentation](https://optuna-dashboard.readthedocs.io/en/latest/api.html#streamlit) and [the example](https://github.com/optuna/optuna-dashboard/tree/main/examples/streamlit_plugin) for details.


Breaking Changes

- Move mxnet (https://github.com/optuna/optuna-integration/pull/31)
- Remove mxnet (4790)
- Remove `ordered_dict` argument from `IntersectionSearchSpace` (4846)

New Features

- Add `logei_candidate_func` and make it default when available (4667)
- Support `JournalFileStorage` and `JournalRedisStorage` on CLI (4696)
- Implement hypervolume history plot for matplotlib backend (4748, thanks y0z!)
- Add `cv_results_` to `OptunaSearchCV` (4751, thanks jckkvs!)
- Add `optuna.integration.botorch.qnei_candidates_func` (4753, thanks kstoneriv3!)
- Add hypervolume history plot for `plotly` backend (4757, thanks y0z!)
- Add `FileSystemArtifactStore` (4763)
- Sort params on fetch (4775)
- Add constraints support to `_optimization_history_plot` (4793, thanks hrntsm!)
- Bump up `LightGBM` version to v4.0.0 (4810)
- Add constraints support to `matplotlib._optimization_history_plot` (4816, thanks hrntsm!)
- Introduce CMA-ES with Learning Rate Adaptation (4817)
- Add `upload_artifact` api (4823)
- Add `before_trial` (4825)
- Add `Boto3ArtifactStore` (4840)
- Display best objective value in contour plot for a given param pair, not the value from the most recent trial (4848)

Enhancements

- Speed up `logpdf` in `_truncnorm.py` (4712)
- Speed up `erf` (4713)
- Speed up `get_all_trials` in `InMemoryStorage` (4716)
- Add a warning for a progress bar not being displayed 4679 (4728, thanks rishabsinghh!)
- Make `BruteForceSampler` consider failed trials (4747)
- Use shallow copy in `_get_latest_trial` (4774)
- Speed up `plot_hypervolume_history` (4776)

Bug Fixes

- Solve issue 4557 - error_score (4642, thanks jckkvs!)
- Fix `BruteForceSampler` for pruned trials (4720)
- Fix `plot_slice` bug when some of the choices are numeric (4724)
- Make `LightGBMTuner` reproducible (4795)

Installation

- Bump up python version (https://github.com/optuna/optuna-integration/pull/34)

Documentation

- Remove `jquery-extension` (4691)
- Add FAQ on combinatorial search space (4723)
- Fix docs (4732)
- Add `plot_rank` and `plot_timeline` plots to visualization tutorial (4735)
- Fix typos found in `integration/sklearn.py` (4745)
- Remove `study.n_objectives` from document (4796)
- Add lower version constraint for `sphinx_rtd_theme` (4853)
- Artifact docs (4855)

Examples

- Run DaskML example with Python 3.11 (https://github.com/optuna/optuna-examples/pull/188)
- Show more information in terminator examples (https://github.com/optuna/optuna-examples/pull/192)
- Drop support for Python 3.7 on Haiku (https://github.com/optuna/optuna-examples/pull/198)
- Add `LICENSE` file (https://github.com/optuna/optuna-examples/pull/200)

Tests

- Remove unnecessary `pytestmark` (https://github.com/optuna/optuna-integration/pull/29)
- Add `GridSampler` test for failed trials (4721)
- Follow up PR 4642 by adding a unit test to confirm `OptunaSearchCV` behavior (4758)
- Fix `test_log_gass_mass` with SciPy 1.11.0 (4766)
- Fix Pytorch lightning unit test (4780)
- Remove skopt (4792)
- Rename test directory (4839)

Code Fixes

- Simplify the type annotations in `benchmarks` (4703, thanks caprest!)
- Unify sampling implementation in `TPESampler` (4717)
- Get values after `_get_observation_pairs` (4742)
- Remove unnecessary period (4746)
- Handle deprecated argument `early_stopping_rounds` (4752)
- Separate dominate function from `_fast_non_dominated_sort()` (4759)
- Separate `after_trial` strategy (4760)
- Remove unused attributes in `TPESampler` (4769)
- Remove `pkg_resources` (4770)
- Use trials as argument of `_calculate_weights_below_for_multi_objective` (4773)
- Fix type annotation (4797, thanks taniokay!)
- Follow up separation of after trial strategy (4803)
- Loose coupling nsgaii child generation (4806)
- Remove `_study_id` parameter from `Trial` class (4811, thanks adjeiv!)
- Loose coupling nsgaii elite population selection (4821)
- Fix checks integration (4826)
- Remove `OrderedDict` (4838, thanks taniokay!)
- Fix typo (4842, thanks wouterzwerink!)
- Followup child generation strategy (4856)
- Remove `samplers._search_space.IntersectionSearchSpace` (4857)
- Add experimental decorators to artifacts functionalities (4858)

Continuous Integration

- Output dependency tree (https://github.com/optuna/optuna-integration/pull/9)
- Use OIDC (https://github.com/optuna/optuna-integration/pull/33)
- Drop Python 3.7 support (https://github.com/optuna/optuna-integration/pull/35)
- Enhance speed benchmark for storages (4778)
- Drop Python 3.7 on `tests-integration` (4784)
- Remove unused `type:ignore`s (4787)
- Restrict numpy version < 1.24 (4788)
- Upgrade redis version (4805)
- Add version constraints on LightGBM (4807)
- Follow-up 4807 : Fix windows-tests and mac-tests (4809)
- Support 3.11 integration (4820)
- Support flake8 6.1.0 (4847)

Other

- Bump up version number to 3.3.0dev (https://github.com/optuna/optuna-integration/pull/27)
- Bump up version number to 3.3.0 (https://github.com/optuna/optuna-integration/pull/36)
- Bump up version number to 3.3.0dev (4710)
- Bump the version up to v3.3.0 (4860)


Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

Alnusjaponica, HideakiImamura, adjeiv, c-bata, caprest, contramundum53, cross32768, eukaryo, gen740, hrntsm, jckkvs, knshnb, kstoneriv3, nomuramasahir0, not522, nzw0301, rishabsinghh, taniokay, toshihikoyanase, wouterzwerink, xadrianzetx, y0z

3.2.0

This is the release note of [v3.2.0](https://github.com/optuna/optuna/milestone/54?closed=1).

Highlights

Human-in-the-loop optimization

With the latest release, we have incorporated support for human-in-the-loop optimization. It enables an interactive optimization process between users and the optimization algorithm. As a result, it opens up new opportunities for the application of Optuna in tuning Generative AI. For further details, please check out [our human-in-the-loop optimization tutorial](https://optuna-dashboard.readthedocs.io/en/latest/tutorials/hitl.html).

<img width="826" alt="human-in-the-loop-optimization" src="https://github.com/optuna/optuna/assets/3255979/cb03dd4d-2521-499c-bbe6-06dd7144fb4b">

_Overview of human-in-the-loop optimization. Generated images and sounds are displayed on [Optuna Dashboard](https://github.com/optuna/optuna-dashboard), and users can directly evaluate them there._

Automatic optimization terminator(Optuna Terminator)

Optuna Terminator is a new feature that quantitatively estimates the remaining room for optimization and automatically stops the optimization process. It is designed to alleviate the burden of figuring out an appropriate number of trials (`n_trials`) and to avoid unnecessarily consuming computational resources by running the optimization loop indefinitely. See [4398](https://github.com/optuna/optuna/issues/4398) and [optuna-examples#190](https://github.com/optuna/optuna-examples/pull/190); a minimal sketch follows the figure below.

![b5b752f2-5d2a-410b-a756-53f3d24acd82](https://github.com/optuna/optuna/assets/3255979/74c8833e-fe30-406e-90fc-ce9fece591f2)

_Transition of estimated room for improvement.  It steadily decreases towards the level of cross-validation errors._
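
A minimal sketch of using the terminator with cross-validation scores; the scikit-learn model is an assumption, and the default evaluator requires the Gaussian-process dependencies (e.g. `botorch`) to be installed.

```python
import optuna
from optuna.terminator import TerminatorCallback, report_cross_validation_scores
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score


def objective(trial):
    clf = RandomForestClassifier(max_depth=trial.suggest_int("max_depth", 2, 32))
    X, y = load_iris(return_X_y=True)
    scores = cross_val_score(clf, X, y, cv=5)
    report_cross_validation_scores(trial, scores)  # lets the terminator estimate the noise level
    return scores.mean()


study = optuna.create_study(direction="maximize")
study.optimize(objective, callbacks=[TerminatorCallback()])  # stops automatically
```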

New sampling algorithms

NSGA-III for many-objective optimization

We've introduced the NSGAIIISampler as a new multi-objective optimization sampler. It implements NSGA-III, which is an extended variant of NSGA-II, designed to efficiently optimize even when the dimensionality of the objective values is large (especially when it's four or more). NSGA-II had an issue where the search would become biased towards specific regions when the dimensionality of the objective values exceeded four. In NSGA-III, the algorithm is designed to distribute the points more uniformly. This feature was introduced by 4436. 

![219599007-8dc7a435-10e8-45cd-8b95-2b386b4642d5](https://github.com/optuna/optuna/assets/3255979/10b9c21d-bc24-4673-803b-a2fe9a8b401e)

_Objective value space for multi-objective optimization (minimization problem). Red points represent Pareto solutions found by NSGA-II. Blue points represent those found by NSGA-III. NSGA-II shows a tendency for points to concentrate towards each axis (corresponding to the ends of the Pareto Front). On the other hand, NSGA-III displays a wider distribution across the Pareto Front._
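
A minimal sketch of a four-objective study with `NSGAIIISampler`; the objective is an assumption chosen only to give four values to minimize.

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", 0, 1)
    y = trial.suggest_float("y", 0, 1)
    z = trial.suggest_float("z", 0, 1)
    # Four objectives: the many-objective setting NSGA-III is designed for.
    return x, y, z, (1 - x) + (1 - y) + (1 - z)


study = optuna.create_study(
    directions=["minimize"] * 4,
    sampler=optuna.samplers.NSGAIIISampler(seed=0),
)
study.optimize(objective, n_trials=100)
```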


BI-population CMA-ES

Continuing from v3.1, significant improvements have been made to the CMA-ES sampler. As a new feature, we've added the BI-population CMA-ES algorithm, a kind of restart strategy that mitigates the problem of falling into local optima. Whether the existing IPOP CMA-ES or the new BI-population CMA-ES works better depends on the problem, so if you're struggling with local optima, please try BI-population CMA-ES as well. For more details, please see 4464.

![221167904-809a1a17-7248-4f81-84fc-396d783e6548](https://github.com/optuna/optuna/assets/3255979/d211a0b9-ea61-4d4c-9c81-bea7987212e4)
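
A minimal sketch of switching the restart strategy; the toy objective is an assumption, and the `cmaes` package must be installed.

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    y = trial.suggest_float("y", -10, 10)
    return x**2 + y**2


# restart_strategy accepts "ipop" (existing) or "bipop" (new in this release).
sampler = optuna.samplers.CmaEsSampler(restart_strategy="bipop")
study = optuna.create_study(sampler=sampler)
study.optimize(objective, n_trials=200)
```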

New visualization functions

Timeline plot for trial life cycle

The timeline plot visualizes the progress (status, start and end times) of each trial. In this plot, the horizontal axis represents time, and trials are plotted in the vertical direction. Each trial is represented as a horizontal bar, drawn from the start to the end of the trial. With this plot, you can quickly get an understanding of the overall progress of the optimization experiment, such as whether parallel optimization is progressing properly or if there are any trials taking an unusually long time.

Similar to other plot functions, all you need to do is pass the study object to `plot_timeline`. For more details, please refer to 4470 and 4538.
![221496175-3f1b286a-ebdc-48d3-9cd7-2a01284e415a](https://github.com/optuna/optuna/assets/3255979/9183a6fe-4114-4daa-b0fc-0623981905c1)
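
A minimal sketch (the objective and `n_jobs` value are assumptions; parallel execution simply makes the timeline more interesting to look at):

```python
import optuna


def objective(trial):
    return trial.suggest_float("x", -10, 10) ** 2


study = optuna.create_study()
study.optimize(objective, n_trials=50, n_jobs=2)
fig = optuna.visualization.plot_timeline(study)
fig.show()
```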


Rank plot to understand input-output relationship

A new visualization feature, `plot_rank`, has been introduced. This plot provides valuable insights into landscapes of objective functions, i.e., relationship between parameters and objective values. In this plot, the vertical and horizontal axes represent the parameter values, and each point represents a single trial. The points are colored according to their ranks.

Similar to other plot functions, all you need to do is pass the study object to `plot_rank`. For more details, please refer to 4427 and 4541.

![blog-rank-plot](https://github.com/optuna/optuna/assets/3255979/92026899-f67e-4cdb-8eae-5e150bc7eb56)
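
A minimal sketch with an assumed toy objective:

```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    y = trial.suggest_float("y", -10, 10)
    return x**2 + y


study = optuna.create_study()
study.optimize(objective, n_trials=100)
fig = optuna.visualization.plot_rank(study)
fig.show()
```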

Isolating integration modules

We have separated Optuna's integration module into a different package called [optuna-integration](https://github.com/optuna/optuna-integration/). Maintaining many integrations within the Optuna package was becoming costly. By separating the integration module, we aim to improve the development speed of both Optuna itself and its integration module. As of the release of v3.2, we have migrated six integration modules: allennlp, catalyst, chainer, keras, skorch, and tensorflow (except for the TensorBoard integration). To use these integration modules, `pip install optuna-integration` is necessary. See [#4484](https://github.com/optuna/optuna/issues/4484).

- Move `chainermn` integration (https://github.com/optuna/optuna-integration/pull/1)
- Move `integration/keras.py` (https://github.com/optuna/optuna-integration/pull/5)
- Move `integration/allennlp` (https://github.com/optuna/optuna-integration/pull/8)
- Move Catalyst (https://github.com/optuna/optuna-integration/pull/19)
- Move `tf.keras` integration (https://github.com/optuna/optuna-integration/pull/21)
- Move `skorch` (https://github.com/optuna/optuna-integration/pull/22)
- Move `tensorflow` integration (https://github.com/optuna/optuna-integration/pull/23)
- Partially follow `sklearn.model_selection.GridSearchCV`'s arguments (4336)
- Delete `optuna.integration.ChainerPruningExtension` for migrating to optuna-integration package (4370)
- Delete `optuna.integration.ChainerMNStudy` for migrating to optuna-integration package (4497)
- Delete `optuna.integration.KerasPruningCallback` for migration to optuna-integration (4558)
- Delete `AllenNLP` integration for migration to optuna-integration (4579)
- Delete Catalyst integration for migration to optuna-integration (4644)
- Remove `tf.keras` integration (4662)
- Delete `skorch` integration for migration to optuna-integration (4663)
- Remove `tensorflow` integration (4666)

Starting support for Mac & Windows

We have started supporting Optuna on Mac and Windows. While many features already worked in previous versions, we have fixed issues that arose in certain modules, such as Storage. See 4457 and 4458.

Breaking Changes

- Update deletion timing of `system_attrs` and `set_system_attr` (https://github.com/optuna/optuna-integration/pull/4)
- Change deletion timing of `system_attrs` and `set_system_attr` (4550)

New Features

- Show custom objective names for multi-objective optimization (4383)
- Support DDP in `PyTorch-Lightning` (4384)
- Implement the evaluator of regret bounds and its GP backend for Optuna Terminator 🤖 (4401)
- Implement the termination logic and APIs of Optuna Terminator 🤖 (4405)
- Add rank plot (4427)
- Implement NSGA-III (4436)
- Add BIPOP-CMA-ES support in `CmaEsSampler` (4464)
- Add timeline plot with plotly as backend (4470)
- Move `optuna.samplers._search_space.intersection.py` to `optuna.search_space.intersection.py` (4505)
- Add timeline plot with matplotlib as backend (4538)
- Add rank plot matplotlib version (4541)
- Support batched sampling with BoTorch (4591, thanks kstoneriv3!)
- Add `plot_terminator_improvement` as visualization of `optuna.terminator` (4609)
- Add import for public API of `optuna.terminator` to `optuna/terminator/__init__.py` (4669)
- Add matplotlib version of `plot_terminator_improvement` (4701)

Enhancements

- Import `cmaes` package lazily (4394)
- Make `BruteForceSampler` stateless (4408)
- Sort studies by study_id (4414)
- Add index study_id column on trials table (4449, thanks Ilevk!)
- Cache all trials in Study with delayed relative sampling (4468)
- Avoid error at import time for `optuna.terminator.improvement.gp.botorch` (4483)
- Avoid standardizing `Yvar` in `_BoTorchGaussianProcess` (4488)
- Change the noise value in `_BoTorchGaussianProcess` to suppress warning messages (4510)
- Change the argument of `intersection_search_space` from `study` to `trials` (4514)
- Improve deprecated messages in the old suggest functions (4562)
- Add support for `distributed>=2023.3.2` (4589, thanks jrbourbeau!)
- Fix `plot_rank` marker lines (4602)
- Sync owned trials when calling `study.ask` and `study.get_trials` (4631)
- Ensure that the plotly version of timeline plot draws a legend even if all TrialStates are the same (4635)

Bug Fixes

- Fix `botorch` dependency (4368)
- Mitigate a blocking issue while running migrations with SQLAlchemy 2.0 (4386)
- Fix `colorlog` compatibility problem (4406)
- Validate length of values in `add_trial` (4416)
- Fix `RDBStorage.get_best_trial` when there are `inf`s (4422)
- Fix bug of CMA-ES with margin on `RDBStorage` or `JournalStorage` (4434)
- Fix CMA-ES Sampler (4443)
- Fix `param_mask` for multivariate TPE with `constant_liar` (4462)
- Make `QMCSampler` samplers reproducible with `seed=0` (4480)
- Fix noise becoming NaN for the terminator module (4512)
- Fix `metric_names` on `_log_completed_trial()` function (4594)
- Fix `ImportError` for `botorch<=0.4.0` (4626)
- Fix index of `n_retries += 1` in `RDBStorage` (4658)
- Fix CMA-ES with margin bug (4661)
- Fix a logic for invalidating the cache in `CachedStorage` (4670)
- Fix 4697 `ValueError`: Rank 0 node expects an `optuna.trial.Trial` instance as the trial argument (4698, thanks keisukefukuda!)
- Fix a bug reported in issue 4699 (4700)
- Add tests for `plot_terminator_improvement` and fix some bugs (4702)

Installation

- Remove codecov dependencies (https://github.com/optuna/optuna-integration/pull/13)
- Migration to `pyproject.toml` for packaging (4164)
- [RFC] Remove specific pytorch version to support the latest stable PyTorch (4585)

Documentation

- Create the document and run the test to create document in each PR (https://github.com/optuna/optuna-integration/pull/2)
- Fix Keras docs (https://github.com/optuna/optuna-integration/pull/12)
- Add links of documents (https://github.com/optuna/optuna-integration/pull/17)
- Load `sphinxcontrib.jquery` explicitly (https://github.com/optuna/optuna-integration/pull/18)
- Add docstring for the `Terminator` class (4596)
- Fix the build on Read the Docs by following optuna 4659 (https://github.com/optuna/optuna-integration/pull/20)
- Add external packages to `intersphinx_mapping` in `conf.py` (4290)
- Minor fix of documents (4360)
- Fix a typo in `MeanDecreaseImpurityImportanceEvaluator` (4385)
- Update to Sphinx 6 (4479)
- Fix URL to the line of optuna-integration file (4498)
- Fix typo (4515, thanks gituser789!)
- Resolve error in compiling PDF documents (4605)
- Add `sphinxcontrib.jquery` extension to `conf.py` (4615)
- Remove an example code of `SkoptSampler` (4625)
- Add links to the optuna-integration document (4638)
- Add manually written index page of tutorial (4640)
- Fix the build on Read the Docs (4659)
- Improve docstring of `rank_plot` function and its matplotlib version (4660)
- Add a link to tutorial of human-in-the-loop optimization (4665)
- Fix typo for progress bar in documentation (4673, thanks gituser789!)
- Add docstrings to `optuna.terminator` (4675)
- Add docstring for `plot_terminator_improvement` (4677)
- Remove `versionadded` directives (4681)
- Add pareto front display example: 2D-plot from 3D-optimization including crop the scale (4685, thanks gituser789!)
- Embed a YouTube video in the docstring of `DaskStorage` (4694)
- List Dashboard in navbar (4708)
- Fix docstring of terminator improvement for `min_n_trials` (4709)

Examples

- An example of using pytorch distributed data parallel on 1 machine with arbitrary multiple GPUs (https://github.com/optuna/optuna-examples/pull/155, thanks li-li-github!)
- Apply `black .` with black 23.1.0 (https://github.com/optuna/optuna-examples/pull/168)
- Add Aim example (https://github.com/optuna/optuna-examples/pull/170)
- Resolve todo and fix docstrings in fastaiv2 example (https://github.com/optuna/optuna-examples/pull/171)
- Update pytorch-lightning version (https://github.com/optuna/optuna-examples/pull/172)
- Add python 3.11 to ray's version matrix (https://github.com/optuna/optuna-examples/pull/174)
- Minor code change suggestions to `pytorch_distributed_spawn.py` (https://github.com/optuna/optuna-examples/pull/175)
- Install `optuna-integration` in `chainer` CI (https://github.com/optuna/optuna-examples/pull/176)
- Add python 3.11 skimage's version matrix and remove warning for inputs data (https://github.com/optuna/optuna-examples/pull/177)
- Execute Ray example in CI (https://github.com/optuna/optuna-examples/pull/178)
- Update pytorch lightning version for ddp (https://github.com/optuna/optuna-examples/pull/179)
- Don't run evaluation twice on the last epoch (https://github.com/optuna/optuna-examples/pull/181, thanks Jendker!)
- Use BoTorch 0.8 or higher (https://github.com/optuna/optuna-examples/pull/185)
- Run catboost example with python 3.11 (https://github.com/optuna/optuna-examples/pull/186)
- Add terminator examples (https://github.com/optuna/optuna-examples/pull/190)
- Use Gymnasium and pre-released Stable-Baselines3 (https://github.com/optuna/optuna-examples/pull/191)
- Fix the AllenNLP CI (https://github.com/optuna/optuna-examples/pull/193)

Tests

- Suppress `FutureWarning` about `Trial.set_system_attr` in storage tests (4323)
- Add test for casting in `test_nsgaii.py` (4387)
- Fix the blocking issue on `test_with_server.py` (4402)
- Fix mypy error about `Chainer` (4410)
- Add unit tests for the _BoTorchGaussianProcess class (4441)
- Implement unit tests for `optuna.terminator.improvement._preprocessing.py` (4506)
- Fix mypy error about `PyTorch Lightning` (4520)

Code Fixes

- Simplify type annotations (https://github.com/optuna/optuna-integration/pull/10)
- Copy `_imports.py` from optuna (https://github.com/optuna/optuna-integration/pull/16)
- Refactor ParzenEstimator (4183)
- Fix mypy error abut `AllenNLP` in Checks (integration) (4277)
- Fix checks integration about pytorch lightning (4322)
- Minor refactoring of `tests/hypervolume_tests/test_hssp.py` (4329)
- Remove unnecessary sklearn version condition (4379)
- Support black 23.1.0 (4382)
- Warn unexpected search spaces for `CmaEsSampler` (4395)
- Fix flake8 errors on sklearn integration (4407)
- Fix mypy error about `PyTorch Distributed` (4413)
- Use `numpy.polynomial` in `_erf.py` (4415)
- Refactor `_ParzenEstimator` (4433)
- Simplify an argument's name of `RegretBoundEvaluator` (4442)
- Fix `Checks(integration)` about `terminator/.../botorch.py` (4461)
- Add an experimental decorator to `RegretBoundEvaluator` (4469)
- Add JSON serializable type (4478)
- Move `optuna.samplers._search_space.group_decomposed.py` to `optuna.search_space.group_decomposed.py` (4491)
- Simplify annotations in `optuna.visualization` (4525, thanks harupy!)
- Simplify annotations in `tests.visualization_tests` (4526, thanks harupy!)
- Remove unused instance variables in `_BoTorchGaussianProcess` (4530)
- Avoid deepcopy in `optuna.visualization.plot_timeline` (4540)
- Use `SingleTaskGP` for Optuna terminator (4542)
- Change deletion timing of `optuna.samplers.IntersectionSearchSpace` and `optuna.samplers.intersection_search_space` (4549)
- Remove `IntersectionSearchSpace` in `optuna.terminator` module (4595)
- Change arguments of `BaseErrorEvaluator` and classes that inherit from it (4607)
- Delete `import Rectangle` in `visualization/matplotlib` (4620)
- Simplify type annotations in `visualize/_rank.py` and `visualization_tests/` (4628)
- Move the function `_distribution_is_log` to `optuna.distributions` from `optuna/terminator/__init__.py` (4668)
- Separate `_fast_non_dominated_sort()` from the samplers (4671)
- Read trials from remote storage whenever `get_all_trials` of `_CachedStorage` is called (4672)
- Remove experimental label from _ProgressBar (4684, thanks tungbq!)

Continuous Integration

- Fix coverage.yml (https://github.com/optuna/optuna-integration/pull/3)
- Delete labeler.yaml (https://github.com/optuna/optuna-integration/pull/6)
- Fix pypi publish.yaml (https://github.com/optuna/optuna-integration/pull/11)
- Test on an arbitrary branch (https://github.com/optuna/optuna-integration/pull/15)
- Fix the CI with AllenNLP (https://github.com/optuna/optuna-integration/pull/24)
- Update `actions/setup-python` v2 -> v4 (4307, thanks Kaushik-Iyer!)
- Update action versions (4328)
- Update `actions/setup-python` in `mac-tests` (follow-up for 4307) (4343)
- Add type ignore to `ProcessGroup` import from `torch.distributed` (4347)
- Fix label of `gh-action-pypi-publish` (4359)
- [Hotfix] Avoid to install SQLAlchemy 2.0 on `checks` (4364)
- [Hotfix] Add version constraint on SQLAlchemy for tests storage with server (4372)
- Disable colored log when `NO_COLOR` env or not tty (4376)
- Output installed packages in Tests CI (4381)
- Output installed packages in mac-test CI (4397)
- Use `ubuntu-latest` in PyPI publish CI (4400)
- Output installed packages in Checks CI (4417, thanks Kaushik-Iyer!)
- Output installed packages in Coverage CI (4423, thanks Kaushik-Iyer!)
- Fix mypy error on checks-integration CI (4424)
- Fix mac-test cache path (4425)
- Add minimum version tests of numpy, tqdm, colorlog, PyYAML (4428)
- Remove ignore test_pytorch_lightning (4432)
- Use `PyYAML==5.1` on `tests-with-minimum-dependencies` (4435)
- Remove trailing spaces in CI configs (4439)
- Output installed packages in all remaining CIs (4445, thanks Kaushik-Iyer!)
- Add windows ci check (4457)
- Make mac-test executed on PRs (4458)
- Add sqlalchemy<2.0.0 in `Checks(integration)` (4482)
- Fix ci test conditions (4496)
- Deploy results of visual regression test on Netlify (4507)
- Pin pytorch lightning version (4522)
- Securely deploy results of visual regression test on Netlify (4532)
- Pin `Distributed` version (4545)
- Delete fragile heartbeat test (4551)
- Ignore AllenNLP test from Mac-CI (4561)
- Delete visual-regression.yml (4597)
- Remove dependency on  `codecov` (4606)
- Install `test` in `checks-integration` CI (4612)
- Fix checks integration (4617)
- Add `Output dependency tree` by pipdeptree to Actions (4624)
- Add a version constraint on `fakeredis` (4637)
- Hotfix and run catboost test w/ python 3.11 except for MacOS (4646)
- Run `mlflow` with Python 3.11 (4647)

Other

- Update repository settings as in optuna/optuna (https://github.com/optuna/optuna-integration/pull/7)
- Bump up version to v3.2.0.dev (4345)
- Remove `cached-path` from `setup.py` (4357)
- Revert a merge commit for 4183 (4429)
- Include both venv and .venv in the exclude setting of the formatters (4476)
- Replace `hacking` with `flake8` (4556)
- Fix Codecov link (4564)
- Add `lightning_logs` to `.gitignore` (4565)
- Fix targets of `black` and `isort` in `formats.sh` (4610)
- Install `benchmark`, `optional`, and `test` in dev Docker image (4611)
- Provide a kind error message for missing `optuna-integration` (4636)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

Alnusjaponica, HideakiImamura, Ilevk, Jendker, Kaushik-Iyer, amylase, c-bata, contramundum53, cross32768, eukaryo, g-votte, gen740, gituser789, harupy, himkt, hvy, jrbourbeau, keisuke-umezawa, keisukefukuda, knshnb, kstoneriv3, li-li-github, nomuramasahir0, not522, nzw0301, toshihikoyanase, tungbq

3.2.0b0

This is the release note of [v3.2.0b0](https://github.com/optuna/optuna-integration/milestone/2).

See the [release note of Optuna v3.2.0](TBD) for more details.

3.1.1

This is the release note of [v3.1.1](https://github.com/optuna/optuna/milestone/56?closed=1).

Enhancements

- [Backport] Import `cmaes` package lazily (4573)

Bug Fixes

- [Backport] Fix botorch dependency (4569)
- [Backport] Fix param_mask for multivariate TPE with constant_liar (4570)
- [Backport] Mitigate a blocking issue while running migrations with SQLAlchemy 2.0 (4571)
- [Backport] Fix bug of CMA-ES with margin on `RDBStorage` or `JournalStorage` (4572)
- [Backport] Fix RDBStorage.get_best_trial when there are `inf`s (4574)
- [Backport] Fix CMA-ES Sampler (4581)

Code Fixes

- [Backport] Add `types-tqdm` for lint (4566)

Other

- Update version number to v3.1.1 (4567)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

HideakiImamura, contramundum53, not522

3.1.0

This is the release note of [v3.1.0](https://github.com/optuna/optuna/milestone/53?closed=1).

You do not have to read this note from top to bottom to get an overview of Optuna v3.1; the recommended way is to read [the release blog](https://medium.com/optuna/announcing-optuna-3-1-7b4c5fac227c).

Highlights

New Features

CMA-ES with Margin

> | CMA-ES | CMA-ES with Margin |
> | ------- | -------- |
> | ![CMA-ES](https://user-images.githubusercontent.com/5564044/195026193-467fc1bc-ec4b-4e98-a09f-2b2623d345bc.gif) | ![CMA-ESwM](https://user-images.githubusercontent.com/5564044/195026228-6e9e433b-9652-4bde-8191-2911db83891f.gif) |
> 
> “The animation is taken from https://github.com/EvoConJP/CMA-ES_with_Margin, which is distributed under the MIT license.”

CMA-ES achieves strong performance for continuous optimization, but there is still room for improvement in mixed-integer search spaces. To address this, we have added support for the "CMA-ES with Margin" algorithm to our `CmaEsSampler`, which makes it more efficient in these cases. You can see the benchmark results [here](https://github.com/CyberAgentAILab/cmaes/pull/121#issuecomment-1296691448). For more detailed information about CMA-ES with Margin, please refer to the paper “CMA-ES with Margin: Lower-Bounding Marginal Probability for Mixed-Integer Black-Box Optimization - [arXiv](https://arxiv.org/abs/2205.13482)”, which has been accepted for presentation at GECCO 2022.


```python
import optuna
from optuna.samplers import CmaEsSampler


def objective(trial):
    x = trial.suggest_float("x", -10, 10, step=0.1)
    y = trial.suggest_int("y", -100, 100)
    return x**2 + y


study = optuna.create_study(sampler=CmaEsSampler(with_margin=True))
study.optimize(objective)
```


Distributed Optimization via NFS

`JournalFileStorage`, a file storage backend based on `JournalStorage`, supports NFS (Network File System) environments. It is the easiest option for users who wish to run distributed optimization in environments where it is difficult to set up a database server such as MySQL, PostgreSQL, or Redis (e.g. 815, 1330, 1457, and 2216).

```python
import optuna
from optuna.storages import JournalStorage, JournalFileStorage


def objective(trial):
    x = trial.suggest_float("x", -100, 100)
    y = trial.suggest_float("y", -100, 100)
    return x**2 + y


storage = JournalStorage(JournalFileStorage("./journal.log"))
study = optuna.create_study(storage=storage)
study.optimize(objective)
```


For more information on `JournalFileStorage`, see the blog post [“Distributed Optimization via NFS Using Optuna’s New Operation-Based Logging Storage”](https://medium.com/optuna/distributed-optimization-via-nfs-using-optunas-new-operation-based-logging-storage-9815f9c3f932) written by wattlebirdaz.

A Brand-New Redis Storage

We have replaced the Redis storage backend with a `JournalStorage`-based one. The experimental `RedisStorage` class has been removed in v3.1. The following example shows how to use the new `JournalRedisStorage` class.

```python
import optuna
from optuna.storages import JournalStorage, JournalRedisStorage


def objective(trial):
    ...


storage = JournalStorage(JournalRedisStorage("redis://localhost:6379"))
study = optuna.create_study(storage=storage)
study.optimize(objective)
```


Dask.distributed Integration

`DaskStorage`, a new storage backend based on [Dask.distributed](https://distributed.dask.org/en/stable/), is supported. It lets you leverage distributed computing through APIs similar to those of `concurrent.futures`. `DaskStorage` can be used with `InMemoryStorage`, so you don't need to set up a database server. Here's a code example showing how to use `DaskStorage`:

```python
import optuna
from optuna.storages import InMemoryStorage
from optuna.integration import DaskStorage
from distributed import Client, wait


def objective(trial):
    ...


with Client("192.168.1.8:8686") as client:
    study = optuna.create_study(storage=DaskStorage(InMemoryStorage()))
    futures = [
        client.submit(study.optimize, objective, n_trials=10, pure=False)
        for i in range(10)
    ]
    wait(futures)
    print(f"Best params: {study.best_params}")
```


Setting up a Dask cluster is easy: install `dask` and `distributed`, then run the `dask scheduler` and `dask worker` commands, as detailed in the [Quick Start Guide](https://distributed.dask.org/en/stable/quickstart.html) in the Dask.distributed documentation. 

console
$ pip install optuna dask distributed

$ dask scheduler

$ dask worker <scheduler-address>

3.1.0b0

This is the release note of [v3.1.0-b0](https://github.com/optuna/optuna/milestone/47?closed=1).

Highlights

CMA-ES with Margin support

> | CMA-ES | CMA-ES with Margin |
> | ------- | -------- |
> | ![CMA-ES](https://user-images.githubusercontent.com/5564044/195026193-467fc1bc-ec4b-4e98-a09f-2b2623d345bc.gif) | ![CMA-ESwM](https://user-images.githubusercontent.com/5564044/195026228-6e9e433b-9652-4bde-8191-2911db83891f.gif) |
> 
> “The animation is taken from https://github.com/EvoConJP/CMA-ES_with_Margin, which is distributed under the MIT license.”

CMA-ES achieves strong performance for continuous optimization, but there is still room for improvement in mixed-integer search spaces. To address this, we have added support for the "CMA-ES with Margin" algorithm to our `CmaEsSampler`, which makes it more efficient in these cases. You can see the benchmark results [here](https://github.com/CyberAgentAILab/cmaes/pull/121#issuecomment-1296691448). For more detailed information about CMA-ES with Margin, please refer to the paper “CMA-ES with Margin: Lower-Bounding Marginal Probability for Mixed-Integer Black-Box Optimization - [arXiv](https://arxiv.org/abs/2205.13482)”, which has been accepted for presentation at GECCO 2022.


python
import optuna
from optuna.samplers import CmaEsSampler

def objective(trial):
    x = trial.suggest_float("x", -10, 10, step=0.1)
    y = trial.suggest_int("y", -100, 100)
    return x**2 + y

study = optuna.create_study(sampler=CmaEsSampler(with_margin=True))
study.optimize(objective, n_trials=20)


Distributed Optimization via NFS

`JournalFileStorage`, a file storage backend based on `JournalStorage`, supports NFS (Network File System) environments. It is the easiest option for users who want to run distributed optimization in environments where it is difficult to set up a database server such as MySQL, PostgreSQL, or Redis (e.g. 815, 1330, 1457 and 2216).

python
import optuna
from optuna.storages import JournalStorage, JournalFileStorage

def objective(trial):
 x = trial.suggest_float("x", -100, 100)
 y = trial.suggest_float("y", -100, 100)
 return x**2 + y

storage = JournalStorage(JournalFileStorage("./journal.log"))
study = optuna.create_study(storage=storage)
study.optimize(objective, n_trials=20)


For more information on `JournalFileStorage`, see the blog post [“Distributed Optimization via NFS Using Optuna’s New Operation-Based Logging Storage”](https://medium.com/optuna/distributed-optimization-via-nfs-using-optunas-new-operation-based-logging-storage-9815f9c3f932) written by wattlebirdaz.



Dask Integration

`DaskStorage`, a new storage backend based on [Dask.distributed](https://distributed.dask.org/en/stable/), is supported. It enables distributed optimization through an API similar to `concurrent.futures`. An example is shown below (the full code is available in the [optuna-examples](https://github.com/optuna/optuna-examples/blob/main/dask/dask_simple.py) repository).

python
import optuna
from optuna.storages import InMemoryStorage
from optuna.integration import DaskStorage
from distributed import Client, wait

def objective(trial):
 ...

with Client("192.168.1.8:8686") as client:
 study = optuna.create_study(storage=DaskStorage(InMemoryStorage()))
 futures = [
     client.submit(study.optimize, objective, n_trials=10, pure=False)
     for i in range(10)
 ]
 wait(futures)
 print(f"Best params: {study.best_params}")


One notable aspect is the availability of `InMemoryStorage`: you don't need to set up a database server for distributed optimization. You still need to set up the Dask.distributed cluster, but that is straightforward, as shown below. See the [Quickstart of the Dask.distributed documentation](https://distributed.dask.org/en/stable/quickstart.html) for more details.

console
$ pip install optuna dask distributed

$ dask-scheduler

$ dask-worker <scheduler-address>

3.0.6

This is the release note of [v3.0.6](https://github.com/optuna/optuna/milestone/55?closed=1).

Installation

- Fix a project metadata for scipy version constraint (4494)

Other

- Bump up version number to v3.0.6 (4493)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

c-bata, HideakiImamura

3.0.5

This is the release note of [v3.0.5](https://github.com/optuna/optuna/milestone/52?closed=1).

Bug Fixes

- [Backport] Fix bugs in `constant_liar` option (4257)

Other

- Bump up version number to 3.0.5 (4256)


Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

HideakiImamura, eukaryo, toshihikoyanase

3.0.4

This is the release note of [v3.0.4](https://github.com/optuna/optuna/milestone/51?closed=1).


Bug Fixes

- [Backport] Specify object type to numpy array init to avoid unintended str cast (4218)

Other

- Bump up version to v3.0.4 (4214)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

HideakiImamura, contramundum53

3.0.3

This is the release note of [v3.0.3](https://github.com/optuna/optuna/milestone/50?closed=1).

Enhancements

- [Backport] Fix the data migration script for v3 release (4053)

Bug Fixes

- [Backport] Skip to add `intermediate_value_type` and `value_type` columns if exists (4052)

Installation

- Backport 4036 and 4044 to pass tests on `release-v3.0.3` branch (4043)

Other

- Bump up version to v3.0.3 (4041)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

c-bata, contramundum53

3.0.2

This is the release note of [v3.0.2](https://github.com/optuna/optuna/milestone/49?closed=1).

Highlights

Bug fix for DB migration with SQLAlchemy v1.3

In v3.0.0 and v3.0.1, DB migration fails with SQLAlchemy v1.3. This issue is fixed in v3.0.2.
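
For reference, here is a minimal sketch of how a schema migration is typically run on an existing RDB-backed storage; the SQLite URL below is only an illustration, and the `optuna storage upgrade --storage <URL>` CLI command does the same thing.

python
from optuna.storages import RDBStorage

# Hypothetical SQLite URL; replace it with your own MySQL/PostgreSQL/SQLite URL.
storage = RDBStorage("sqlite:///example.db", skip_compatibility_check=True)

# Apply the schema migration (the step that failed under SQLAlchemy v1.3
# before this fix).
storage.upgrade()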

Removing typing-extensions from dependency

In v3.0.0, typing-extensions was used for fine-grained type checking. However, that resulted in import failures when using older versions of typing-extensions. We made the dependency optional in v3.0.2.

Bug Fixes

- [Backport] Merge pull request 3967 from c-bata/fix-issue-3966 (4004)

Installation

- [Backport] Merge pull request 3990 from c-bata/make-typing-extensions-optional (4005)

Others

- Bump up version number to v3.0.2 (3991)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

contramundum53, c-bata

3.0.1

This is the release note of [v3.0.1](https://github.com/optuna/optuna/milestone/48?closed=1).

Highlights

Bug fix for `GridSampler` with RDB

In v3.0.0, using `GridSampler` with an RDB storage raises an error. This patch fixes that combination.
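
For illustration, here is a minimal sketch of the now-working combination, assuming a local SQLite file as the RDB backend (the file name is hypothetical).

python
import optuna
from optuna.samplers import GridSampler

def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return x**2

# GridSampler exhaustively evaluates the given search space;
# the study is persisted in a hypothetical local SQLite database.
search_space = {"x": [-5.0, 0.0, 5.0]}
study = optuna.create_study(
    sampler=GridSampler(search_space),
    storage="sqlite:///gridsampler_example.db",
)
study.optimize(objective, n_trials=3)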

Bug Fixes

- Backport 3957 (3972)

Others

- Bump up version number to v3.0.1 (3973)

Thanks to All the Contributors!

This release was made possible by the authors and the people who participated in the reviews and discussions.

HideakiImamura, contramundum53, not522

3.0

Removed deprecated features:

- `optuna dashboard` command (3058)
- `optuna.structs` module (3057)
- `best_booster` property of `LightGBMTuner` (3057)
- `type_checking` module (3235)

Minor breaking changes:

- Add option to exclude best trials from study summaries (3109)
- Move validation logic from `_run_trial` to `study.tell` (3144)
- Use an enqueued parameter that is out of range from suggest API (3298)
- Fix distribution compatibility for linear and logarithmic distribution (3444)
- Remove `get_study_id_from_trial_id`, the 
