A hyperparameter optimization framework for machine learning with a define-by-run API for dynamic search spaces.
Optuna is a hyperparameter optimization framework that automates the search for the best parameters of machine learning models. Instead of relying on manual tuning, it efficiently explores search spaces and prunes unpromising trials, accelerating experimentation and improving model performance.
Machine learning researchers, data scientists, and engineers who need to optimize model hyperparameters for tasks like classification, regression, and deep learning.
Developers choose Optuna for its define-by-run API, which allows dynamic construction of search spaces and offers high modularity. It provides state-of-the-art optimization algorithms, easy parallelization, and integration with popular ML libraries, making it a versatile and efficient choice.
The define-by-run API lets you construct search spaces with Python conditionals and loops, enabling complex, adaptive tuning beyond static configurations.
It uses state-of-the-art pruning to halt unpromising trials early, saving significant computational time and resources during optimization.
Parallelization to hundreds of workers requires minimal code changes, as highlighted in the distributed optimization examples.
Built-in plotting functions allow quick inspection of optimization histories and hyperparameter importance, aiding in analysis without extra tools.
Seamless integration with frameworks like PyTorch, TensorFlow, and scikit-learn, as listed in the supported integrations, reduces setup friction.
Optuna is tightly coupled to Python, limiting its use in multi-language environments or projects that rely on other programming stacks.
Distributed studies depend on a relational database backend such as SQLite, and, as the dashboard example shows, that extra storage setup can be cumbersome for quick experiments.
For hyperparameter tuning with very few, fixed parameters, the framework introduces unnecessary complexity compared to simpler scripts or manual methods.
Advanced features such as multi-objective optimization and custom samplers carry a learning curve, requiring a dive into the separate tutorials and extensive documentation.