An easy-to-use, scalable hyperparameter optimization framework for Keras models with define-by-run syntax and built-in search algorithms.
KerasTuner is a hyperparameter tuning library for Keras that automates the search for optimal model configurations. It replaces tedious, error-prone manual tuning with scalable search algorithms and an intuitive define-by-run syntax, helping machine learning practitioners improve model performance through systematic experimentation.
Machine learning engineers and researchers using Keras/TensorFlow who need to optimize model hyperparameters systematically. It's particularly valuable for those building deep learning models where manual tuning is time-consuming.
Developers choose KerasTuner for its seamless Keras integration, built-in search algorithms, and easy-to-use API that reduces the complexity of hyperparameter optimization. Its extensible design also appeals to researchers wanting to experiment with custom search methods.
A Hyperparameter Tuning Library for Keras
Works directly with TensorFlow 2.0+ and Keras workflows: models are defined with a build function that receives an `hp` argument for declaring hyperparameters inline, as shown in the quick introduction.
Includes Bayesian Optimization, Hyperband, and Random Search out of the box, reducing implementation effort and providing robust options for different search scenarios.
Enables dynamic configuration of search spaces during model creation, making it flexible and intuitive for varying architectures without pre-defining static grids.
Designed with an architecture that allows easy implementation of custom search algorithms, appealing to researchers experimenting with new optimization methods.
Tightly coupled with Keras and TensorFlow, making it unsuitable for projects using other deep learning frameworks, which limits its versatility in mixed environments.
Distributed tuning is not turnkey: parallel searches require manual chief/worker configuration via environment variables, so large-scale runs take more setup than with competitors like Ray Tune.
Requires Python 3.8+ and TensorFlow 2.0+, which can cause compatibility issues in legacy or constrained environments.