How to Use a Genetic Algorithm for Hyperparameter Tuning of ML Models

aNumak & Company
Jul 16, 2022


In the simplest case, all hyperparameters take unconstrained real values, and the feasible set of hyperparameters is an n-dimensional real-valued vector space. However, because an ML model’s hyper-parameters can take values from several different domains and carry distinct constraints, hyper-parameter optimization is frequently a complicated constrained optimization problem.

For example, in a decision tree, the number of features considered at each split must lie between one and the total number of features, and the number of clusters in k-means must not exceed the number of data points. Furthermore, categorical hyperparameters, such as the activation function and optimizer of a neural network, can often only take a few specific values. As a result, the feasible domain of a set of hyperparameters often has a complicated structure, which increases the difficulty of the problem.
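To make the mixed structure of such a domain concrete, here is a minimal sketch of a search-space definition; the hyper-parameter names, ranges, and the tuple convention are illustrative assumptions rather than any particular library’s API:

```python
# A hypothetical search space mixing domain types: because the entries live in
# different kinds of sets, the feasible region is not a simple real-valued
# vector space.
search_space = {
    "learning_rate": ("continuous", 1e-4, 1e-1),         # real-valued interval
    "n_layers":      ("integer", 1, 10),                 # integer range
    "activation":    ("categorical", ["relu", "tanh"]),  # finite set of options
    "optimizer":     ("categorical", ["adam", "sgd"]),   # finite set of options
}
```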

There are four primary components to the hyper-parameter tuning process; the sketch after the list shows how they map onto code.

● An estimator with an objective function.

● A search space.

● A search or optimization method for generating hyper-parameter combinations.

● An evaluation function for comparing the performance of various hyper-parameter combinations.
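As one concrete example, scikit-learn’s GridSearchCV makes all four components explicit; the estimator and parameter grid below are illustrative choices:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

search = GridSearchCV(
    estimator=DecisionTreeClassifier(),   # estimator with an objective function
    param_grid={                          # search space
        "max_depth": [3, 5, 10],
        "criterion": ["gini", "entropy"],
    },
    scoring="accuracy",                   # evaluation function
    cv=5,
)
# The search/optimization method here is exhaustive grid search;
# calling search.fit(X, y) runs it and exposes search.best_params_.
```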

How hyperparameter tuning works

Hyperparameter optimization aims to reach optimal or near-optimal model performance by adjusting hyper-parameters within budget constraints. The mathematical formulation of this objective varies with the chosen ML algorithm and the performance metric. Model performance may be measured with a variety of metrics, including accuracy, RMSE, F1-score, and false alarm rate. In practice, however, the time budget is an essential constraint and must be taken into account: evaluating even a modest number of hyper-parameter configurations frequently takes a long time.
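A minimal sketch of such a budget-constrained evaluation loop, assuming scikit-learn for cross-validation; the function names and the 60-second default budget are illustrative assumptions:

```python
import time
from sklearn.model_selection import cross_val_score

def objective(model_class, params, X, y):
    """Score one hyper-parameter configuration by mean cross-validated accuracy."""
    return cross_val_score(model_class(**params), X, y, cv=3,
                           scoring="accuracy").mean()

def tune(model_class, candidate_configs, X, y, time_budget_s=60.0):
    """Evaluate candidate configurations until the time budget runs out."""
    best_params, best_score = None, float("-inf")
    start = time.monotonic()
    for params in candidate_configs:
        if time.monotonic() - start > time_budget_s:
            break  # stop once the time budget is exhausted
        score = objective(model_class, params, X, y)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```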

The primary process of hyperparameter optimization is as follows:

1. Choose an objective function and performance metrics.

2. Identify the hyper-parameters that need to be tuned, specify their types, and choose a suitable optimization approach.

3. Train the ML model with the default hyper-parameter configuration or commonly used values as the baseline model.

4. Begin the optimization process by selecting a broad search space as the likely hyper-parameter domain based on manual testing and domain expertise.

5. Narrow the search space to the regions where tested hyper-parameter values perform well or, if required, explore additional search spaces (see the sketch after this list).

6. As the final answer, return the best-performing hyper-parameter configuration.
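A sketch of the narrowing in steps 4-5, under assumed simplifications: a single real-valued hyper-parameter, uniform random sampling, and a fixed shrink factor, none of which come from a standard recipe:

```python
import random

def random_search(low, high, n_trials, loss):
    """Sample hyper-parameter values uniformly and return the best one."""
    trials = [random.uniform(low, high) for _ in range(n_trials)]
    return min(trials, key=loss)

def coarse_to_fine(loss, low=1e-4, high=1.0, rounds=3, n_trials=20, shrink=0.25):
    """Repeatedly re-centre a tighter interval around the best value so far."""
    best = None
    for _ in range(rounds):
        best = random_search(low, high, n_trials, loss)
        width = (high - low) * shrink
        low, high = max(low, best - width), min(high, best + width)
    return best

# Toy quadratic loss with its minimum at 0.3, standing in for validation error.
print(coarse_to_fine(lambda lr: (lr - 0.3) ** 2))
```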

How is a genetic algorithm used in hyperparameter optimization?

One of the most prevalent metaheuristic algorithms is the genetic algorithm (GA), which is based on the evolutionary idea that individuals with the greatest survival potential and adaptation to the environment are more likely to survive and pass on their traits to future generations. The traits of the parents are passed on to the following generation, which may include both fitter and less fit individuals. Fitter individuals are more likely to survive and produce even more capable offspring, whereas the least fit progressively die out. After multiple generations, the individual with the greatest adaptability is taken as the global optimum.

To apply GA to hyperparameter optimization problems, each chromosome or individual represents a hyper-parameter configuration, and its decoded value gives the hyper-parameter’s real input value in each evaluation. Every chromosome consists of multiple genes, which are binary digits, and these genes are subsequently subjected to crossover and mutation operations. The population covers all potential values within the initialized chromosome/parameter ranges, whereas the fitness function encodes the parameter evaluation metric.
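As a small illustration of this binary encoding, a chromosome of bits can be decoded into a real hyper-parameter value; the 8-bit resolution and the learning-rate range are assumptions for the example:

```python
def decode(genes, low, high):
    """Map a binary chromosome onto a real hyper-parameter value in [low, high]."""
    as_int = int("".join(map(str, genes)), 2)
    return low + (high - low) * as_int / (2 ** len(genes) - 1)

# An 8-gene chromosome encoding a learning rate between 1e-4 and 1e-1.
chromosome = [1, 0, 1, 1, 0, 0, 1, 0]
print(decode(chromosome, 1e-4, 1e-1))  # ~0.0698
```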

Since the initial random parameter values frequently do not include the optimal ones, additional operations, namely selection, crossover, and mutation, must be performed on the well-performing chromosomes to discover the optimum. Selection keeps chromosomes with high fitness values: to hold the population size constant, chromosomes with high fitness are more likely to be carried into the next generation, where they produce new chromosomes combining the best traits of their parents. A genetic algorithm thus solves the optimization problem whenever you need to find the parameters that minimize some loss function. Genetic algorithms belong to the larger family of evolutionary algorithms; the idea is inspired by nature and natural selection.

1. First, generate your initial population of ML models with randomly chosen hyperparameters.

2. Calculate your loss function for each model, for example, log-loss.

3. Then select some number of the models with the lowest error.

4. Now create offspring: build a population of new ML models based on the best models from the previous generation, slightly changing their hyperparameters. The new population will consist of models from the prior population and freshly generated models in some proportion, for example, 50/50.

5. Calculate your loss function, sort your models, and repeat the process; a runnable sketch of the whole loop follows below.
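Putting the five steps together, here is a minimal, self-contained sketch of GA-based tuning, assuming scikit-learn; the random forest, the two search ranges, the population size of 10, the mutation rate, and the 50/50 parent/offspring split are all illustrative assumptions:

```python
import random
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Toy dataset standing in for a real problem.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

SPACE = {"n_estimators": (10, 100), "max_depth": (2, 20)}  # assumed ranges

def random_params():
    return {k: random.randint(lo, hi) for k, (lo, hi) in SPACE.items()}

def loss(params):
    """Step 2: cross-validated log-loss of one candidate (lower is better)."""
    model = RandomForestClassifier(**params, random_state=0)
    return -cross_val_score(model, X, y, cv=3, scoring="neg_log_loss").mean()

def crossover(a, b):
    """Take each hyper-parameter from one of the two parents at random."""
    return {k: random.choice((a[k], b[k])) for k in SPACE}

def mutate(params, rate=0.3):
    """Resample each gene with some probability, a simple mutation."""
    return {k: (random.randint(*SPACE[k]) if random.random() < rate else v)
            for k, v in params.items()}

population = [random_params() for _ in range(10)]              # step 1
for generation in range(5):
    ranked = sorted(population, key=loss)                      # steps 2-3
    parents = ranked[: len(ranked) // 2]                       # keep the best half
    children = [mutate(crossover(*random.sample(parents, 2)))  # step 4
                for _ in range(len(ranked) - len(parents))]
    population = parents + children                            # 50/50 split, step 5

print("Best hyper-parameters:", min(population, key=loss))
```

In practice you would cache the loss of each configuration instead of recomputing it every generation; the sketch re-evaluates for brevity.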

Genetic algorithms are not perfect: you still need to specify the loss function, the population size, the ratio of offspring with changed parameters, and so on.

For more information:

https://anumak.ai/



Written by aNumak & Company

aNumak & Company is a Global Business and Management Consulting firm with expertise in building scalable business models for diverse industry verticals.
