Workshop Theory of Randomized Search Heuristics

(colocated with ICALP 2007)


Continuous Randomized Search Heuristics: Theory is Alive and Well

by Anne Auger

During the past decades, randomized search heuristics (RSH) have attracted growing interest for solving continuous optimization problems, where the search space is a subset of $\mathbb{R}^d$. Compared to classical (deterministic) optimization methods (i.e. gradient-based algorithms), RSH are less prone to getting stuck in local optima, due to their stochastic nature and to the fact that they evolve a population of potential solutions.

In recent years, RSH in continuous domains, building on the historical Evolution Strategies, have made significant progress. It is generally assumed that, for classical algorithms, theory is developed alongside practice, whereas for RSH it lags behind. This tutorial will try to demonstrate that the time lag between theory and practice is steadily shrinking for continuous RSH.

In this presentation, we will stress the main theoretical differences between continuous and discrete search spaces and introduce the main ingredients for theory, focusing mainly on two questions: (1) for a fixed dimension $d$, how does the distance to the optimum decrease with the number of steps, and (2) how does the optimization time (e.g. the time needed to halve the distance to the optimum) scale with the dimension $d$? This talk will detail some simple cases and survey recent advanced contributions.
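To make these questions concrete, here is a minimal sketch (not taken from the talk) of one of the historical Evolution Strategies mentioned above: a (1+1)-ES with the classical 1/5th success rule, run on the sphere function $f(x) = \sum_i x_i^2$. All parameter values (initial point, step size, update factors) are illustrative choices; the function `one_plus_one_es` and its return convention are assumptions for this sketch. It measures the quantity used above as an example of optimization time: the number of steps needed to halve the initial distance to the optimum.

```python
import math
import random

def sphere(x):
    # Sphere function f(x) = sum of x_i^2; the optimum is the origin.
    return sum(xi * xi for xi in x)

def one_plus_one_es(d=10, sigma=1.0, max_steps=20000, seed=1):
    """(1+1)-ES with the 1/5th success rule on the sphere function.

    Returns the number of steps needed to halve the initial distance
    to the optimum (or max_steps if it was never halved).
    All parameter defaults are illustrative, not prescribed values.
    """
    rng = random.Random(seed)
    x = [1.0] * d                       # start away from the optimum
    fx = sphere(x)
    dist0 = math.sqrt(fx)               # initial distance to the optimum
    for step in range(1, max_steps + 1):
        # Offspring: parent plus isotropic Gaussian mutation of size sigma.
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = sphere(y)
        if fy <= fx:
            # Success: accept the offspring and enlarge the step size.
            x, fx = y, fy
            sigma *= 1.1
        else:
            # Failure: shrink the step size; the asymmetric factors keep
            # sigma stationary at a success rate of roughly 1/5.
            sigma *= 1.1 ** (-0.25)
        if math.sqrt(fx) <= dist0 / 2:
            return step                 # distance to the optimum halved
    return max_steps

steps = one_plus_one_es()
print(f"steps to halve the distance to the optimum: {steps}")
```

Rerunning this for several dimensions $d$ gives an empirical view of question (2): how the halving time grows with $d$.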
