
Workshop Theory of Randomized Search Heuristics

(co-located with ICALP 2007)

Abstract

Linear Convergence and Premature Convergence of Evolution Strategies in Noisy Spherical Environment

by Mohamed Jebalia, Anne Auger

Evolution Strategies (ESs) are Evolutionary Algorithms specifically designed for continuous optimization, i.e. when the search space is a subset of R^d. Noise is present in many (continuous) real-world optimization problems, arising from various sources such as measurement limitations or limited accuracy of simulation procedures. Recently, results concerning the convergence and complexity of ESs in non-noisy environments have been rigorously obtained. Adaptive Evolution Strategies have been proven to converge log-linearly on the class of functions g(f_s(x)) = g(||x||^2), where x \in R^d, g is a strictly monotonically increasing function, and f_s(x) = ||x||^2 is the so-called sphere function [4, 3]. In other words, for a fixed dimension d, the log of the distance to the optimum decreases linearly. The dependence of the convergence rate on the dimension has been investigated in [5] and scales as O(1/d). In this work we investigate how those results extend to noisy environments. We consider the noisy sphere function f(x, \eta) = ||x|| + \eta B, where B is an independent random variable that models the noise and \eta \in R^+. Two adaptation models for \eta are investigated: (1) \eta proportional to the step-size, (2) \eta proportional to ||x|| [2, 1]. In the first part, we show that convergence does not always take place. For a (1+1)-ES, we show that convergence under model (1) depends on the noise: (i) if the noise takes both negative and positive values and has bounded support, premature convergence occurs with probability one; (ii) if the noise is positive with zero as the lower bound of its support, linear convergence takes place, as in the non-noisy case; (iii) if the support of the noise is R, the sufficient conditions for linear convergence are not satisfied. In the second part, we show that the premature convergence observed under model (1) can be avoided by using reevaluation, a comma strategy, or model (2) with bounded noise.
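To make the setting concrete, the following is a minimal sketch of a (1+1)-ES on the noisy sphere with noise level proportional to the step-size (model (1)) and two-signed bounded noise (case (i)). The choice B ~ U[-1, 1], the one-fifth success rule for step-size adaptation, the function name, and all parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def one_plus_one_es_noisy_sphere(d=10, eta_coeff=1.0, sigma0=1.0,
                                 iters=5000, seed=0):
    """(1+1)-ES on f(x, eta) = ||x|| + eta*B with eta proportional to the
    step-size sigma (model (1)) and B ~ U[-1, 1], i.e. bounded noise taking
    negative and positive values (case (i) of the abstract)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=d)          # parent
    sigma = sigma0                  # step-size

    def noisy_f(y):
        # A fresh, independent noise draw at every evaluation.
        return np.linalg.norm(y) + eta_coeff * sigma * rng.uniform(-1.0, 1.0)

    fx = noisy_f(x)  # parent fitness is stored, NOT reevaluated (no remedy)
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(d)   # Gaussian mutation
        fy = noisy_f(y)
        if fy <= fx:                # plus-selection keeps the apparent best
            x, fx = y, fy
            sigma *= 1.5            # one-fifth success rule (assumed here):
        else:                       # widen on success, ...
            sigma *= 1.5 ** -0.25   # ... shrink slightly on failure
    return np.linalg.norm(x), sigma

if __name__ == "__main__":
    dist, sigma = one_plus_one_es_noisy_sphere()
    print(f"distance to optimum: {dist:.3e}, final step-size: {sigma:.3e}")
```

Intuitively, a lucky negative noise draw can push the stored parent fitness below any value an offspring is likely to reach; offspring are then rejected, the step-size shrinks geometrically, and the search stalls away from the optimum. Reevaluating the parent or switching to a comma strategy, as the abstract notes, avoids this effect.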

