Proc Natl Acad Sci U S A. Jan 16, 2007; 104(3): 708–711.
Published online Jan 10, 2007. doi:  10.1073/pnas.0610471104
PMCID: PMC1766340
Applied Mathematics, Evolution

Improved evolutionary optimization from genetically adaptive multimethod search

Abstract

In the last few decades, evolutionary algorithms have emerged as a revolutionary approach for solving search and optimization problems involving multiple conflicting objectives. Beyond their ability to search intractably large spaces for multiple solutions, these algorithms are able to maintain a diverse population of solutions and exploit similarities of solutions by recombination. However, existing theory and numerical experiments have demonstrated that it is impossible to develop a single algorithm for population evolution that is always efficient for a diverse set of optimization problems. Here we show that significant improvements in the efficiency of evolutionary search can be achieved by running multiple optimization algorithms simultaneously using new concepts of global information sharing and genetically adaptive offspring creation. We call this approach a multialgorithm, genetically adaptive multiobjective, or AMALGAM, method, to evoke the image of a procedure that merges the strengths of different optimization algorithms. Benchmark results using a set of well known multiobjective test problems show that AMALGAM approaches a factor of 10 improvement over current optimization algorithms for the more complex, higher dimensional problems. The AMALGAM method provides new opportunities for solving previously intractable optimization problems.

Keywords: evolutionary search, multiple objectives, optimization problems, Pareto front

Evolutionary optimization is a subject of intense interest in many fields of study, including computational chemistry, biology, bioinformatics, economics, computational science, geophysics, and environmental science (1–8). The goal is to determine values for model parameters or state variables that provide the best possible solution to a predefined cost or objective function, or a set of optimal tradeoff values in the case of two or more conflicting objectives. However, locating optimal solutions often turns out to be painstakingly tedious, or even completely beyond current or projected computational capacity (9).

Here, we consider a multiobjective minimization problem with n decision variables (parameters) and m objectives: y = f(x) = (f1(x), …, fm(x)), where x denotes the decision vector and y the corresponding vector of objective values. We restrict attention to optimization problems in which the parameter search space X, although perhaps quite large, is bounded: x = (x1, …, xn) ∈ X. The presence of multiple objectives in an optimization problem gives rise to a set of Pareto-optimal solutions, instead of a single optimal solution (10, 11). A Pareto-optimal solution is one in which no objective can be further improved without a simultaneous degradation in at least one other objective. As such, these solutions represent globally optimal solutions to the tradeoff problem.
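The dominance relation underlying this definition can be sketched in a few lines of Python (an illustrative sketch for the minimization case; the function names are ours, not the paper's):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse than b in every objective, and strictly better in at least one."""
    return all(ai <= bi for ai, bi in zip(a, b)) and \
           any(ai < bi for ai, bi in zip(a, b))

def pareto_front(points):
    """Return the nondominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

For example, among the objective vectors (1, 4), (2, 2), (3, 1), and (3, 3), the first three are mutually nondominated, while (3, 3) is dominated by (2, 2) and drops out of the front.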

Numerous approaches have been proposed to efficiently find Pareto-optimal solutions for complex multiobjective optimization problems (12–15). In particular, evolutionary algorithms have emerged as the most powerful approach for solving search and optimization problems involving multiple conflicting objectives. Beyond their ability to search intractably large spaces for multiple Pareto-optimal solutions, these algorithms are able to maintain a diverse set of solutions and exploit similarities of solutions by recombination. These attributes lead to efficient convergence to the Pareto-optimal front in a single optimization run (13). Of these, the nondominated sorted genetic algorithm II (NSGA-II) (14) has received the most attention because of its simplicity and demonstrated superiority over other methods.

Although the multiobjective optimization problem has been studied quite extensively, current available evolutionary algorithms typically implement a single algorithm for population evolution. Reliance on a single biological model of natural selection and adaptation presumes that a single method exists that efficiently evolves a population of potential solutions through the parameter space. However, existing theory and numerical experiments have demonstrated that it is impossible to develop a single algorithm for population evolution that is always efficient for a diverse set of optimization problems (16).

In recent years, memetic algorithms (also called hybrid genetic algorithms) have been proposed to increase the search efficiency of population-based optimization algorithms (17). These methods are inspired by models of adaptation in natural systems, and use a genetic algorithm for global exploration of the search space, combined with a local search heuristic for exploitation. Memetic algorithms have been shown to significantly speed up the evolution toward the global optimal solution for a variety of real-world optimization problems. However, our conjecture is that a search procedure that adaptively changes the way it generates offspring, based on the shape and local peculiarities of the fitness landscape, will further improve the efficiency of evolutionary search. This approach is likely to be productive because the nature of the fitness landscape (objective functions mapped out in the parameter space, also called the response surface) often varies considerably between different optimization problems, and dynamically changes en route to the global optimal solutions.

Drawing inspiration from the field of ensemble weather forecasting (18), we present an innovative procedure employing genetically adaptive evolutionary optimization. The method combines two concepts, simultaneous multimethod search and self-adaptive offspring creation, to ensure a fast, reliable, and computationally efficient solution to multiobjective optimization problems. We call this approach a multialgorithm, genetically adaptive multiobjective, or AMALGAM, method, to evoke the image of a procedure that blends the attributes of the best available individual optimization algorithms.

To successfully implement the AMALGAM method, three questions need to be addressed. First, how can we best make multiple algorithms meaningfully communicate with one another, and share their information? Second, what is the most effective method for adaptive offspring creation? Finally, which individual algorithms should be included? These issues will be confronted below.

Materials and Methods

Our multimethod evolutionary optimization implements a population-based elitism search procedure to find a well distributed set of Pareto solutions within a single optimization run. The basic algorithm is presented in supporting information (SI) Fig. 4 (see SI Text), and is described below.

The algorithm is initiated with a random initial population P_0 of size N, generated by using Latin hypercube sampling. Each parent is then assigned a rank using the fast nondominated sorting (FNS) algorithm (14). A population of offspring Q_0, also of size N, is subsequently created using the multimethod search concept that lies at the heart of the AMALGAM method. Instead of implementing a single operator for reproduction, we simultaneously use k individual algorithms to generate the offspring, Q_0 = {Q_0^1, …, Q_0^k}. These algorithms each create a prespecified number of offspring points, N_t = {N_t^1, …, N_t^k}, from P_0 using different adaptive procedures. After creation of the offspring, a combined population R_0 = P_0 ∪ Q_0 of size 2N is created, and R_0 is ranked using FNS. Because the current offspring are compared against the previous generation, elitism is ensured: all previously nondominated members are always included in R (12–14). Finally, members of the next population P_1 are chosen from the successive nondominated fronts of R_0 on the basis of their rank and crowding distance (14). The new population P_1 is then used to create offspring using the method described below, and these algorithmic steps are repeated until convergence is achieved.
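One generation of this loop can be sketched as follows (an illustrative skeleton only; the helper names and interfaces are our assumptions, and the real operators live inside the individual sub-algorithms and the FNS/crowding routines of ref. 14):

```python
def amalgam_generation(P, counts, subalgorithms, rank_and_crowd):
    """One AMALGAM generation: each sub-algorithm i creates counts[i] offspring
    from the parent population P; parents and offspring then compete in a
    combined population of size 2N, and the best N survive (elitism)."""
    N = len(P)
    Q = []
    for alg, n_i in zip(subalgorithms, counts):
        Q.extend(alg.create_offspring(P, n_i))   # multimethod offspring creation
    R = P + Q                                    # combined 2N population
    # rank_and_crowd stands in for FNS ranking plus crowding-distance sorting
    return rank_and_crowd(R)[:N]
```

Because the parents remain in R, any nondominated member of the previous generation can only be displaced by a solution that outranks it, which is exactly the elitism property described above.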

Our method for adaptive offspring creation is designed to favor individual algorithms that exhibit the highest reproductive success. To ensure that the "best" algorithms contribute the most offspring to the new population, we update {N_t^1, …, N_t^k} according to N_t^i = N · (P_t^i / N_{t−1}^i) / Σ_{j=1}^k (P_t^j / N_{t−1}^j). The term P_t^i / N_{t−1}^i is the ratio of the number of offspring points algorithm i contributes to the new population, P_t^i, to the number it created in the previous generation, N_{t−1}^i. The denominator scales the reproductive success of an individual algorithm by the combined success of all of the algorithms. In this study, the minimum value of each N_t^i was set to 5, to avoid inactivating algorithms that may contribute to convergence in later generations.
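The update rule can be written directly in code (a sketch; the rounding and floor-handling details are our assumptions, as the paper specifies only the ratio formula and the minimum of 5):

```python
def update_offspring_counts(contrib, prev_counts, N, n_min=5):
    """Reallocate the N offspring among k algorithms by reproductive success.
    contrib[i]     -- offspring from algorithm i that survived into the new population (P_t^i)
    prev_counts[i] -- offspring algorithm i created last generation (N_{t-1}^i)
    A floor of n_min keeps every algorithm active for later generations."""
    ratios = [c / p for c, p in zip(contrib, prev_counts)]
    total = sum(ratios)
    # Rounding means the counts may not sum exactly to N; a production
    # implementation would redistribute the remainder.
    return [max(n_min, round(N * r / total)) for r in ratios]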

The final issue is the decision of which individual algorithms to include in the search. In principle, the AMALGAM method is very flexible, and could accommodate any biological model for population evolution. Here we implement the NSGA-II (14), particle swarm optimization (PSO) (19), adaptive metropolis search (AMS) (20), and differential evolution (DE) (21) algorithms. These choices are based on the outcome of numerical experiments demonstrating that these four commonly used optimization methods are mutually consistent and complementary. A detailed description of the individual algorithms and their algorithmic parameters is presented in SI Text.

We anticipate two advantages of the AMALGAM method. First, by facilitating direct information exchange between individual algorithms, the method merges the strengths of different search strategies to increase the speed of evolution toward the Pareto-optimal solutions. Second, by adaptively changing preference to individual search algorithms during the course of the optimization, the method should adapt quickly to the specific difficulties and peculiarities of the optimization problem at hand.

We conducted a wide range of numerical experiments using a set of well known multiobjective benchmark problems. SI Table 2 provides a detailed description and definition of the selected test functions. These functions cover a diverse set of problem features, including high-dimensionality, convexity, nonconvexity, multimodality, isolated optima, nonuniformity, and interdependence. For a given test, each optimization run was repeated 30 times using a population size of 100 points in combination with 150 generations, and average results are reported.
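As one concrete example of these benchmarks, the ZDT4 problem examined below has the standard two-objective form introduced in ref. 12 (sketched here in Python for reference; the exact configuration used in this study is given in SI Table 2):

```python
import math

def zdt4(x):
    """ZDT4 test function (ref. 12): x[0] in [0, 1], x[1:] in [-5, 5].
    The cosine term in g creates many local Pareto-optimal fronts,
    making ZDT4 a standard test of multimodality."""
    f1 = x[0]
    g = 1 + 10 * (len(x) - 1) + sum(xi**2 - 10 * math.cos(4 * math.pi * xi)
                                    for xi in x[1:])
    f2 = g * (1 - math.sqrt(f1 / g))
    return f1, f2
```

On the true Pareto front all trailing variables are zero, so g = 1 and f2 = 1 − √f1; every local front corresponds to a larger value of g.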

Results and Discussion

To demonstrate the advantages of multimethod optimization, consider Fig. 1, which shows the evolution of the nondominated fronts generated with the individual NSGA-II (squares), PSO (circles), AMS (+) and DE (diamonds) algorithms, and AMALGAM (x) method for test problem ZDT4. SI Movies 1 and 2 show this and another test problem (ROT). In each snapshot during the evolution, the dark line depicts the location of the true Pareto-optimal front. The results highlight the advantages of adaptive multimethod evolutionary search. After only 7,500 function evaluations, AMALGAM has progressed toward the true Pareto-optimal front, and has generated solutions that are far more evenly distributed along the Pareto front than any of the individual algorithms.

Fig. 1.
Generated Pareto-optimal fronts after 25, 50, and 75 generations with the NSGA-II (squares), PSO (circles), AMS (+), DE (diamonds), and AMALGAM (x) optimization algorithms for test problem ZDT4. This benchmark problem has 21^9 different local Pareto-optimal ...

This improved performance is quantified in Table 1, which compares convergence statistics over 30 different optimization runs for the NSGA-II and AMALGAM methods for all benchmark problems considered. The metrics, described in detail in SI Text, measure the extent of convergence to a known set of Pareto-optimal solutions (Y), and the uniformity (or spread) of the solutions within this distribution (Δ). Smaller values for both metrics indicate better performance.

Table 1.
Number of function evaluations needed to achieve convergence, and values of the convergence metric Y, and diversity measure Δ after 150 generations for the 10 different test functions considered in this study

The improvement of the AMALGAM method over the NSGA-II algorithm is significant for all of the benchmark studies considered. The results in Table 1 demonstrate that AMALGAM is significantly more efficient in locating Pareto-optimal solutions than the current state-of-the-art NSGA-II algorithm, with efficiency gains approaching a factor of 10 for the more complex, higher dimensional problems (ZDT1–ZDT6 and ROT). The new method even converges for the rotational problem (ROT) within 150 generations, indicating that our multimethod search can deal with correlated decision variables that classical genetic mutation and selection operators such as those of the NSGA-II algorithm have difficulty handling.

Fig. 2A depicts the evolution of the number of offspring points assigned to each of AMALGAM's individual algorithms for test problem ZDT4. This plot illustrates why the multimethod optimization exhibits superior performance. During the first part of the optimization, the NSGA-II algorithm (squares) exhibits the highest reproductive success, owing to the proficiency of its classical genetic operators for crossover and mutation for global optimization. However, after ≈20 generations, the utility of the NSGA-II algorithm abruptly decreases in favor of (in sequential order) the DE (diamonds), AMS (+), and PSO (circles) algorithms. This combination of methods proves to be extremely effective at increasing the diversity of solutions along the Pareto front once the NSGA-II method does its initial work. This result confirms our conjecture that an adaptive strategy of switching algorithms to maintain efficiency during all stages of the optimization will outperform any individual algorithm, and provides strong support for the use of multimethod evolutionary search. The performance of AMALGAM on the other benchmark problems provides further justification for this conclusion.

Fig. 2.
Illustration of the concept of self-adaptive offspring creation. (A) Evolution of the number of offspring points generated with the NSGA-II (squares), PSO (circles), AMS (+), and DE (diamonds) algorithms within AMALGAM's multimethod search as function ...

Although two-objective problems provide a demonstration of the advantages of multimethod evolutionary optimization, it is desirable to investigate the performance of the AMALGAM method for higher dimensional problems. To this end, we examine the three-objective, 12-parameter test problem DTLZ6 described in ref. 22, for which it is reported that existing evolutionary algorithms fail to locate solutions on the true Pareto front. Fig. 3A presents the theoretical Pareto-optimal curve and the optimization results for the NSGA-II and AMALGAM methods after 50 generations. The AMALGAM method locates the Pareto-optimal solution set in ≈5,000 function evaluations, whereas the NSGA-II algorithm is unable to exactly find the Pareto set, even after 100,000 function evaluations. The evolution of the number of offspring points for the four individual algorithms in AMALGAM (Fig. 3B) again illustrates the virtue of genetically adaptive multimethod search.

Fig. 3.
Nondominated solutions found for test problem DTLZ6 (22) with NSGA-II and AMALGAM after 100,000 and 5,000 function evaluations, respectively. (A) The dark line marks the true Pareto-optimal front. Although classical multiobjective methods have difficulty ...

The results presented herein illustrate that multimethod evolutionary optimization with adaptive offspring creation is a powerful new approach for solving complex optimization problems. This finding has some wider implications that go beyond the multiobjective test problems studied here. First, within the optimization and biological realm, our self-adaptive multimethod search provides important new ways to study evolutionary processes. Second, our results demonstrate that competition between individual algorithms and adaptive offspring creation dramatically improves the efficiency of evolutionary search. Combined with anticipated increases in computational power, the AMALGAM method should provide new opportunities for solving previously intractable optimization problems. Our next step is to apply AMALGAM to real world search and optimization problems.

Supplementary Material

Supporting Information

Acknowledgments

We thank Dr. Steen Rasmussen, Velimir Vesselinov (Los Alamos National Laboratory; LANL), Dr. Patrick Reed (Penn State University, University Park, PA), and the three anonymous reviewers for comments and suggestions. J.A.V. is supported by the LANL Director's Funded Postdoctoral program.

Footnotes

The authors declare no conflict of interest.

This article contains supporting information online at www.pnas.org/cgi/content/full/0610471104/DC1.

References

1. Wales DJ, Scheraga HA. Science. 1999;285:1368–1372.
2. Nowak MA, Sigmund K. Science. 2004;303:793–799.
3. Lemmon AR, Milinkovitch MC. Proc Natl Acad Sci USA. 2002;99:10516–10521.
4. Glick M, Rayan A, Goldblum A. Proc Natl Acad Sci USA. 2002;99:703–708.
5. Bounds DG. Nature. 1987;329:215–219.
6. Barhen J, Protopopescu V, Reister D. Science. 1997;276:1094–1097.
7. Schoups GH, Hopmans JW, Young CA, Vrugt JA, Wallender WW, Tanji KK, Panday S. Proc Natl Acad Sci USA. 2005;102:15352–15356.
8. Holland J. Adaptation in Natural and Artificial Systems. Cambridge, MA: MIT Press; 1975.
9. Achlioptas D, Naor A, Peres Y. Nature. 2005;435:759–763.
10. Goldberg DE. Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison–Wesley; 1989.
11. Deb K. Multi-Objective Optimization Using Evolutionary Algorithms. New York: Wiley; 2001.
12. Zitzler E, Deb K, Thiele L. Evol Comp. 2000;8:173–195.
13. Zitzler E, Thiele L. IEEE Trans Evol Comp. 1999;3:257–271.
14. Deb K, Pratap A, Agarwal S, Meyarivan T. IEEE Trans Evol Comp. 2002;6:182–197.
15. Knowles J, Corne D. Proc 1999 Conf Evol Comp. New York: IEEE Press; 1999.
16. Wolpert DH, Macready WG. IEEE Trans Evol Comp. 1997;1:67–82.
17. Hart WE, Krasnogor N, Smith JE. Recent Advances in Memetic Algorithms. Berlin: Springer; 2005.
18. Gneiting T, Raftery AE. Science. 2005;310:248–249.
19. Kennedy J, Eberhart RC, Shi Y. Swarm Intelligence. San Francisco: Morgan Kaufmann; 2001.
20. Haario H, Saksman E, Tamminen J. Bernoulli. 2001;7:223–242.
21. Storn R, Price K. J Global Optimization. 1997;11:341–359.
22. Deb K, Thiele L, Laumanns M, Zitzler E. KanGAL Rep. 2001;2001001.
