A Review of Multiobjective Test Problems and a Scalable Test Problem Toolkit

S. Huband, P. Hingston, Luigi Barone, Lyndon While

Research output: Contribution to journal › Article › peer-review

1776 Citations (Scopus)

Abstract

When attempting to better understand the strengths and weaknesses of an algorithm, it is important to have a strong understanding of the problem at hand. This is as true for the field of multiobjective evolutionary algorithms (EAs) as it is for any other field. Many of the multiobjective test problems employed in the EA literature have not been rigorously analyzed, which makes it difficult to draw accurate conclusions about the strengths and weaknesses of the algorithms tested on them. In this paper, we systematically review and analyze many problems from the EA literature, each belonging to the important class of real-valued, unconstrained, multiobjective test problems. To support this, we first introduce a set of test problem criteria, which are in turn supported by a set of definitions. Our analysis of test problems highlights a number of areas requiring attention. Not only are many test problems poorly constructed, but the important class of nonseparable problems, particularly nonseparable multimodal problems, is also poorly represented. Motivated by these findings, we present a flexible toolkit for constructing well-designed test problems. We also present empirical results demonstrating how the toolkit can be used to test an optimizer in ways that existing test suites do not.
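The toolkit described in the abstract builds scalable test problems by composing transformation functions (applied to the decision variables) with shape functions (which determine the geometry of the Pareto front). The sketch below is a minimal illustration of that compositional idea only, not one of the paper's actual test problems; the names `concave_shape`, `shift_transformation`, and `evaluate`, and the use of a single shift-style transformation, are assumptions made for this example.

```python
import numpy as np


def concave_shape(pos):
    """Concave (spherical) shape functions h_1..h_M.

    `pos` holds M-1 position-related values in [0, 1]; the returned
    M-vector lies on the positive orthant of the unit hypersphere,
    giving a concave trade-off surface.
    """
    M = len(pos) + 1
    angles = np.asarray(pos, dtype=float) * np.pi / 2.0
    h = np.ones(M)
    for m in range(M):
        h[m] *= np.prod(np.sin(angles[: M - 1 - m]))
        if m > 0:
            h[m] *= np.cos(angles[M - 1 - m])
    return h


def shift_transformation(dist, optimum=0.35):
    """A simple shift-style transformation: distance-related values are
    optimal at `optimum` rather than at a box boundary (returns 0 on the
    Pareto set, positive elsewhere)."""
    return float(np.sum((np.asarray(dist, dtype=float) - optimum) ** 2))


def evaluate(z, n_objectives):
    """Compose the transformation with the shape functions to obtain a
    problem scalable in both objectives and decision variables."""
    z = np.asarray(z, dtype=float)
    pos, dist = z[: n_objectives - 1], z[n_objectives - 1:]
    g = shift_transformation(dist)          # distance to the Pareto set
    return (1.0 + g) * concave_shape(pos)   # scale each shape function


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = rng.uniform(0.0, 1.0, size=8)       # 8 decision variables in [0, 1]
    print(evaluate(z, n_objectives=3))      # 3 objective values
```

The actual toolkit in the paper chains several such transformations (e.g., shift, bias, and reduction stages) before the shape stage, which is what lets it generate nonseparable and multimodal problems; the single transformation above is only meant to show how the pieces fit together.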
Original language: English
Pages (from-to): 477-506
Journal: IEEE Transactions on Evolutionary Computation
Volume: 10
Issue number: 5
Publication status: Published - 2006
