Benchmarking involves the experimental comparison of optimization algorithms. This activity
is of high practical importance in the design of algorithms and their application in the real
world. The reason is that practically relevant algorithms are more often than not too complex
to be analysed theoretically; nevertheless, a practitioner needs to decide which algorithm to
apply. Moreover, we never apply a theoretical algorithm in practice, but only an
implementation of it, and two implementations of the same algorithm can differ
significantly, for example due to different handling of numerics, varying internal
parameter settings, or other reasons. The only viable approach is therefore to compare
algorithms experimentally on benchmark functions that are relevant to the real world, in
order to recommend the best-performing ones.
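To make the idea of such an experimental comparison concrete, the following is a minimal sketch, not an established benchmark suite: two variants of a simple random search, differing only in an internal step-size parameter, are compared over repeated runs on the sphere function. All names, budgets, and parameter values are illustrative assumptions.

```python
# Minimal sketch of an experimental algorithm comparison (illustrative only).
# Two "implementations" differ solely in an internal parameter (step size),
# mirroring the point that such differences can change observed performance.
import random
import statistics

def sphere(x):
    """Benchmark function: global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def random_search(f, dim, budget, step, rng):
    """Keep the best of `budget` Gaussian perturbations of the incumbent."""
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    best_val = f(best)
    for _ in range(budget - 1):
        cand = [xi + rng.gauss(0, step) for xi in best]
        val = f(cand)
        if val < best_val:
            best, best_val = cand, val
    return best_val

# Repeated runs with fixed seeds allow a statistical comparison of the variants.
for name, step in {"step=1.0": 1.0, "step=0.1": 0.1}.items():
    results = [random_search(sphere, dim=5, budget=1000, step=step,
                             rng=random.Random(seed)) for seed in range(20)]
    print(name, "median best value:", statistics.median(results))
```

Even this toy setup already raises the questions a benchmarking methodology must answer: which functions, which budgets, how many repetitions, and which summary statistic to report.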
This workshop aims to advance the current state of the art in benchmarking of multi-criteria
optimization algorithms. We will seek to identify what can be learned from advances in
benchmarking practices in single-criterion optimization and what aspects are particular to
multi-criteria problems. We will investigate how the knowledge of researchers working on
benchmarking can be transferred into software frameworks for applying algorithms, and
how we can help an engineer or analyst with a real-world problem to select a suitable
algorithm. The workshop will also explore what changes in the benchmarking process if we alter
key aspects of a multi-criteria problem such as the number of criteria, the number of
variables, the presence and number of constraints, and the nature of interactions with a
human decision maker. The workshop will be organized in an open space setting.
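As a purely illustrative example of such aspects, the sketch below defines a toy multi-criteria problem in which the number of criteria, the number of variables, and the presence of a constraint are explicit parameters; changing any of them changes what a benchmark built on this problem actually measures. The construction is a hypothetical assumption, not taken from any standard test suite.

```python
# A toy, scalable multi-criteria test problem (hypothetical, for illustration).
# The number of criteria, the number of variables, and the presence of a
# constraint are explicit parameters; assumes n_criteria >= 2.
from typing import List

def make_problem(n_criteria: int, n_vars: int, constrained: bool):
    # Each criterion k measures squared distance to a shifted target point,
    # so the criteria conflict and no single x minimizes all of them.
    targets = [[k / (n_criteria - 1)] * n_vars for k in range(n_criteria)]

    def evaluate(x: List[float]) -> List[float]:
        return [sum((xi - ti) ** 2 for xi, ti in zip(x, t)) for t in targets]

    def feasible(x: List[float]) -> bool:
        # Optional constraint: restrict solutions to a ball around 0.5.
        return (not constrained) or sum((xi - 0.5) ** 2 for xi in x) <= 0.25

    return evaluate, feasible

evaluate, feasible = make_problem(n_criteria=3, n_vars=4, constrained=True)
x = [0.5, 0.4, 0.6, 0.5]
print("feasible:", feasible(x), "criteria:", evaluate(x))
```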