The dramatic increase in computational resources, the availability of enormous amounts of data, and the significant advances in machine learning and artificial intelligence are currently shaking up our societies. As part of this change, our use of technology is evolving at a fast pace, with lasting effects on our traditional routines. Decisions that once required considerable expert knowledge can suddenly be taken (or prepared) by automated procedures. This automation not only supports manual work, but also drastically changes the objectives of various research domains, including Computer Science itself.
In the Benchmarked: Optimization Meets Machine Learning 2022 workshop we will discuss the impact of automated decision-making on an important sub-domain: optimization. More specifically, we will discuss how the possibility of automatically selecting, configuring, or even designing optimization algorithms changes the requirements for their benchmarking.
The key objectives of this Lorentz Center workshop are:
The workshop brings together researchers from different sub-domains in optimization with colleagues from automated machine learning. Together we will discuss what an ideal benchmarking environment would look like, how such an ``ideal tool'' compares to existing software, and how we can close the gap by improving the compatibility between ongoing and future projects.
Concretely, we aim to design a full benchmarking engine that ranges from modular algorithm frameworks, through problem instance generators and landscape analysis tools, and automated algorithm configuration and selection techniques, all the way to a statistically sound evaluation of the experimental data.