Description & Aim
Software and hardware verification is a complex task, supported by automated reasoning tools. Tremendous progress has led to a multitude of algorithms and tools that provide automated and scalable solutions for specific verification instances. The research field has been organizing several series of competitions as a means of objectively evaluating and comparing verification tools on common sets of benchmarks.
For more than a decade now, software contests have emerged that assess the capabilities of academic tools on complex, shared benchmarks, in order to identify which theoretical approaches are the most fruitful in practice when applied to realistic examples. This is the case, for example, in areas such as verification and automated reasoning.
These competitions provide insight into the best solutions for a particular task, but they also motivate researchers to push the boundaries of their tools, improving the state of the art. This is why these events have a significant impact on the communities involved.
The goal of this meeting is to understand and advance automated reasoning and verification competitions as a scientific method. We want to understand the organizational factors that contribute to their scientific impact. This could help existing competitions, as well as new ones, to organize successful events by answering questions such as:
how to set up the competition rules?
how to select benchmarks?
how to execute the competition itself?
how to evaluate the end results?
how to exploit the generated experimental data?
Moreover, gathering such experience can also benefit other communities in Computer Science that want to evaluate theoretical solutions in practice. Finally, it is an important step towards setting up common rules for the reproducibility of results and measurements.
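As a purely illustrative sketch of the evaluation question above, the following Python example computes a PAR-2-style score, a convention used in SAT competitions, from hypothetical run data. The benchmark names, time limit, and data layout are assumptions made for illustration, not the format of any particular competition.

    # Illustrative sketch only: PAR-2-style scoring of hypothetical run data.
    # Each run records whether the tool answered correctly within the time limit.
    from dataclasses import dataclass

    TIME_LIMIT = 900.0  # wall-clock limit in seconds (assumed value)

    @dataclass
    class Run:
        benchmark: str
        solved: bool      # correct answer within the limit?
        cpu_time: float   # seconds used

    def par2_score(runs):
        """Sum of CPU times, counting unsolved runs as twice the time limit."""
        return sum(r.cpu_time if r.solved else 2 * TIME_LIMIT for r in runs)

    runs = [
        Run("bench-001", True, 12.4),
        Run("bench-002", False, TIME_LIMIT),
        Run("bench-003", True, 301.7),
    ]
    print(f"PAR-2 score: {par2_score(runs):.1f} (lower is better)")

How such a score is defined, and whether it rewards the right behavior, is exactly the kind of design decision this meeting intends to examine.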