Description and aim
There exists a plethora of conditions (such as margin conditions in classification, exp-concavity of the losses in sequence prediction, and perturbation robustness for clustering) under which learning becomes easier than in the worst case. This workshop investigates how reasonable such conditions really are, and aims to further develop algorithms that exploit easy situations while remaining close to worst-case optimal.