
    Compositionality in Brains and Machines
    from 5 Aug 2019 through 9 Aug 2019

One of the hallmarks of human intelligence is compositional learning: we can learn solutions to problems by recursively combining component parts. For example, consider someone learning a new verb ‘to dax’. If this person knows the meaning of the words {‘twice’, ‘and’, ‘then’, ‘again’}, she can immediately combine them to understand the instruction ‘dax twice and then dax again’. Or consider a person who knows about cars, motorcycles and bicycles and sees a new type of vehicle for the first time: a rickshaw. Although the vehicle has very different dimensions, she can immediately reason about the function of its wheels, pedals, bench and rain cover.
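
As a concrete illustration of this kind of composition, the toy interpreter sketched below derives the meaning of ‘dax twice and then dax again’ from the meaning of ‘dax’ and the combinators alone. It is only an illustrative sketch: the mini-grammar and the primitive inventory are assumptions made for this example, not a model discussed at the workshop.

    # Illustrative sketch only: a toy compositional interpreter for the 'dax'
    # example above. The mini-grammar ('X twice', 'X again', 'A and then B')
    # and the primitive inventory are assumptions made for illustration.

    def interpret(command, primitives):
        """Map an instruction to a sequence of actions by composing its parts."""
        # 'A and then B' -> interpret A, then interpret B
        if ' and then ' in command:
            first, rest = command.split(' and then ', 1)
            return interpret(first, primitives) + interpret(rest, primitives)
        # 'X twice' -> repeat the interpretation of X
        if command.endswith(' twice'):
            return interpret(command[:-len(' twice')], primitives) * 2
        # 'X again' -> perform X once more
        if command.endswith(' again'):
            return interpret(command[:-len(' again')], primitives)
        # base case: a primitive verb
        return [primitives[command]]

    # Knowing only the primitive 'dax' and the combinators is enough to
    # interpret a sentence never encountered before.
    print(interpret('dax twice and then dax again', {'dax': 'DAX'}))
    # -> ['DAX', 'DAX', 'DAX']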

Composition appears to be solved almost trivially by the human brain: human infants learn to combine words into sentences with remarkably little exposure [e.g., Kuhl, 2004]. Unlike other species, humans learn to compose elements into functional tools with unrivaled efficiency. In contrast, compositional operations remain a major challenge for artificial intelligence systems, and, despite the attention it has received in different areas of science [e.g., Von der Malsburg, 1999; Steedman, 1999; Lake et al., 2015; Hupkes et al., 2018], compositionality remains poorly understood. Two distinct AI approaches have been proposed to learn and perform compositional tasks. The first derives from Boolean and, more recently, Bayesian modeling, which both define rules and symbols a priori and identify how their combination can account for a potentially very small training dataset. These systems, however, face three major limitations. First, it is unclear how they scale up to real-world tasks, where rules and symbols often remain unknown. Second, these approaches become computationally intractable given a large number of symbols. Finally, they remain defined at a “computational” level, and thus cannot easily be linked to the functioning of the human brain.
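
The intractability concern can be made concrete with a simple count. The sketch below is a purely illustrative assumption rather than a description of any particular symbolic or Bayesian model: it counts the expressions that can be built from a set of primitive symbols with one binary composition operator, and shows how quickly the hypothesis space a learner would have to search grows with the number of symbols and the composition depth.

    # Illustrative sketch: counting expressions built from n primitive symbols
    # with a single binary composition operator, up to a given depth. The
    # grammar is an assumption chosen only to show the combinatorial growth.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def num_expressions(n_symbols, depth):
        """Number of distinct expression trees with at most `depth` levels."""
        if depth == 0:
            return n_symbols                      # leaves: the symbols themselves
        smaller = num_expressions(n_symbols, depth - 1)
        return n_symbols + smaller * smaller      # a leaf, or compose(left, right)

    for n in (5, 10, 20):
        print(n, [num_expressions(n, d) for d in range(4)])
    # Even for 20 symbols and depth 3 the space exceeds 10^9 candidates,
    # so exhaustively scoring rule combinations quickly becomes infeasible.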

The second approach, now often referred to as deep learning, addresses some of these shortcomings. Artificial neural networks can typically learn compositional tasks from very large numbers of training examples and store them in more neurally plausible high-dimensional, distributed representations. Such models have recently proved to scale gracefully to real-world tasks [LeCun et al., 2015], and in particular to tasks that arguably require compositional skills, such as machine translation or playing Go. In addition, these models have been shown to partially mimic brain activity. For example, the activations of a deep convolutional neural network trained to recognize objects in natural images correlate linearly with the neuronal activity recorded in visual areas of the macaque brain [Yamins et al., 2014]. Similarly, human brain activity has recently been successfully compared to deep artificial neural networks [e.g., Huth et al., 2012].
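
The comparison in Yamins et al. [2014] rests on fitting a linear mapping from network activations to recorded neural responses and scoring its predictions on held-out stimuli. The sketch below illustrates that style of analysis on synthetic data; the array shapes, the ridge penalty and the train/test split are assumptions chosen for illustration, not the published protocol.

    # Illustrative sketch with synthetic data: predict neural responses from
    # network activations with a regularised linear map, then score the fit
    # on held-out stimuli. Shapes and hyperparameters are assumptions.

    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_stimuli, n_units, n_neurons = 200, 512, 50

    activations = rng.normal(size=(n_stimuli, n_units))   # model layer activations
    true_map = rng.normal(size=(n_units, n_neurons)) * 0.05
    responses = activations @ true_map + rng.normal(scale=0.5, size=(n_stimuli, n_neurons))

    X_tr, X_te, Y_tr, Y_te = train_test_split(activations, responses,
                                              test_size=0.25, random_state=0)
    model = Ridge(alpha=10.0).fit(X_tr, Y_tr)
    Y_pred = model.predict(X_te)

    # Per-neuron correlation between predicted and observed held-out responses.
    corr = [np.corrcoef(Y_pred[:, i], Y_te[:, i])[0, 1] for i in range(n_neurons)]
    print(f"median held-out correlation: {np.median(corr):.2f}")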

In spite of these successes, recent work has revealed that neural networks’ ability to compose remains far from human-level. Specifically, these systems seem to rely more on memorisation of frequent patterns than on a true capacity to infer the compositional structure of a task [Liška et al., 2018; Lake and Baroni, 2017]. Moreover, these models are difficult to interpret, which limits their scientific value as tools to understand human cognitive processes, and they have not yet been compared to human brain activity in explicitly compositional tasks. Overall, identifying the elementary computations and the neural architecture that allow compositionality remains a major challenge for artificial intelligence, linguistics and cognitive neuroscience alike.
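
A common way to probe memorisation versus composition, in the spirit of Lake and Baroni [2017] and Liška et al. [2018], is to hold out specific combinations of otherwise familiar primitives at test time. The sketch below builds such a split for the toy command language used earlier; the vocabulary and the held-out combination are assumptions for illustration, not the actual benchmark used in those studies.

    # Illustrative sketch: build a compositional generalisation split in which
    # every primitive and every modifier is seen during training, but one
    # particular combination ('jump twice') appears only in the test set.
    # The vocabulary below is an assumption, not a published benchmark.

    from itertools import product

    primitives = ['dax', 'jump', 'run', 'walk']
    modifiers = ['', ' twice', ' again']

    all_commands = [verb + mod for verb, mod in product(primitives, modifiers)]

    held_out = {'jump twice'}                      # unseen combination of seen parts
    train_set = [c for c in all_commands if c not in held_out]
    test_set = [c for c in all_commands if c in held_out]

    # A model that merely memorises frequent patterns fails on `test_set`,
    # whereas a compositional learner can interpret 'jump twice' from
    # 'jump' and '... twice' seen separately during training.
    print(len(train_set), 'training commands; held out:', test_set)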

In this one-week workshop we will bring together experts from each of these fields, working in both industry and academia, with the aim of pooling our collective knowledge to address different aspects of the problem. We will organise several discussion sessions led by experts from different fields, as well as working groups tailored to specific questions. To set ourselves a realistic goal, we propose to focus on creating a set of challenges to be put to the different communities, whose solutions would bring us closer to solving the puzzle of compositionality.

References

· Dieuwke Hupkes, Sara Veldhoen, and Willem Zuidema. Visualisation and ‘diagnostic classifiers’ reveal how recurrent and recursive neural networks process hierarchical structure. Journal of Artificial Intelligence Research, 61:907–926, 2018.

· Alexander G. Huth et al. A continuous semantic space describes the representation of thousands of object and action categories across the human brain. Neuron, 76(6):1210–1224, 2012.

· Patricia K. Kuhl. Early language acquisition: cracking the speech code. Nature Reviews Neuroscience, 5:831–843, 2004.

· Brenden M. Lake and Marco Baroni. Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks. CoRR, abs/1711.00350, 2017.

· Brenden M. Lake, Ruslan Salakhutdinov, and Joshua B. Tenenbaum. Human-level concept learning through probabilistic program induction. Science, 350(6266):1332–1338, 2015.

· Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521:436–444, 2015.

· Adam Liška, Germán Kruszewski, and Marco Baroni. Memorize or generalize? Searching for a compositional RNN in a haystack. arXiv preprint arXiv:1802.06467, 2018.

· Mark Steedman. Connectionist sentence processing in perspective. Cognitive Science, 23(4):615–634, 1999.

· Christoph von der Malsburg. The what and why of binding. Neuron, 24(1):95–104, 1999.

· Daniel L. K. Yamins et al. Performance-optimized hierarchical models predict neural responses in higher visual cortex. Proceedings of the National Academy of Sciences, 111(23):8619–8624, 2014.
