
Selected Publications

Representations as Language: An Information Theoretic Framework for Interpretability

How should we think about large-scale neural models? They show impressive performance across a wide array of linguistic tasks. Despite this they remain, largely, black boxes, inducing vector representations of their input that prove difficult to interpret. This limits our ability to understand what they learn and when they learn it, or to describe what kinds of representations generalise well out of distribution. To address this we introduce a novel approach to interpretability that treats the mapping a model learns from sentences to representations as a kind of language in its own right. In doing so we introduce a set of information-theoretic measures that quantify how structured a model’s representations are with respect to its input, and when during training that structure arises. Our measures are fast to compute, grounded in linguistic theory, and can predict which models will generalise best based on their representations. We use these measures to describe two distinct phases of training a transformer: an initial phase of in-distribution learning that reduces task loss, followed by a second phase in which representations become robust to noise. Generalisation performance begins to increase during this second phase, drawing a link between generalisation and robustness to noise. Finally, we examine how model size affects the structure of the representational space, showing that larger models ultimately compress their representations more than their smaller counterparts.
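As a rough illustration of the general idea only (not the measures introduced in the paper), one could discretise a model's representations and ask how much information they carry about a property of the input. The sketch below assumes a hypothetical k-means "vocabulary" over representation vectors and hypothetical function names.

```python
# Hypothetical sketch: discretise representations and measure how much structure
# they share with a discrete property of the input. These are NOT the measures
# introduced in the paper; function and argument names are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import mutual_info_score


def representation_structure(reps, input_labels, n_clusters=32):
    """reps: (n_examples, dim) array of sentence representations.
    input_labels: a discrete property of each input (e.g. a syntactic label).
    Returns (entropy of the cluster 'vocabulary', mutual information with the labels)."""
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(reps)

    # Entropy of the cluster distribution: how spread out the representation space is.
    _, counts = np.unique(clusters, return_counts=True)
    probs = counts / counts.sum()
    entropy = -(probs * np.log2(probs)).sum()

    # Mutual information between cluster ids and the input property: how much of
    # that spread is systematically tied to the input.
    mi = mutual_info_score(input_labels, clusters)
    return entropy, mi
```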

Anaphoric Structures Emerge Between Neural Networks

Anaphors are ubiquitous in human language; structures like pronouns and ellipsis are present in virtually every language. This is in spite of the fact that they seem to introduce ambiguity: “they left a parcel for you” could refer to virtually anyone, and needs to be disambiguated by context. Many accounts of why anaphors exist are tied to efficiency: they enable brevity, which lowers the effort of communicating. We show that anaphoric structures emerge between communicating neural networks whether or not there is any pressure for efficiency, with efficiency pressures increasing the prevalence of anaphoric structures already present. This points to the relationship between semantics and pragmatics, rather than efficiency, as a major causal factor.

Compositionality With Variation Reliably Emerges in Neural Networks

We re-evaluated how to look for compositional structure, in response to recent work claiming that compositionality is not needed for generalisation. While natural languages are compositional, they are also rich with variation. By introducing four explicit measures of variation, we showed that models reliably converge to compositional representations, just with a degree of variation that skewed previous measures. We also showed that at the start of training variation correlates strongly with generalisation, but that this effect goes away as representations become regular enough for the task. Converging to highly variable representations is similar to what we see in human languages, and in a final set of experiments we show that model capacity, thought to condition variation in human language, has a similar conditioning effect in neural networks.
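For flavour only, a minimal, hypothetical measure of variation might count how many distinct forms a speaker uses per meaning. The paper's four measures are not reproduced here; the names below are illustrative.

```python
# Hypothetical illustration of a single variation measure: the average number of
# distinct forms produced per meaning (1.0 means a fully regular mapping).
# This is not one of the paper's four measures.
from collections import defaultdict


def forms_per_meaning(meanings, messages):
    """meanings, messages: parallel sequences; each message is a sequence of symbols."""
    forms = defaultdict(set)
    for meaning, message in zip(meanings, messages):
        forms[meaning].add(tuple(message))
    return sum(len(f) for f in forms.values()) / len(forms)


# Toy example: "RED-BOX" is expressed with two different forms, so variation > 1.
meanings = ["RED-BOX", "RED-BOX", "BLUE-BOX"]
messages = [(3, 1), (3, 2), (4, 1)]
print(forms_per_meaning(meanings, messages))  # 1.5
```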

Meta Learning to Compositionally Generalise

Introducing domain-general biases via optimisation

This paper appeared as a talk at the Meeting of the Association for Computational Linguistics (ACL) in 2024.

Abstract: Natural language is compositional; the meaning of a sentence...