
hiya.

i’m henry, a post-doctoral research fellow in the AI Lab at Princeton.

 

in the AI Lab, i work with Jon Cohen, Tom Griffiths, and Sarah-Jane Leslie.
before that i did my phd in informatics (computer science) at the University of Edinburgh, where i was advised by Kenny Smith and Ivan Titov.

i’m a computational linguist working on learning and structure, with an emphasis on models of language. if that’s of interest, get in touch.

how do deep learning models learn to do so much, so well?

My research tries to understand what learning looks like at a representational level. I use information theory to build efficient, scalable approaches to interpretability that let us better understand how large-scale neural networks work. This provides insight into how learning may work in humans and other species, and helps us build better models by understanding the representational effects of different design decisions.

selected publications

Meta Learning to Compositionally Generalise

Introducing domain-general biases via optimisation

This paper appeared as a talk at the Annual Meeting of the Association for Computational Linguistics (ACL) in 2024.

Abstract: Natural language is compositional; the meaning of a sentence...