
hello.

i’m henry, a final-year phd student in natural language processing at the university of edinburgh.

 

i’m at CogSci in Rotterdam, giving a talk on Friday at 11.30am in the LLM session in Mees II – come say hi

we’ve released code for highly-parallelised entropy estimation for pytorch here
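the released library’s actual API isn’t reproduced here, but as a rough sketch of what parallelised entropy estimation in pytorch can look like – assuming a simple plug-in estimator applied to a batch of categorical distributions at once (the function name batched_entropy is hypothetical, not the released code’s) – something like:

```python
import torch

def batched_entropy(probs: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    """Plug-in (maximum-likelihood) entropy in nats for each row of a
    (batch, num_outcomes) tensor of probabilities, computed in parallel
    on whatever device the tensor lives on."""
    p = probs.clamp_min(eps)          # avoid log(0)
    return -(p * p.log()).sum(dim=-1)

# toy usage: entropies of 4096 categorical distributions in one call
probs = torch.softmax(torch.randn(4096, 256), dim=-1)
h = batched_entropy(probs)            # shape (4096,)
print(h.mean().item())
```

vectorising over the batch dimension (and keeping everything on the GPU) is what makes this kind of estimate cheap at scale, rather than looping over distributions one at a time.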

how do deep learning models learn to do so much, so well?

My research tries to understand what learning looks like at a representational level. I focus on information theory, building efficient, scalable approaches to interpretability that let us better understand how large-scale neural networks work. This provides insight into how learning may work in humans and other species – and helps us build better models by understanding the representational effects of different design decisions.

selected publications

Meta Learning to Compositionally Generalise

Introducing domain-general biases via optimisation.

This paper appeared as a talk at the Meeting of the Association for Computational Linguistics (ACL) in 2024.

Abstract: Natural language is compositional; the meaning of a sentence...