2021

Meta-Learning to Compositionally Generalize

Henry Conklin*, Bailin Wang*, Kenny Smith, Ivan Titov

We looked at how to use meta-learning to introduce biases during training that help neural models generalize out-of-distribution. Using this technique, we inhibited models' memorization in a way analogous to human memory constraints, resulting in substantial improvements on compositional generalization tasks.

Presented as a long paper at the ACL 2021 main conference.
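
The paper's full method isn't reproduced here, but as a rough sketch of the general meta-learning recipe it builds on, the snippet below shows a first-order, MAML-style update in PyTorch: an inner loop adapts a copy of the model on one batch, and an outer loop updates the original parameters from the adapted copy's loss on a held-out batch, rewarding updates that transfer. The helper name meta_step, the toy regression data, and the hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# A minimal first-order MAML-style sketch in PyTorch, for illustration only.
import copy
import torch
import torch.nn as nn

def meta_step(model, meta_train_batch, meta_test_batch,
              outer_optimizer, inner_lr=0.1, loss_fn=nn.MSELoss()):
    """One first-order meta-learning update (hypothetical helper)."""
    x_tr, y_tr = meta_train_batch
    x_te, y_te = meta_test_batch

    # Inner loop: adapt a copy of the model on the meta-train batch.
    adapted = copy.deepcopy(model)
    inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
    inner_opt.zero_grad()
    loss_fn(adapted(x_tr), y_tr).backward()
    inner_opt.step()

    # Outer loop: score the adapted copy on a held-out meta-test batch and
    # push those gradients back onto the original parameters (first-order
    # approximation, so no second-order terms are tracked).
    adapted.zero_grad()
    test_loss = loss_fn(adapted(x_te), y_te)
    test_loss.backward()
    outer_optimizer.zero_grad()
    for p, p_adapted in zip(model.parameters(), adapted.parameters()):
        p.grad = p_adapted.grad.clone()
    outer_optimizer.step()
    return test_loss.item()

# Toy usage on random regression data, purely illustrative.
model = nn.Linear(4, 1)
outer_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 4), torch.randn(32, 1)
meta_step(model, (x[:16], y[:16]), (x[16:], y[16:]), outer_opt)
```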