What are we Transferring Anyway?

Published:

Blogpost on this topic

Code

A research project experimenting with self-attention to interpret different source domains for transfer learning

Multi-task learning and transfer learning are set to change the way we do NLP, significantly improving model generalization, data efficiency, and accuracy. However, it is hard to interpret why knowledge from some source datasets transfers better than from others, and, as we have seen, machine learning systems can unknowingly learn bias. To help address both problems, I'm experimenting with attention mechanisms as a way to interpret different kinds of transfer learning.
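
To give a rough sense of the idea, the sketch below attends over representations produced by several source-domain encoders, so the attention weights themselves indicate how much each source domain contributes to a given target-task prediction. This is a minimal sketch in PyTorch written under my own assumptions: the `DomainAttention` module, its shapes, and the assumption of one pooled vector per source domain are illustrative only, not the project's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DomainAttention(nn.Module):
    """Attend over per-source-domain representations so the attention
    weights can be read as how much each source domain contributes
    to a target-task prediction. Illustrative sketch, not project code."""

    def __init__(self, hidden_dim: int, num_classes: int):
        super().__init__()
        # Learned query vector representing the target task.
        self.query = nn.Parameter(torch.randn(hidden_dim))
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, domain_reprs: torch.Tensor):
        # domain_reprs: (batch, num_domains, hidden_dim),
        # one pooled vector per source-domain encoder.
        scores = domain_reprs @ self.query                  # (batch, num_domains)
        weights = F.softmax(scores, dim=-1)                  # interpretable domain weights
        pooled = (weights.unsqueeze(-1) * domain_reprs).sum(dim=1)  # (batch, hidden_dim)
        return self.classifier(pooled), weights


# Toy usage: 4 examples, 3 source domains, 128-dim representations.
model = DomainAttention(hidden_dim=128, num_classes=2)
logits, weights = model(torch.randn(4, 3, 128))
print(weights)  # per-example weight over source domains
```

Inspecting `weights` per example, or averaged over a dataset, then gives a first-pass view of which source domains the target task is actually drawing on.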