Task Affinity Weighted Meta-Learning (TAML)

Nick Vasko, Takara Truong
CS 330 Fall 2021: Meta-Learning

About

The success of multi-task learning has been shown to depend heavily on selecting the right tasks to train together, commonly referred to as task grouping. Naively training all members of a set of tasks in one multi-task model can cause negative transfer between unrelated tasks, worsening performance. Task grouping also assumes that tasks which train well together do so throughout the entire training process, which may not hold.

We propose Task Affinity Weighted Meta-Learning (TAML). TAML modulates the amount of transfer within a training cycle by weighting each task's loss based on its affinity toward the other tasks. If a batch has low affinity, taking a large gradient step in its direction worsens performance on average, so its contribution is down-weighted. This lets us control the amount of transfer between tasks on a per-iteration basis. To the best of our knowledge, this has not been proposed in previous work.
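The weighting idea above can be sketched in a few lines. This is a minimal illustration, not the implementation from the report: it assumes affinity is measured as the mean cosine similarity between a task's gradient and the other tasks' gradients, and turns those affinities into per-task loss weights via a softmax. The function name and the temperature parameter are placeholders chosen for this sketch.

```python
import numpy as np

def affinity_weights(task_grads, temperature=1.0):
    """Compute per-task loss weights from pairwise gradient affinity.

    task_grads: list of flattened gradient vectors, one per task batch.
    A task whose gradient points away from the others (low affinity)
    receives a smaller weight, shrinking its step on this iteration.
    """
    n = len(task_grads)
    affinity = np.zeros(n)
    for i in range(n):
        sims = []
        for j in range(n):
            if i == j:
                continue
            gi, gj = task_grads[i], task_grads[j]
            denom = np.linalg.norm(gi) * np.linalg.norm(gj) + 1e-8
            sims.append(gi @ gj / denom)  # cosine similarity
        affinity[i] = np.mean(sims)
    # Softmax over affinities so the weights are positive and sum to 1.
    w = np.exp(affinity / temperature)
    return w / w.sum()

# Example: two well-aligned task gradients and one conflicting one.
grads = [np.array([1.0, 0.0]), np.array([1.0, 0.1]), np.array([-1.0, 0.0])]
weights = affinity_weights(grads)  # conflicting task gets the smallest weight
```

The weighted multi-task loss would then be the weights' dot product with the per-task losses before the gradient step.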

We demonstrate the effectiveness of our approach as a meta-learning algorithm. Our training method learns a set of initial parameters that, when finetuned, outperforms finetuned multi-task learning baselines on the natural language benchmark ZEST. We see a 3.99-point improvement in C@90 score and a 4.05-point improvement in C@70 score, with a minimal 0.97-point increase in average F1 score.

Full Report: [pdf]
