BandiK: Efficient Multi-Task Learning with Multi-Bandits
Analysis
This paper addresses efficient auxiliary task selection in multi-task learning, a key aspect of knowledge transfer that is especially relevant for foundation models. The core contribution is BandiK, a method that uses a multi-bandit framework to overcome the computational and combinatorial cost of identifying beneficial auxiliary task sets. Its significance lies in making multi-task learning more efficient and effective, which in turn can improve knowledge transfer and downstream task performance.
Key Takeaways
- Proposes BandiK, a novel three-stage method for auxiliary task subset selection in multi-task learning.
- Uses a multi-bandit framework to efficiently evaluate candidate auxiliary task sets.
- Addresses the computational and combinatorial challenges of multi-task learning.
- Aims to improve knowledge transfer and downstream task performance.
“BandiK employs a Multi-Armed Bandit (MAB) framework for each task, where the arms correspond to the performance of candidate auxiliary sets realized as multiple output neural networks over train-test data set splits.”
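To make the bandit view concrete, the sketch below shows a generic UCB1-style selection loop over candidate auxiliary subsets for a single target task. This is an illustrative assumption, not the paper's actual algorithm: the function names `evaluate_candidate` and `ucb1_select`, the UCB1 rule, and the random placeholder reward are all hypothetical, whereas the paper realizes each arm's reward as the performance of a multi-output neural network over train-test splits.

```python
import math
import random

# Hypothetical sketch (not the authors' implementation): a UCB1-style bandit
# where each arm is a candidate auxiliary task subset for one target task.
# Pulling an arm is assumed to return a validation-score reward; here it is a
# random placeholder standing in for training a multi-output network on a split.

def evaluate_candidate(target_task, aux_subset):
    """Placeholder reward: in practice, briefly train a multi-output network on
    {target_task} plus aux_subset and return its held-out score for target_task."""
    return random.random()  # stand-in for a real validation metric

def ucb1_select(target_task, candidate_subsets, budget=50, c=2.0):
    """Run a UCB1 bandit over candidate auxiliary subsets (the arms)."""
    counts = [0] * len(candidate_subsets)
    means = [0.0] * len(candidate_subsets)

    for t in range(1, budget + 1):
        if t <= len(candidate_subsets):
            # Pull each arm once before applying the UCB rule.
            arm = t - 1
        else:
            # Pick the arm with the highest upper confidence bound.
            arm = max(
                range(len(candidate_subsets)),
                key=lambda a: means[a] + math.sqrt(c * math.log(t) / counts[a]),
            )
        reward = evaluate_candidate(target_task, candidate_subsets[arm])
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # incremental mean

    best = max(range(len(candidate_subsets)), key=lambda a: means[a])
    return candidate_subsets[best]

# Example: pick an auxiliary subset for task "T0" from three candidate sets.
print(ucb1_select("T0", [("T1",), ("T2",), ("T1", "T2")], budget=30))
```

In this framing, the bandit trades off exploring untested auxiliary subsets against exploiting subsets that have already shown good transfer, which is what lets the method avoid exhaustively training on every combination.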