Hyoseo Kim1,2*, Dongyoon Han1†, Junsuk Choe2†
*Work done during an internship at NAVER AI Lab, †corresponding authors
1NAVER AI Lab, 2Sogang University
Machine unlearning aims to selectively remove specific knowledge from a trained model. Existing approaches, such as task arithmetic, fine-tune the model on the forget set to create a task vector (i.e., a direction in weight space) that is subtracted from the original weights. However, their effectiveness is highly sensitive to hyperparameter selection, requiring extensive validation to identify the optimal vector among many fine-tuned candidates. In this paper, we propose a novel method that utilizes all fine-tuned models trained with varying hyperparameters instead of a single selection. Specifically, we aggregate the computed task vectors by retaining only the elements whose signs are consistent across all candidates. The merged task vector is then negated to induce unlearning on the original model. Evaluations on zero-shot and standard image recognition tasks across ten datasets and three backbone architectures show that our approach achieves superior unlearning performance, outperforming state-of-the-art methods while requiring similar or fewer computational resources.
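Below is a minimal sketch of the sign-consistent merging step described above, assuming each task vector is a flattened parameter difference (fine-tuned weights minus original weights). The function name `sign_consistent_merge`, the mean aggregation over agreeing elements, and the scaling coefficient `alpha` are illustrative assumptions, not the exact implementation from the paper.

```python
import torch

def sign_consistent_merge(task_vectors):
    """Merge task vectors (each a flattened theta_ft - theta_0) by keeping only
    the elements whose sign agrees across every fine-tuned candidate.

    Averaging the agreeing elements is an illustrative choice; the paper's
    exact aggregation rule may differ.
    """
    stacked = torch.stack(task_vectors)        # shape: (num_candidates, num_params)
    signs = torch.sign(stacked)
    # Keep an element only if all candidates assign it the same nonzero sign.
    consistent = (signs == signs[0]).all(dim=0) & (signs[0] != 0)
    return stacked.mean(dim=0) * consistent

# Unlearning by task negation (hypothetical usage):
#   tv_i = flatten(theta_ft_i) - flatten(theta_0)   for each hyperparameter setting
#   merged = sign_consistent_merge([tv_1, tv_2, ...])
#   theta_unlearned = theta_0 - alpha * merged       # alpha: scaling coefficient
```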
- (a) Performance comparison with competing methods (and with fine-tuned models alone) in task negation.
- (b) Detailed parameter sensitivity across various datasets.
- (2025/05/01): Our paper has been accepted at ICML 2025 🎉
- (2024/10/09): Our paper has been accepted at the NeurIPS 2024 Workshop on Adaptive Foundation Models 🎉
- (2024/10/03): Code is under internal review.
- (2024/10/03): Preprint has been uploaded.