naver-ai/negmerge


Hyoseo Kim¹,²*, Dongyoon Han¹†, Junsuk Choe²†
*Work done during an internship at NAVER AI Lab; †corresponding authors
¹NAVER AI Lab, ²Sogang University

paper

Abstract

Machine unlearning aims to selectively remove specific knowledge from a trained model. Existing approaches, such as task arithmetic, fine-tune the model on the forget set to create a task vector (i.e., a direction in weight space) that is subtracted from the original weights. However, their effectiveness is highly sensitive to hyperparameter selection, requiring extensive validation to identify the optimal vector from many fine-tuned candidates. In this paper, we propose a novel method that utilizes all fine-tuned models trained with varying hyperparameters instead of selecting a single one. Specifically, we aggregate the computed task vectors by retaining only the elements with consistent shared signs. The merged task vector is then negated to induce unlearning on the original model. Evaluations on zero-shot and standard image recognition tasks across ten datasets and three backbone architectures show that our approach achieves superior unlearning performance, outperforming state-of-the-art methods while requiring similar or fewer computational resources.
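The merging rule described above can be sketched in a few lines of PyTorch. This is an illustrative sketch, not the code in this repository: it assumes each task vector is given as a state dict of tensors (fine-tuned weights minus pretrained weights), keeps only elements whose sign agrees across all candidates, and combines the surviving entries before negation. The averaging of consistent entries and the scaling coefficient `alpha` are assumptions made for illustration.

```python
import torch

def merge_task_vectors(task_vectors):
    """Element-wise sign-consensus merge of task vectors.

    task_vectors: list of state dicts, each mapping parameter name -> tensor
    (theta_finetuned - theta_pretrained). Elements whose sign differs across
    any pair of task vectors are zeroed out; consistent elements are averaged
    (the averaging step is an assumption, not necessarily the paper's rule).
    """
    merged = {}
    for name in task_vectors[0]:
        stacked = torch.stack([tv[name].float() for tv in task_vectors])  # (K, ...)
        signs = torch.sign(stacked)
        # True where every task vector shares the same nonzero sign
        consensus = (signs == signs[0]).all(dim=0) & (signs[0] != 0)
        merged[name] = torch.where(
            consensus, stacked.mean(dim=0), torch.zeros_like(stacked[0])
        )
    return merged

def apply_negation(pretrained_state, merged_tv, alpha=1.0):
    """Unlearn by subtracting the (scaled) merged task vector:
    theta_unlearned = theta_pretrained - alpha * tau_merged."""
    return {
        k: pretrained_state[k] - alpha * merged_tv[k].to(pretrained_state[k].dtype)
        for k in merged_tv
    }
```

In this sketch, each fine-tuning run (with its own hyperparameters) contributes one task vector; no per-run validation is needed, since the sign-consensus mask is computed element-wise across all candidates at once.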

Our Motivation: Hyperparameter Sensitivity in Negation Methods

  • (a) compares task negation performance against competing methods, including individual fine-tuned models used alone;

  • (b) shows how sensitive each method is to hyperparameter choices across various datasets.


Updates
