Learning with Instance-dependent Noisy Labels by Anchor Hallucination and Hard Sample Label Correction

International Conference on Image Processing (ICIP 2024)

Authors

Po-Hsuan Huang*, Chia-Ching Lin*, Chih-Fan Hsu, Ming-Ching Chang, Wei-Chao Chen

Publication Date

2024/6/10

Learning with instance-dependent noisy labels

Abstract

Learning from noisy-labeled data is crucial for real-world applications. Traditional Noisy-Label Learning (NLL) methods categorize training data into clean and noisy sets based on the loss distribution of training samples. However, they often neglect that clean samples, especially those with intricate visual patterns, may also yield substantial losses. This oversight is particularly significant in datasets with Instance-Dependent Noise (IDN), where mislabeling probabilities correlate with visual appearance.
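As context for the loss-based partition mentioned above, the sketch below shows the common practice of fitting a two-component Gaussian Mixture Model to per-sample losses and treating the low-loss component as "clean" (as popularized by methods such as DivideMix). This is an illustrative assumption about standard NLL pipelines, not this paper's implementation; the function name and threshold are hypothetical.

```python
# Minimal sketch: loss-based clean/noisy partition via a 2-component GMM.
# Illustrative only; not the paper's implementation.
import numpy as np
from sklearn.mixture import GaussianMixture

def partition_by_loss(per_sample_losses, clean_prob_threshold=0.5):
    """Fit a 2-component GMM to per-sample losses; the component with the
    lower mean is treated as 'clean'. Returns a boolean mask of presumed-clean samples."""
    losses = np.asarray(per_sample_losses, dtype=np.float64).reshape(-1, 1)
    # Normalize losses to [0, 1] for a more stable fit.
    losses = (losses - losses.min()) / (losses.max() - losses.min() + 1e-8)
    gmm = GaussianMixture(n_components=2, max_iter=100, reg_covar=5e-4)
    gmm.fit(losses)
    clean_component = int(np.argmin(gmm.means_.ravel()))
    clean_prob = gmm.predict_proba(losses)[:, clean_component]
    return clean_prob > clean_prob_threshold

# Example: samples with large losses are flagged as likely noisy.
mask = partition_by_loss([0.1, 0.2, 0.15, 2.3, 1.9, 0.05])
```

The paper's observation is that such a partition conflates "noisy" with "hard": a correctly labeled sample with a complex visual pattern can also land in the high-loss component.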

Our approach explicitly distinguishes between clean vs. noisy and easy vs. hard samples. We identify training samples with small losses, assuming they have simple patterns and correct labels. Utilizing these easy samples, we hallucinate multiple anchors to select hard samples for label correction. Corrected hard samples, along with the easy samples, are used as labeled data in subsequent semi-supervised training. Experiments on synthetic and real-world IDN datasets demonstrate the superior performance of our method over other state-of-the-art NLL methods.
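To make the anchor idea concrete, here is a hedged sketch of anchor-based label correction under one plausible reading of the abstract: multiple anchors per class are "hallucinated" by mixing features of easy (small-loss) samples, and each hard sample is relabeled with the class of its most similar anchor. The feature extractor, the Dirichlet mixing rule, and the cosine similarity measure are assumptions made for illustration, not the paper's exact procedure.

```python
# Hedged sketch: anchor hallucination from easy samples and nearest-anchor
# label correction for hard samples. Details are illustrative assumptions.
import numpy as np

def hallucinate_anchors(easy_feats, easy_labels, num_classes,
                        anchors_per_class=3, rng=None):
    """Create several anchors per class as convex combinations of easy-sample features."""
    if rng is None:
        rng = np.random.default_rng(0)
    anchors, anchor_labels = [], []
    for c in range(num_classes):
        feats_c = easy_feats[easy_labels == c]
        for _ in range(anchors_per_class):
            weights = rng.dirichlet(np.ones(len(feats_c)))  # random convex mix
            anchors.append(weights @ feats_c)
            anchor_labels.append(c)
    return np.stack(anchors), np.array(anchor_labels)

def correct_hard_labels(hard_feats, anchors, anchor_labels):
    """Assign each hard sample the label of its most similar anchor (cosine similarity)."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    h = hard_feats / np.linalg.norm(hard_feats, axis=1, keepdims=True)
    sim = h @ a.T
    return anchor_labels[sim.argmax(axis=1)]
```

The corrected hard samples and the easy samples would then serve as the labeled set for the semi-supervised stage, with the remaining samples treated as unlabeled.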

Keywords

  • Noisy Label Learning
  • Semi-supervised Learning
