Understanding and mitigating the label noise in pre-training on downstream tasks
Published in arXiv preprint arXiv:2309.17002, 2023
Pre-training on large-scale datasets has become the foundation of modern deep learning, but these datasets often contain significant label noise. This work explores how label noise in pre-training data affects model performance on downstream tasks. We provide a theoretical and empirical analysis of the noise propagation mechanism and propose mitigation strategies that improve the robustness of pre-trained models when they are transferred to clean downstream tasks.
Recommended citation:
@article{chen2023understanding,
  title={Understanding and Mitigating the Label Noise in Pre-Training on Downstream Tasks},
  author={Chen, Hao and Wang, Jiahao and Shah, Ankit and Tao, Ran and Wei, Hongxin and Xie, Xing and Sugiyama, Masashi and Raj, Bhiksha},
  journal={arXiv preprint arXiv:2309.17002},
  year={2023}
}
Download Paper