AIIMix用于標(biāo)簽噪聲學(xué)習(xí)的圖像分類(lèi)方法
中圖分類(lèi)號(hào):TP183文獻(xiàn)標(biāo)志碼:A
Abstract: Datasets collected and annotated manually are inevitably contaminated with label noise, which degrades the generalization ability of image classification models. Designing classification algorithms that are robust to label noise has therefore become an active research topic. The main issues with existing methods are that self-supervised pre-training is time-consuming and that many noisy samples still remain after sample selection. This paper introduces the AllMix model, which reduces the time required for pre-training. Building on the DivideMix model, the AllMatch training strategy replaces the original MixMatch training strategy. AllMatch uses focal loss and generalized cross-entropy loss to optimize the loss calculation for labeled samples, and it introduces a high-confidence-sample semi-supervised learning module and a contrastive learning module to fully exploit unlabeled samples. Experimental results show that on the CIFAR10 dataset, at 50%, 80%, and 90% symmetric noise ratios, the model outperforms existing pre-trained label noise classification algorithms by 0.7%, 0.7%, and 5.0%, respectively, without pre-training. On the CIFAR100 dataset with 80% and 90% symmetric noise ratios, the model performance is 2.8% and 10.1% higher, respectively.
Keywords: label noise learning; image classification; semi-supervised learning; contrastive learning
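The abstract names two loss functions that AllMatch applies to labeled samples: focal loss and generalized cross-entropy (GCE) loss. The PyTorch sketch below is only an illustrative combination of these two standard losses; the hyperparameters gamma, q, and the weights alpha and beta are placeholders for exposition, not values taken from the paper, and the code is not the authors' implementation.

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    # Focal loss: down-weights well-classified samples by (1 - p_t)^gamma.
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)
    pt = log_pt.exp()
    return (-(1.0 - pt) ** gamma * log_pt).mean()

def gce_loss(logits, targets, q=0.7):
    # Generalized cross-entropy: L_q = (1 - p_t^q) / q, which interpolates
    # between cross-entropy (q -> 0) and MAE (q = 1) and tolerates noisy labels.
    pt = F.softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
    return ((1.0 - pt.clamp(min=1e-7).pow(q)) / q).mean()

def labeled_loss(logits, targets, alpha=1.0, beta=1.0):
    # Hypothetical weighted sum of the two terms for the labeled subset.
    return alpha * focal_loss(logits, targets) + beta * gce_loss(logits, targets)

# Example: a batch of 8 predictions over 10 classes (CIFAR10-like) with noisy labels.
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
print(labeled_loss(logits, targets).item())

GCE is commonly chosen in label noise settings because its gradient saturates for low-confidence (likely mislabeled) samples, while the focal term keeps hard but clean samples from being ignored; how AllMatch actually balances the two is described in the body of the paper, not in this sketch.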
Introduction
Deep learning techniques such as convolutional neural networks (CNNs) have been widely applied to image classification [1-3].