Python Image Recognition Notes (28): BalancedMetaSoftmax — Instance Segmentation

When you train on an imbalanced dataset and also test on an imbalanced dataset, Balanced Softmax may perform worse than the ordinary Softmax. The setting in this paper is to train on an imbalanced dataset and test on a balanced dataset.
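For intuition, here is a tiny sketch of the difference (my own illustration, not code from the paper or the repo): Balanced Softmax folds the per-class training sample counts n_j into the softmax, so during training the probability of class j is proportional to n_j * exp(z_j) rather than exp(z_j).

import torch
logits = torch.tensor([2.0, 2.0, 2.0])        # identical logits for 3 classes
counts = torch.tensor([1000.0, 100.0, 10.0])  # long-tailed training sample counts n_j
plain_p = torch.softmax(logits, dim=0)                     # uniform: [0.33, 0.33, 0.33]
balanced_p = torch.softmax(logits + counts.log(), dim=0)   # prior-weighted: [0.90, 0.09, 0.01]
# Training against the prior-weighted probabilities pushes tail-class logits higher,
# which is exactly what helps when the test distribution is balanced.
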
# PyTorch 1.6 (CUDA 9.2 build)
pip install torch==1.6.0+cu92 torchvision==0.7.0+cu92 -f https://download.pytorch.org/whl/torch_stable.html
# detectron2 (version 0.2.1 is required, otherwise it will fail)
python -m pip install detectron2==0.2.1 -f \
https://dl.fbaipublicfiles.com/detectron2/wheels/cu92/torch1.6/index.html
# lvis api (dataset setup)
pip install git+https://github.com/lvis-dataset/lvis-api.git
# higher
pip install higher
# fvcore (pin this version to avoid a known issue)
pip install fvcore==0.1.1.post20200716
# tensorboard
pip install tensorboard==1.15
# project source code
git clone https://github.com/Majiker/BalancedMetaSoftmax-InstanceSeg
Note: LVIS uses the COCO 2017 train, validation, and test image sets. If you have already downloaded the COCO images, you only need to download the LVIS annotations. LVIS val set contains images from COCO 2017 train in addition to the COCO 2017 val split.
coco/
{train,val,test}2017/
lvis/
lvis_v0.5_{train,val}.json
lvis_v0.5_image_info_test.json
lvis_v1_{train,val}.json
lvis_v1_image_info_test{,_challenge}.json
# Download the COCO dataset
wget https://raw.githubusercontent.com/ultralytics/yolov3/master/data/scripts/get_coco.sh
sh get_coco.sh
# After the download finishes, move train2017 and val2017 from coco/images to coco/.
# Download the LVIS annotations
wget https://s3-us-west-2.amazonaws.com/dl.fbaipublicfiles.com/LVIS/lvis_v0.5_train.json.zip
wget https://s3-us-west-2.amazonaws.com/dl.fbaipublicfiles.com/LVIS/lvis_v1_train.json.zip
wget https://s3-us-west-2.amazonaws.com/dl.fbaipublicfiles.com/LVIS/lvis_v0.5_val.json.zip
wget https://s3-us-west-2.amazonaws.com/dl.fbaipublicfiles.com/LVIS/lvis_v1_val.json.zip
wget https://s3-us-west-2.amazonaws.com/dl.fbaipublicfiles.com/LVIS/lvis_v1_image_info_test_challenge.json.zip
wget https://s3-us-west-2.amazonaws.com/dl.fbaipublicfiles.com/LVIS/lvis_v1_image_info_test_dev.json.zip
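The downloaded annotation files are zip archives. A small helper to unpack them into the lvis/ folder expected by the layout above (my own sketch; the datasets/lvis target path is an assumption, adjust it to your dataset root):

import zipfile
from pathlib import Path

lvis_dir = Path("datasets/lvis")                 # assumed dataset root; adjust as needed
lvis_dir.mkdir(parents=True, exist_ok=True)
for zf in Path(".").glob("lvis_v*.json.zip"):    # the zips downloaded with wget above
    with zipfile.ZipFile(zf) as z:
        z.extractall(lvis_dir)                   # each archive contains one .json annotation file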

After reading the source code, I found that the pre-trained weights can be loaded like this:

wget https://dl.fbaipublicfiles.com/detectron/ImageNetPretrained/MSRA/R-50.pkl
# Modify line 314 of BalancedMetaSoftmax-InstanceSeg/detectron2/engine/defaults.py from:
checkpoint = self.checkpointer.resume_or_load(self.cfg.MODEL.WEIGHTS, resume=resume)
Change it to:
checkpoint = self.checkpointer.resume_or_load("/root/notebooks/nfs/work/yanwei.liu/BalancedMetaSoftmax-InstanceSeg/pretrains/R-50.pkl", resume=resume)
Alternatively, the following command lets the program pick up the R-50.pkl pre-trained weights (it simply appends a MODEL.WEIGHTS argument after --config-file); this mechanism can be seen at line 65 of defaults.py:
python ./projects/BALMS/train_net.py --num-gpus 4 --config-file XXXX.yaml MODEL.WEIGHTS ./pretrains/R-50.pkl
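Why the trailing MODEL.WEIGHTS ./pretrains/R-50.pkl works: the default argument parser collects everything after the known flags into args.opts, and those key/value pairs are merged into the config. A minimal sketch of that mechanism (my own example, using detectron2's get_cfg and the yacs merge_from_list API):

from detectron2.config import get_cfg

cfg = get_cfg()
opts = ["MODEL.WEIGHTS", "./pretrains/R-50.pkl"]   # what args.opts looks like for the command above
cfg.merge_from_list(opts)                          # key/value pairs merged into the config
print(cfg.MODEL.WEIGHTS)                           # ./pretrains/R-50.pkl
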
# Baseline training
python ./projects/BALMS/train_net.py --num-gpus 4 --config-file ./projects/BALMS/configs/feature/sigmoid_resampling_mask_rcnn_R_50_FPN_1x.yaml MODEL.WEIGHTS ./pretrains/R-50.pkl
# BALMS_decouple training
# Put the baseline-trained weights into the pretrains folder and point the weight path in the .yaml file to that checkpoint filename, then the balms_decouple training can continue from it
python ./projects/BALMS/train_net.py --config-file ./projects/BALMS/configs/classifier/balms_decouple_resampling_mask_rcnn_R_50_FPN_1x.yaml --num-gpus 4
# BalancedSoftmax training
python ./projects/BALMS/train_net.py --config-file ./projects/BALMS/configs/classifier/balanced_softmax_decouple_resampling_mask_rcnn_R_50_FPN_1x.yaml --num-gpus 4
Training results on a Tesla V100 GPU
[551463, 246249, 12372, 11586, 10298, 6474]
# For example, if the original code used nn.CrossEntropyLoss(), comment it out and switch to create_loss
from BalancedSoftmaxLoss import create_loss
#criterion = nn.CrossEntropyLoss().cuda()
criterion = create_loss().cuda()  # use create_loss instead
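If you do not want to depend on the repo's create_loss, a drop-in replacement for nn.CrossEntropyLoss can be written directly from the Balanced Softmax formulation. The class below is my own sketch (its name and constructor are not the repo's API), and it assumes the list above is the per-class training sample count:

import torch
import torch.nn as nn
import torch.nn.functional as F

class BalancedSoftmaxCE(nn.Module):
    """Cross-entropy with the Balanced Softmax adjustment: the log class prior is added to the logits."""
    def __init__(self, samples_per_class):
        super().__init__()
        counts = torch.as_tensor(samples_per_class, dtype=torch.float)
        self.register_buffer("log_prior", counts.clamp(min=1).log())

    def forward(self, logits, labels):
        return F.cross_entropy(logits + self.log_prior, labels)

samples_per_class = [551463, 246249, 12372, 11586, 10298, 6474]
criterion = BalancedSoftmaxCE(samples_per_class).cuda()   # used exactly like nn.CrossEntropyLoss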
