Install iSH Shell


Install apk (the Alpine Linux package manager)

wget -qO- | tar -xz sbin/apk.static && ./sbin/apk.static add apk-tools && rm sbin/apk.static && rmdir sbin 2> /dev/null

Once apk is installed, you can use the following command to install the nano text editor.

apk add nano

Install Python

apk update
apk add python3
python3
print('hello world')


Q: What is OSANet?
A: OSANet is built by using the OSA module from VoVNet. From the paper: "the proposed COSA can get a higher AP. Therefore, we finally chose COSA-2x2x, which received the best speed/accuracy trade-off in our experiment, as the YOLOv4-tiny architecture."

Q: What is the difference between YOLOv4-tiny and YOLOv4-tiny-3l?
A: YOLOv4-tiny has 2 YOLO layers, while the 3l variant has 3.
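The layer count can be checked directly from a Darknet cfg file. Below is a minimal sketch (the toy cfg content is made up for illustration) that counts the `[yolo]` sections:

```python
# Count the [yolo] detection-layer sections in a Darknet cfg.
# yolov4-tiny's cfg contains 2 such sections; yolov4-tiny-3l's contains 3.
def count_yolo_layers(cfg_text: str) -> int:
    return sum(1 for line in cfg_text.splitlines() if line.strip() == "[yolo]")

toy_cfg = "[net]\n[convolutional]\n[yolo]\n[convolutional]\n[yolo]\n"
print(count_yolo_layers(toy_cfg))  # 2
```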

[2] Temporarily append balance = [0.4, 1.0] if np == 2 else balance after it
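As a hedged sketch of what that one-line patch does (assuming it adjusts the per-layer objectness-loss weights in the loss code, where np is the number of YOLO output layers; the 3-layer default values below are illustrative, not taken from the repo):

```python
# Illustrative default: one objectness-loss weight per detection layer.
balance = [4.0, 1.0, 0.4]  # hypothetical 3-layer defaults

np = 2  # number of YOLO layers, e.g. 2 for YOLOv4-tiny

# The patch from the note: fall back to a 2-element weight list for 2 layers.
balance = [0.4, 1.0] if np == 2 else balance
print(balance)  # [0.4, 1.0]
```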



GitHub Repo


Compared with the old version, this version of the code supports multi-GPU training, large-batch training (batch = 64, about 15 minutes per epoch), resume training, YOLOv4-tiny (you need to modify the code yourself), and training from pre-trained weights.


Please use COCO2017 as the training, validation, and test set. If you use the COCO2014 dataset, you can use this script to download it: filename="" …

pip install mammoth                            # install the package
mammoth input.docx output.html                 # docx to HTML
mammoth sample.docx --output-format=markdown   # docx to Markdown

For more usage, please follow the instructions in the official docs:

[Done] [2010.03522] A Survey of Deep Meta-Learning

[1710.03463] Learning to Generalize: Meta-Learning for Domain Generalization

[1912.07200] A Broader Study of Cross-Domain Few-Shot Learning

[2001.08735] Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation

[2004.14164] MICK: A Meta-Learning Framework for Few-shot Relation Classification with Little Training Data

[2005.10544] Cross-Domain Few-Shot Learning with Meta Fine-Tuning

[2006.11384] A Transductive Multi-Head Model for Cross-Domain Few-Shot Learning

[2010.06498] Cross-Domain Few-Shot Learning by Representation Fusion

[2011.00179] Combining Domain-Specific Meta-Learners in the Parameter Space for Cross-Domain Few-Shot Classification

Explain and Improve: Cross-Domain Few-Shot-Learning Using Explanations

A Broader Study of Cross-Domain Few-Shot Learning

[Done] [Paper reading] [Meta-learning] Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation

[Done] A Summary of Cross-Domain Problems in Few-Shot Learning (Zhihu)

[Done] A Survey of Cross-Domain Few-shot Learning (Zhihu)

[Done] To Learn How to Learn: Meta-Learning Reading Notes


In this article, I will show you how to download datasets from Kaggle with the Kaggle API.

# install kaggle api to fetch dataset
pip install kaggle --upgrade
# move your API token to ~/.kaggle/kaggle.json
# you can check the following link to set up your kaggle api token

# Download the dataset and use unzip command according to your folder structure
kaggle datasets download userName/datasetName
Example: kaggle datasets download kneroma/tacotrashdataset
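The downloaded dataset arrives as a zip archive; the unzip step can be sketched with the standard library (the function name and paths are mine, for illustration):

```python
import zipfile

def extract_dataset(zip_path: str, dest: str) -> list:
    """Unzip a downloaded Kaggle archive and return the extracted file names."""
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest)
        return zf.namelist()
```

For example, after the kaggle command above you might call extract_dataset("tacotrashdataset.zip", "data/").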



Create a new repo with your own username.

My GitHub account is e96031413, so I have to create a new repo named 


Use to create a Profile README


Copy the markdown text generated by and paste it into the file inside your username repo.

For me, it would be this file.


Return to your , and you can see that everything appears as expected.


python -m torch.distributed.launch --nproc_per_node 2 train.py --batch-size 64 --data coco.yaml --cfg yolov5l.yaml --weights ''
--nproc_per_node specifies how many GPUs you would like to use (2 in the example above).
--batch-size is now the total batch size; it is divided evenly across the GPUs (64/2 = 32 per GPU in the example above).
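The division described above is plain integer arithmetic; a small sketch (the helper name is mine, not part of yolov5):

```python
def per_gpu_batch(total_batch: int, nproc_per_node: int) -> int:
    """The total --batch-size is divided evenly across the launched processes."""
    assert total_batch % nproc_per_node == 0, "batch size must divide evenly across GPUs"
    return total_batch // nproc_per_node

print(per_gpu_batch(64, 2))  # 32
```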




  1. Automatically downloads missing weights from Google Drive
  2. Supports data augmentation and model training without an OpenCV (C++) environment
  3. Can run mAP evaluation directly without uploading to CodaLab, though evaluation takes about 25 minutes (using the official weights as an example)

Besides these, more features are listed in the official GitHub repo:





Some differences from ultralytics/yolov3:




# Basic environment setup (get_coco2014.sh fetches the COCO data; skip it if you don't need mAP evaluation)
git clone
bash yolov5/data/scripts/ …


Yanwei Liu

Machine Learning / Deep Learning / Python / Flutter
