Install iSH Shell

Tutorial

Install apk (Alpine Linux package manager)

wget -qO- http://dl-cdn.alpinelinux.org/alpine/v3.12/main/x86/apk-tools-static-2.10.5-r1.apk | tar -xz sbin/apk.static && ./sbin/apk.static add apk-tools && rm sbin/apk.static && rmdir sbin 2> /dev/null

Once you have installed apk, you can use the following command to install the nano text editor.

apk add nano

Install Python

apk update
apk add python3
python3
print('hello world')

Updates

Q: What is OSANet?
A: The OSA module from VoVNet is used to build OSANet. The proposed COSA can get a higher AP; therefore, COSA-2x2x, which achieved the best speed/accuracy trade-off in their experiments, was finally chosen as the YOLOv4-tiny architecture.
Q: What is the difference between YOLOv4-tiny and YOLOv4-tiny-3l?
A: There are 2 YOLO layers in YOLOv4-tiny, while the 3l variant has 3.

A:
[1] The code for training YOLOv4-tiny can be referenced from
[2] Temporarily add balance = [0.4, 1.0] if np == 2 else balance after it (see the sketch below).
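
A minimal sketch of where that one-line patch would sit, assuming an ultralytics-style loss function in which np holds the number of YOLO output layers; the surrounding function here is illustrative only, not the repository's actual code:

# Illustrative only: shows the effect of the suggested patch for a model
# with 2 YOLO output layers (YOLOv4-tiny) vs. the usual 3.
def layer_balance(np):
    # typical per-layer balance weights for 3 output layers
    # (check the repo for the exact defaults)
    balance = [4.0, 1.0, 0.4]
    # the patch from note [2]: use a 2-element list when there are only 2 layers
    balance = [0.4, 1.0] if np == 2 else balance
    return balance

print(layer_balance(2))  # [0.4, 1.0]  -> YOLOv4-tiny (2 YOLO layers)
print(layer_balance(3))  # [4.0, 1.0, 0.4]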

Performance comparison of YOLOv4-series models

Paper link

GitHub Repo

New features

Compared with the old version, this version of the code supports multi-GPU training, large batch-size training (batch = 64, about 15 minutes per epoch), resume training, YOLOv4-tiny (you need to modify the code yourself), and training from pre-trained weights.

Dataset preparation

Please use COCO2017 as the training, validation, and test sets. If you use the COCO2014 dataset, … You can use this script to download the dataset: filename="coco2017labels.zip" …



pip install mammoth                                        # install package
mammoth input.docx output.html                             # docx to html
mammoth sample.docx output.md --output-format=markdown     # docx to md
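
The same conversion can also be done through mammoth's Python API; a minimal sketch (file names are placeholders):

import mammoth

with open("input.docx", "rb") as docx_file:
    result = mammoth.convert_to_html(docx_file)        # use convert_to_markdown() for Markdown output

with open("output.html", "w", encoding="utf-8") as html_file:
    html_file.write(result.value)                      # the generated HTML

print(result.messages)                                 # any conversion warnings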

For more usage, please follow the instructions in the official docs:


[Done] [2010.03522] A Survey of Deep Meta-Learning

[1710.03463] Learning to Generalize: Meta-Learning for Domain Generalization

[1912.07200] A Broader Study of Cross-Domain Few-Shot Learning

[2001.08735] Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation

[2004.14164] MICK: A Meta-Learning Framework for Few-shot Relation Classification with Little Training Data

[2005.10544] Cross-Domain Few-Shot Learning with Meta Fine-Tuning

[2006.11384] A Transductive Multi-Head Model for Cross-Domain Few-Shot Learning

[2010.06498] Cross-Domain Few-Shot Learning by Representation Fusion

[2011.00179] Combining Domain-Specific Meta-Learners in the Parameter Space for Cross-Domain Few-Shot Classification

Explain and Improve: Cross-Domain Few-Shot-Learning Using Explanations

A Broader Study of Cross-Domain Few-Shot Learning

[Done] [Paper reading] [meta learning] Cross-Domain Few-Shot Classification via Learned Feature-Wise Transformation

[Done] A Summary of Cross-Domain Problems in Few-Shot Learning - Zhihu

[Done] A Survey of Cross-Domain Few-shot Learning - Zhihu

[Done] To Learn How to Learn - Meta-Learning Reading Notes

[Done] The Evolution of Few-Shot Learning (FSL) Methods


In this article, I will show you how to download datasets from Kaggle with the Kaggle API.

# install kaggle api to fetch dataset
pip install kaggle --upgrade
# move your API token to ~/.kaggle/kaggle.json
# you can check the following link to set up your kaggle api token

#
# Download the dataset and use the unzip command according to your folder structure
kaggle datasets download userName/datasetName
# Example:
kaggle datasets download kneroma/tacotrashdataset
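
If you prefer to stay in Python, the same download can be done with the Kaggle API client (it reads the same ~/.kaggle/kaggle.json token); a minimal sketch using the example dataset above:

from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()                     # uses the token in ~/.kaggle/kaggle.json
api.dataset_download_files(
    "kneroma/tacotrashdataset",        # userName/datasetName
    path="./data",                     # destination folder
    unzip=True,                        # extract instead of keeping the .zip
)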



Step1

Create a new repo named after your own username.

My GitHub account is e96031413, so I have to create a new repo named e96031413.

Step2

Use the generator to create a Profile README.

Step3

Copy the markdown text it generates and paste it into the README.md file inside your username repo.

For me, it would be this file.

Step4

Return to your GitHub profile, and you can see that all the beautiful stuff appears.


2020/10/22:

python -m torch.distributed.launch --nproc_per_node 2 train.py --batch-size 64 --data coco.yaml --cfg yolov5l.yaml --weights ''
--nproc_per_node specifies how many GPUs you would like to use; in the example above, it is 2.
--batch-size is now the total batch size; it will be divided evenly across the GPUs. In the example above, it is 64/2 = 32 per GPU.
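
A quick sanity check before launching, to confirm the GPU count and the resulting per-GPU batch size (the values here mirror the example command):

import torch

n_gpus = torch.cuda.device_count()
print("Visible GPUs:", n_gpus)                 # should be >= the --nproc_per_node value (2 above)

total_batch = 64                               # the --batch-size passed to train.py
print("Per-GPU batch size:", total_batch // max(n_gpus, 1))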

Main content:

ultralytics/yolov5 is a PyTorch implementation of YOLOv5 developed by the company Ultralytics.

Features:

  1. It can automatically download missing weights from Google Drive.
  2. Data augmentation and model training can be done without an OpenCV (C++) environment.
  3. mAP testing can be run directly, without uploading to CodaLab, but the test takes about 25 minutes (using the official weights).

In addition, more features can be found in the official GitHub repo:

Notes:

The COCO directory used by ultralytics must be placed in a directory parallel to the yolov5 folder.

For example:

Working directory: /work/yanwei.liu/yolov5
COCO directory: /work/yanwei.liu/coco

Some differences from ultralytics/yolov3:

The model configuration no longer uses cfg files; it now uses files in YAML format. The file in the yolov5/data folder used to be coco.data; it has been replaced by coco.yaml.
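
A minimal sketch of reading such a YAML data config from Python; the keys shown (train/val/nc/names) are the ones typically found in coco.yaml, and the path is a placeholder:

import yaml  # requires the PyYAML package

with open("yolov5/data/coco.yaml", "r", encoding="utf-8") as f:
    data_cfg = yaml.safe_load(f)

print(data_cfg.get("train"))   # image list / path for training
print(data_cfg.get("val"))     # image list / path for validation
print(data_cfg.get("nc"))      # number of classes (80 for COCO)
print(data_cfg.get("names"))   # class names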

Usage:

Check whether your python command defaults to Python 2 or Python 3. If it defaults to Python 3, you can safely use the commands below; if not, use the python3 command instead.

# Basic environment setup (get_coco2014.sh is used to fetch the COCO data; skip this if you don't need to run mAP testing)
git clone
bash yolov5/data/scripts/get_coco.sh …

About

Yanwei Liu

Machine Learning / Deep Learning / Python / Flutter
