How do you get the output of a specific layer in a PyTorch model?

We usually only care about the model's final output and rarely pay attention to what the intermediate layers produce. What can we do if we want to capture an intermediate layer's output?

For example, t-SNE visualization uses the output of the layer right before the classifier.

1. register_forward_hook (CSDN)

Get the output of conv2 in LeNet (a list is used to store the values).
This approach worked fine for me when iterating over a dataloader; with approach 2 I ran into a NoneType error instead.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LeNet(nn.Module):
    def __init__(self):
        super(LeNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16*5*5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        out = self.conv1(x)
        out = F.relu(out)
        out = F.max_pool2d(out, 2)

        out = self.conv2(out)
        out = F.relu(out)
        out = F.max_pool2d(out, 2)

        out = out.view(out.size(0), -1)
        out = F.relu(self.fc1(out))
        out = F.relu(self.fc2(out))
        out = self.fc3(out)
        return out
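
The snippet above only defines the model; the list-based hook described in the heading is not shown. A minimal sketch of what it could look like (the names features, hook_fn, and the dummy input are my own illustration):

# Minimal sketch of the list-based hook, assuming the LeNet class above.
features = []  # the hook appends conv2's output here on every forward pass

def hook_fn(module, input, output):
    # output is conv2's result for the current batch; detach it so it
    # no longer tracks gradients before storing it
    features.append(output.detach())

model = LeNet()
handle = model.conv2.register_forward_hook(hook_fn)

x = torch.randn(1, 3, 32, 32)   # dummy 32x32 RGB image
_ = model(x)                    # the hook fires during this forward pass

print(features[0].shape)        # torch.Size([1, 16, 10, 10])
handle.remove()                 # remove the hook once it is no longer needed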

2. register_forward_hook (PyTorch Forum)

Get the output of fc2 (a dict is used to store the layer name and values).

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.cl1 = nn.Linear(25, 60)
        self.cl2 = nn.Linear(60, 16)
        self.fc1 = nn.Linear(16, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = F.relu(self.cl1(x))
        x = F.relu(self.cl2(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = F.log_softmax(self.fc3(x), dim=1)
        return x
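
As above, the dict-based hook itself is not in the snippet; a sketch of one common pattern for it (the names activation and get_activation are my additions):

# Minimal sketch of the dict-based hook, assuming the MyModel class above.
activation = {}  # maps a layer name to its most recent output

def get_activation(name):
    def hook(module, input, output):
        activation[name] = output.detach()
    return hook

model = MyModel()
model.fc2.register_forward_hook(get_activation('fc2'))

x = torch.randn(1, 25)          # dummy input with 25 features
_ = model(x)

print(activation['fc2'].shape)  # torch.Size([1, 84])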

Using a built-in or pretrained model:

Combining it with methods 1 and 2 above accomplishes the same goal.

Using PyTorch's built-in ResNet18

import os
import torch
import torchvision.models as models
import torch.optim
from torchvision import transforms
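
Only the imports appear in the original here. As one way to combine this with the hooks above, a sketch that hooks the avgpool layer of a randomly initialized ResNet18 (the choice of layer and all variable names are my assumptions):

# Sketch: dict-based hook on torchvision's built-in ResNet18 (random weights).
activation = {}

def get_activation(name):
    def hook(module, input, output):
        activation[name] = output.detach()
    return hook

model = models.resnet18()                      # built-in architecture, untrained
model.avgpool.register_forward_hook(get_activation('avgpool'))

x = torch.randn(1, 3, 224, 224)                # dummy ImageNet-sized input
_ = model(x)

print(activation['avgpool'].shape)             # torch.Size([1, 512, 1, 1])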

Using a pretrained ResNet18

import os
import torch
import torchvision.models as models
import torch.optim
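
Again the original stops at the imports. A sketch of the pretrained case, reusing the same dict-based hook; the flattened avgpool output is exactly the kind of feature the t-SNE example at the top would use. The pretrained=True flag reflects the classic torchvision API and is my assumption about the intended usage here.

# Sketch: the same hook on an ImageNet-pretrained ResNet18.
# pretrained=True is the classic flag; newer torchvision versions use
# models.resnet18(weights="IMAGENET1K_V1") instead.
activation = {}

def get_activation(name):
    def hook(module, input, output):
        activation[name] = output.detach()
    return hook

model = models.resnet18(pretrained=True)
model.eval()                                   # inference mode for a trained net
model.avgpool.register_forward_hook(get_activation('avgpool'))

x = torch.randn(1, 3, 224, 224)                # stand-in for a preprocessed image
with torch.no_grad():
    _ = model(x)

# one 512-dim feature per image, e.g. for the t-SNE visualization mentioned above
feat = activation['avgpool'].flatten(1)
print(feat.shape)                              # torch.Size([1, 512])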
