# How Does PyTorch Backpropagate the Losses from Multiple Different Loss Functions During Training?

Suppose a model's output is scored by three different loss functions:

```python
loss_a = criterion_a(output, target)
loss_b = criterion_b(output, target)
loss_c = criterion_c(output, target)
```

The simplest approach is to sum the losses and call `backward()` once on the total:

```python
loss = loss_a + loss_b + loss_c
loss.backward()
```
Calling `backward()` on each loss separately does not work as written, because the first call frees the computation graph that all three losses share:

```python
loss_a.backward()  # frees the shared graph
loss_b.backward()  # RuntimeError: trying to backward through the graph a second time
loss_c.backward()
```

The fix is to pass `retain_graph=True` to every `backward()` call except the last:

```python
loss_a.backward(retain_graph=True)
loss_b.backward(retain_graph=True)  # corrected 2022/04/24
loss_c.backward()  # the last call is allowed to free the graph
```
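A minimal runnable sketch (the toy model, data, and two criteria are illustrative, not from the original post) showing that the two strategies produce identical gradients, since `backward()` accumulates into each parameter's `.grad`:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(4, 1)            # toy model for illustration
x = torch.randn(8, 4)
target = torch.randn(8, 1)

criterion_a = nn.MSELoss()
criterion_b = nn.L1Loss()

# Strategy 1: sum the losses, one backward pass
output = model(x)
loss = criterion_a(output, target) + criterion_b(output, target)
model.zero_grad()
loss.backward()
grads_summed = [p.grad.clone() for p in model.parameters()]

# Strategy 2: separate backward calls, retaining the graph
output = model(x)
loss_a = criterion_a(output, target)
loss_b = criterion_b(output, target)
model.zero_grad()
loss_a.backward(retain_graph=True)  # keep the graph for the next backward
loss_b.backward()                   # gradients accumulate into .grad
grads_separate = [p.grad.clone() for p in model.parameters()]

# The accumulated gradients match the single-pass gradients
same = all(torch.allclose(g1, g2) for g1, g2 in zip(grads_summed, grads_separate))
print(same)  # True
```

Because the results are identical, summing the losses is usually preferable: it runs a single backward pass instead of several over the same graph.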
For more advanced setups, such as separate optimizers for different parts of one model, or for two different models, see https://discuss.pytorch.org/t/how-to-have-two-optimizers-such-that-one-optimizer-trains-the-whole-parameter-and-the-other-trains-partial-of-the-parameter/62966 and https://blog.csdn.net/weixin_44058333/article/details/99701876:

```python
# Different parts of the same model
optim1 = torch.optim.SGD(model.parameters(), lr=0.001)
optim2 = torch.optim.Adam(model.conv3.parameters(), lr=0.05)

# ================================================================
# Two different models
optimizer1 = torch.optim.SGD(net1.parameters(), lr=learning_rate,
                             momentum=momentum, weight_decay=weight_decay)
optimizer2 = torch.optim.SGD(net2.parameters(), lr=learning_rate,
                             momentum=momentum, weight_decay=weight_decay)
.....
loss1 = loss()
loss2 = loss()
optimizer1.zero_grad()             # set the gradients to zero
loss1.backward(retain_graph=True)  # retain graph for the second backward
optimizer1.step()
optimizer2.zero_grad()             # set the gradients to zero
loss2.backward()
optimizer2.step()
```



## Yanwei Liu

Machine Learning | Deep Learning | https://linktr.ee/yanwei