torchvision.models._utils.IntermediateLayerGetter tutorial
2022-06-27 09:51:00 【jjw_zyfx】
Without further ado, let's look at an example first:
import torch
import torchvision
m = torchvision.models.resnet18(pretrained=True)
print(m)
new_m = torchvision.models._utils.IntermediateLayerGetter(
    m, {'layer1': 'feat1', 'layer3': 'feat2'})
out = new_m(torch.rand(1, 3, 224, 224))
print([(k, v.shape) for k, v in out.items()])
# The output is:
# [('feat1', torch.Size([1, 64, 56, 56])), ('feat2', torch.Size([1, 256, 14, 14]))]
# (feat1 is 56x56 because layer1 sits at stride 4 of the 224x224 input; feat2 is 14x14 at stride 16.)
# Since print(m) is long, a short explanation first: the first argument of IntermediateLayerGetter
# is the model itself, here resnet18. The second argument is a dict. In 'layer1': 'feat1', the key
# 'layer1' must be the name of a direct child module of resnet18 (one of the top-level modules shown
# by print(m)), and 'feat1' is the name you choose for that intermediate output. Likewise
# 'layer3': 'feat2' means the new model runs resnet18 from the beginning up to and including layer3,
# and the output of layer3 is returned under the name 'feat2'.
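A quick way to confirm what IntermediateLayerGetter built (this check is an addition to the original post): it copies the model's direct child modules in order and stops right after the last requested layer, so layer4, avgpool and fc are no longer part of new_m, and calling it returns an OrderedDict keyed by the names you chose.

# Assumes m, new_m and out from the example above.
print([name for name, _ in new_m.named_children()])
# Expected (roughly): ['conv1', 'bn1', 'relu', 'maxpool', 'layer1', 'layer2', 'layer3']
print(type(out))         # an OrderedDict
print(list(out.keys()))  # ['feat1', 'feat2']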
The complete output of print(m) is:
ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicBlock(
      (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer2): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer3): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (layer4): Sequential(
    (0): BasicBlock(
      (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (downsample): Sequential(
        (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(2, 2), bias=False)
        (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (1): BasicBlock(
      (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      (relu): ReLU(inplace=True)
      (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    )
  )
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=512, out_features=1000, bias=True)
)
And the output of the second print call:
[('feat1', torch.Size([1, 64, 56, 56])), ('feat2', torch.Size([1, 256, 14, 14]))]
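For reference, below is a minimal sketch of how a layer getter of this kind can be implemented. It is a simplified illustration written for this note (the class name SimpleLayerGetter is made up), not the actual torchvision source, but it follows the same idea: keep the model's direct children up to and including the last requested layer, and during the forward pass collect the outputs of the requested layers under the names given in the dict.

import torch
import torch.nn as nn
from collections import OrderedDict


class SimpleLayerGetter(nn.ModuleDict):
    # Simplified illustration of the IntermediateLayerGetter idea (hypothetical helper).
    def __init__(self, model, return_layers):
        # Copy direct children in order and stop after the last requested layer.
        remaining = dict(return_layers)
        layers = OrderedDict()
        for name, module in model.named_children():
            layers[name] = module
            if name in remaining:
                del remaining[name]
            if not remaining:
                break
        super().__init__(layers)
        self.return_layers = dict(return_layers)

    def forward(self, x):
        # Run the kept children sequentially, recording the requested outputs.
        out = OrderedDict()
        for name, module in self.items():
            x = module(x)
            if name in self.return_layers:
                out[self.return_layers[name]] = x
        return out


# Usage mirrors the example above (assumes torchvision is installed):
# backbone = torchvision.models.resnet18(pretrained=True)
# getter = SimpleLayerGetter(backbone, {'layer1': 'feat1', 'layer3': 'feat2'})
# feats = getter(torch.rand(1, 3, 224, 224))

This is the same pattern torchvision's detection models use to pull multi-scale features out of a backbone before feeding them to an FPN.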