【pytorch】nn.AdaptiveMaxPool2d
2022-07-01 09:08:00 【Enzo tried to smash the computer】
output_size – the target output size of the image, of the form H x W. Can be a tuple (H, W) or a single H for a square output H x H. H and W can each be an int, or None, which means that dimension's size will be the same as that of the input.
return_indices – if True, returns the indices along with the outputs. Useful for passing to nn.MaxUnpool2d. Default: False
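A minimal sketch of what return_indices gives you (the tensor sizes here are illustrative): the layer returns both the pooled values and, for each of them, the flat index of the chosen maximum within its input plane.

```python
import torch
import torch.nn as nn

m = torch.nn.AdaptiveMaxPool2d((2, 2), return_indices=True)
x = torch.randn(1, 3, 8, 8)
out, idx = m(x)
print(out.shape)  # torch.Size([1, 3, 2, 2])
print(idx.shape)  # torch.Size([1, 3, 2, 2]); values index into each 8*8=64-element plane
```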
import torch

m = torch.nn.AdaptiveMaxPool2d((5, 7))
input = torch.randn(1, 64, 8, 9)
output = m(input)
print(output.shape)  # torch.Size([1, 64, 5, 7])
# target output size of 7x7 (square)
m = nn.AdaptiveAvgPool2d(7)
input = torch.randn(1, 64, 10, 9)
print(m(input).shape)  # torch.Size([1, 64, 7, 7])

# target output size of 10x7 (None keeps the input size in that dimension)
m = nn.AdaptiveAvgPool2d((None, 7))
input = torch.randn(1, 64, 10, 9)
output = m(input)
print(output.shape)  # torch.Size([1, 64, 10, 7])
import torch
import torch.nn as nn

input = torch.tensor([[[[ 2,  4,  8, 15],
                        [ 3,  6,  9, 19],
                        [ 7, 22,  5, 12],
                        [ 1, 66,  1, 77]]]], dtype=torch.float64)
m = nn.AdaptiveMaxPool2d((3, 3))
output = m(input)
print(input)
print(output)
# Note: adaptive pooling does not pad the input. For an output size of 3 over
# an input size of 4, window i covers [floor(i*4/3), ceil((i+1)*4/3)), i.e.
# overlapping 2-wide windows with stride 1 in each dimension.
# Output:
# tensor([[[[ 6.,  9., 19.],
#           [22., 22., 19.],
#           [66., 66., 77.]]]], dtype=torch.float64)
The advantages of global average pooling (GAP) are:

The connection between categories and feature maps is more direct (compared with the black box of a fully connected layer), and a feature map is easier to convert into a classification probability. Since GAP has no parameters to tune, it also avoids overfitting.

GAP summarizes spatial information, so it is more robust to spatial transformations of the input. For this reason, the trailing fully connected layers in convolutional networks are now mostly replaced by GAP.

Keras has dedicated layers for global pooling, e.g. GlobalMaxPooling2D for global max pooling. PyTorch has no such dedicated layer, but the same effect can be achieved with its adaptive pooling layers: AdaptiveMaxPool2d(1) or AdaptiveAvgPool2d(1).
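A minimal sketch of using AdaptiveAvgPool2d(1) as a GAP classifier head (the channel count and class count here are illustrative, not from the original post):

```python
import torch
import torch.nn as nn

# GAP head: each of the 64 feature maps is reduced to a single scalar,
# so the head works for any spatial size of the incoming feature maps.
head = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),   # [N, 64, H, W] -> [N, 64, 1, 1]
    nn.Flatten(),              # [N, 64, 1, 1] -> [N, 64]
    nn.Linear(64, 10),         # class scores
)

for hw in [(7, 7), (13, 9)]:        # different spatial sizes, same head
    feats = torch.randn(2, 64, *hw)
    print(head(feats).shape)        # torch.Size([2, 10])
```

This is exactly why GAP-based heads accept variable input resolutions, whereas a Flatten followed directly by Linear would fix the spatial size at training time.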