CNN - nn.Conv1d Use
1. Conv1d Parameter Settings
torch.nn.Conv1d(in_channels,    # number of channels in the input
                out_channels,   # number of channels produced by the convolution
                kernel_size,    # size of the convolution kernel
                stride,         # stride of the convolution. Default: 1
                padding,        # padding added to both sides of the input. Default: 0
                dilation,       # spacing between kernel elements. Default: 1
                groups,         # number of blocked connections from input to output channels. Default: 1
                bias,           # if True, adds a learnable bias to the output. Default: True
                padding_mode)   # 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
in_channels (int) – Number of channels in the input
out_channels (int) – Number of channels produced by the convolution
kernel_size (int or tuple) – Size of the convolution kernel
stride (int or tuple, optional) – Stride of the convolution. Default: 1
padding (int, tuple or str, optional) – Padding added to both sides of the input. Default: 0
dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1
groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True
padding_mode (str, optional) – 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
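As a quick illustration of these arguments (an added sketch, not from the original post; all concrete values are chosen only for demonstration), the snippet below builds a Conv1d with every parameter written out and checks the result against the output-length formula documented for Conv1d, L_out = floor((L_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1):

import torch
import torch.nn as nn

# Illustrative values only: 4 input channels, 8 output channels, stride 2, padding 1.
conv = nn.Conv1d(in_channels=4, out_channels=8, kernel_size=3,
                 stride=2, padding=1, dilation=1, groups=1,
                 bias=True, padding_mode='zeros')

x = torch.randn(2, 4, 10)                          # (batch, in_channels, length)
out = conv(x)
print(out.shape)                                   # torch.Size([2, 8, 5])

# Same length computed from the formula.
L_in, k, s, p, d = 10, 3, 2, 1, 1
print((L_in + 2 * p - d * (k - 1) - 1) // s + 1)   # 5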
2. Conv1d Input/Output and Convolution Kernel Dimensions
input – (minibatch, in_channels, iW): (batch size, number of input channels, input length)
output – (minibatch, out_channels, oW): (batch size, number of output channels, length after convolution)
Output length: oW = (n - k + 2p) / s + 1 (rounded down), where n is the input length, k the kernel size, p the padding, and s the stride.
Convolution kernel dimensions: (in_channels, kernel_size, out_channels); the out_channels dimension is the number of convolution kernels and determines how many feature channels are extracted. (PyTorch stores this as conv.weight with shape (out_channels, in_channels, kernel_size).)
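These shapes can be inspected directly on a Conv1d instance; a quick sketch, added here for illustration:

import torch.nn as nn

conv = nn.Conv1d(in_channels=8, out_channels=2, kernel_size=3)
print(conv.weight.shape)   # torch.Size([2, 8, 3]) -> (out_channels, in_channels, kernel_size)
print(conv.bias.shape)     # torch.Size([2])       -> one learnable bias per output channel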
3. Conv1d Computation Process
1. Test one: in_channels=1, out_channels=1
The convolution is defined as follows: input channels: 1, output channels: 1, convolution kernel: 1 * 3 * 1, stride: 1, padding: 0
Input: batch size: 1, number of channels: 1, data length: 5
import torch
import torch.nn as nn
input = torch.randn(1, 1, 5)   # (batch, channels, length)
conv = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=3, stride=1, padding=0)
out = conv(input)              # shape: (1, 1, 3)
The first output value is the dot product of the kernel with the first three input values, plus the bias; the window then slides along the sequence by the stride and the computation repeats.
Output: batch size: 1, number of channels: 1, data length: 3 (5 - 3 + 1)
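To make the sliding-window arithmetic concrete, here is a small sketch (added for illustration) that reproduces the first output element by hand from conv.weight and conv.bias:

import torch
import torch.nn as nn

input = torch.randn(1, 1, 5)
conv = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=3, stride=1, padding=0)
out = conv(input)

# First output element = dot product of the kernel with the first window, plus the bias.
w, b = conv.weight, conv.bias                  # shapes: (1, 1, 3) and (1,)
first = (input[0, 0, 0:3] * w[0, 0]).sum() + b[0]
print(torch.allclose(out[0, 0, 0], first))     # True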
2. Test two: in_channels=1, out_channels=2
The convolution is defined as follows: input channels: 1, output channels: 2, convolution kernel: 1 * 3 * 2 (two 1 * 3 kernels, extracting two feature channels), stride: 1, padding: 0
Input: batch size: 1, number of channels: 1, data length: 5
import torch
import torch.nn as nn
input = torch.randn(1, 1, 5)   # (batch, channels, length)
conv = nn.Conv1d(in_channels=1, out_channels=2, kernel_size=3, stride=1, padding=0)
out = conv(input)              # shape: (1, 2, 3)
Output: batch size: 1, number of channels: 2, data length: 3
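As an illustrative check (added, not from the original post), each of the two output channels is produced by its own 1 * 3 kernel applied to the same input:

import torch
import torch.nn as nn

input = torch.randn(1, 1, 5)
conv = nn.Conv1d(in_channels=1, out_channels=2, kernel_size=3, stride=1, padding=0)
out = conv(input)

print(conv.weight.shape)    # torch.Size([2, 1, 3]) -> two independent 1 * 3 kernels
# Output channel 0 is computed with kernel 0, channel 1 with kernel 1.
row0 = (input[0, 0, 0:3] * conv.weight[0, 0]).sum() + conv.bias[0]
row1 = (input[0, 0, 0:3] * conv.weight[1, 0]).sum() + conv.bias[1]
print(torch.allclose(out[0, 0, 0], row0), torch.allclose(out[0, 1, 0], row1))   # True True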

3. Test three: in_channels=8, out_channels=1
The convolution is defined as follows: input channels: 8, output channels: 1, convolution kernel: 8 * 3 * 1, stride: 1, padding: 0
Input: batch size: 1, number of channels: 8, data length: 7

import torch
import torch.nn as nn
input = torch.randn(1, 8, 7)   # (batch, channels, length)
conv = nn.Conv1d(in_channels=8, out_channels=1, kernel_size=3, stride=1, padding=0)
out = conv(input)              # shape: (1, 1, 5)

Output: batch size: 1, number of channels: 1, data length: 5
4. Test four: in_channels=8, out_channels=2
The convolution is defined as follows: input channels: 8, output channels: 2, convolution kernel: 8 * 3 * 2, stride: 1, padding: 0
Input: batch size: 1, number of channels: 8, data length: 7
import torch
import torch.nn as nn
input = torch.randn(1, 8, 7)   # (batch, channels, length)
conv = nn.Conv1d(in_channels=8, out_channels=2, kernel_size=3, stride=1, padding=0)
out = conv(input)              # shape: (1, 2, 5)

Output: batch size: 1, number of channels: 2, data length: 5
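The multi-channel case is where the kernel dimensions matter most: each output channel owns an 8 * 3 kernel, and its output sums the contributions from all 8 input channels. A small sketch, added for illustration:

import torch
import torch.nn as nn

input = torch.randn(1, 8, 7)
conv = nn.Conv1d(in_channels=8, out_channels=2, kernel_size=3, stride=1, padding=0)
out = conv(input)

print(conv.weight.shape)    # torch.Size([2, 8, 3]) -> one 8 * 3 kernel per output channel
# First element of output channel 0: multiply the 8 * 3 kernel with the first
# 8 * 3 window of the input, sum everything, and add the bias.
first = (input[0, :, 0:3] * conv.weight[0]).sum() + conv.bias[0]
print(torch.allclose(out[0, 0, 0], first))   # True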
4. Conv1d Application to Text: TextCNN
Paper: Convolutional Neural Networks for Sentence Classification
The model architecture is shown in the figure below.
Suppose we want to classify a sentence. Each word in the sentence is represented by an n-dimensional word vector, so the input matrix has size m * n, where m is the sentence length. In PyTorch, Conv1d convolves from left to right along the last dimension, so the input dimensions must be swapped to (n, m), i.e. (word-vector dimension, sentence length).
As shown in the figure above, the input dimension is (5, 7):
import torch
import torch.nn as nn
input = torch.randn(1, 5, 7)   # (batch, word-vector dimension, sentence length)
The CNN convolves the input sample; for text data this works somewhat like an N-gram, extracting local correlations between adjacent words. The figure uses three kernel sizes (2, 3 and 4), with two filters per kernel size (in practice there are many more filters). Sliding the different filters over the word windows yields 6 convolved vectors. A full sketch of this pipeline is given after the pooling step below.
Taking kernel_size = 4 as an example, the convolution kernel dimension is (5, 4):
import torch
import torch.nn as nn
input = torch.randn(1, 5, 7)   # (batch, word-vector dimension, sentence length)
conv = nn.Conv1d(in_channels=5, out_channels=1, kernel_size=4, stride=1, padding=0)
out = conv(input)              # shape: (1, 1, 4)
Output: batch size: 1, number of channels: 1, data length: 4 (7 - 4 + 1)
Each convolved vector is then max-pooled, the pooled values are concatenated to obtain the feature representation of the sentence, and this sentence vector is fed to a classifier. That completes the whole pipeline.
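Putting the pieces together, here is a minimal sketch of that pipeline (an illustrative addition, not code from the paper or the original post), assuming kernel sizes 2, 3 and 4 with two filters each, 5-dimensional word vectors, sentence length 7, and 2 classes:

import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumed toy sizes: 5-dimensional word vectors, sentence length 7, 2 classes.
embed_dim, sent_len, num_classes = 5, 7, 2

x = torch.randn(1, embed_dim, sent_len)          # (batch, word-vector dim, sentence length)

# Three kernel sizes, two filters each, as in the figure.
convs = nn.ModuleList([
    nn.Conv1d(in_channels=embed_dim, out_channels=2, kernel_size=k)
    for k in (2, 3, 4)
])
classifier = nn.Linear(3 * 2, num_classes)       # 6 pooled values -> class scores

features = []
for conv in convs:
    c = F.relu(conv(x))                          # (1, 2, sent_len - k + 1)
    p = F.max_pool1d(c, kernel_size=c.shape[-1]) # max over time -> (1, 2, 1)
    features.append(p.squeeze(-1))               # (1, 2)

sentence_vec = torch.cat(features, dim=1)        # (1, 6) sentence representation
logits = classifier(sentence_vec)                # (1, 2)
print(sentence_vec.shape, logits.shape)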