PyTorch Notes - Principles of the Convolution Operation (1)

Published: 2023-01-24

Convolution:

  • Elements: input size, kernel size, stride, padding, groups, dilation

  • Computation methods: F.conv2d, unrolling the input + matrix multiplication, unrolling the kernel + matrix multiplication (a sketch of the input-unrolling view follows this list)

  • Uses: local modeling, downsampling

  • Convolution vs. cross-correlation
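
As a preview of the "unroll the input + matrix multiplication" view, here is a minimal sketch (shapes and variable names such as in_ch, out_ch, k are illustrative, not from the original post) that reproduces F.conv2d by unfolding the input into columns with F.unfold and applying a single matrix multiplication:

import torch
import torch.nn.functional as F

in_ch, out_ch, k = 1, 2, 3
x = torch.randn(1, in_ch, 4, 4)            # N x C x H x W
weight = torch.randn(out_ch, in_ch, k, k)  # standard Conv2d kernel layout

# Unfold: each column holds one 3x3 receptive field -> (N, C*k*k, L), L = 2*2 sliding positions
cols = F.unfold(x, kernel_size=k)
# Flatten the kernel to (out_ch, C*k*k), multiply, then reshape back into a feature map
out = weight.view(out_ch, -1) @ cols       # (N, out_ch, L)
out = out.view(1, out_ch, 2, 2)

print(torch.allclose(out, F.conv2d(x, weight), atol=1e-6))  # True (up to float error)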

Transpose Convolution:

  • Computation methods: F.conv_transpose2d, transposed kernel + matrix multiplication (see the sketch below)
  • Use: upsampling
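
A minimal sketch of upsampling with F.conv_transpose2d (the sizes below are illustrative); with stride=2 the output grows according to O = (I - 1) * S - 2P + K:

import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 2, 2)       # small input feature map
# conv_transpose2d expects the weight layout (in_channels, out_channels, kH, kW)
weight = torch.randn(1, 1, 3, 3)

y = F.conv_transpose2d(x, weight, stride=2)  # O = (2 - 1) * 2 - 0 + 3 = 5
print(y.shape)                               # torch.Size([1, 1, 5, 5])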

PyTorch Conv2D:

Conv2D parameters:

  • in_channels (input channels), out_channels (output channels), kernel_size (convolution kernel)
  • stride=1 (stride), padding=0 (padding), dilation=1 (dilation); a short sketch of how these affect the output size follows
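
The following sketch (channel counts and input size chosen arbitrarily) prints the output shape for a few parameter settings; the general size formula is O = floor((I + 2P - D*(K - 1) - 1) / S) + 1:

import torch
import torch.nn as nn

x = torch.randn(1, 3, 8, 8)  # N x C x H x W

for stride, padding, dilation in [(1, 0, 1), (2, 1, 1), (1, 2, 2)]:
    conv = nn.Conv2d(in_channels=3, out_channels=4, kernel_size=3,
                     stride=stride, padding=padding, dilation=dilation)
    print(stride, padding, dilation, conv(x).shape)
# (1, 0, 1) -> 6x6, (2, 1, 1) -> 4x4, (1, 2, 2) -> 8x8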

Convolution is closely related to cross-correlation in signal processing: for an image, the kernel slides over successive regions of the image, multiplying element-wise and summing at each position.

Input feature map: 4x4

Kernel (convolution kernel): 3x3

stride=1, padding=0
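
With these settings the output size follows from the formula O = (I - K + 2P) / S + 1 = (4 - 3 + 0) / 1 + 1 = 2, so the output feature map is 2x2.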


The kernel moves in a zigzag (raster-scan) pattern: left to right, then top to bottom.

The bias is a scalar that is added to the result after the convolution.

2D convolution source code:

import torch
import torch.nn as nn
import torch.nn.functional as F

in_channels = 1
out_channels = 1
kernel_size = 3  # can be a scalar or a tuple
batch_size = 1
bias = False
input_size = (batch_size, in_channels, 4, 4)  # the convolution input is 4-dimensional and needs a batch_size

# Initialize the convolution layer
conv_layer = torch.nn.Conv2d(in_channels, out_channels, kernel_size, bias=bias)
# Convolution input
input_feature_map = torch.randn(input_size)
# Convolution operation
output_feature_map = conv_layer(input_feature_map)
print(f'[Info] input_feature_map: \n{input_feature_map}')  # 1x1x4x4
print(f'[Info] conv_layer.weight(kernel): \n{conv_layer.weight}')  # 1x1x3x3
print(f'[Info] output_feature_map: \n{output_feature_map}')  # 1x1x2x2
# Output size formula: O = (I - K + 2P) / S + 1

output_feature_map_ = F.conv2d(input_feature_map, conv_layer.weight)
# The difference between F.conv2d and conv_layer is that the kernel is passed in explicitly
print(f'[Info] output_feature_map_: \n{output_feature_map_}')  # 1x1x2x2
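
To make the sliding-window picture concrete, here is a minimal sketch (the helper conv2d_naive and its names are illustrative, not part of PyTorch) that re-implements the same convolution with explicit loops, scanning left to right and top to bottom, and checks it against F.conv2d:

import torch
import torch.nn.functional as F

def conv2d_naive(x, kernel, bias=0.0):
    # Plain sliding-window cross-correlation for a single-output-channel 4D input
    n, c, h, w = x.shape
    kh, kw = kernel.shape[-2:]
    out_h, out_w = h - kh + 1, w - kw + 1           # O = (I - K + 2*0) / 1 + 1
    out = torch.zeros(n, 1, out_h, out_w)
    for i in range(out_h):                          # top to bottom
        for j in range(out_w):                      # left to right
            region = x[:, :, i:i + kh, j:j + kw]    # current receptive field
            out[:, 0, i, j] = (region * kernel).sum(dim=(1, 2, 3)) + bias
    return out

x = torch.randn(1, 1, 4, 4)
kernel = torch.randn(1, 1, 3, 3)
print(torch.allclose(conv2d_naive(x, kernel), F.conv2d(x, kernel), atol=1e-6))  # True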
