Enhancing the YOLOv11 Backbone with Multi-Head Self-Attention (MHSA): Architectural Innovation and Performance Optimization for High-Accuracy Object Detection

Published: 2025-05-18

The rapid progress of deep learning in computer vision has driven continuous advances in object detection. As a representative real-time detection framework, the YOLO family is widely used for its efficiency and accuracy. This article presents a YOLOv11 backbone enhanced with Multi-Head Self-Attention (MHSA), aiming to improve target feature representation and global perception in complex scenes. By inserting an MHSA module at a key stage of the backbone, the model can capture long-range dependencies and fuse semantic information more effectively.

Object detection is a core computer-vision task with broad applications in intelligent surveillance, autonomous driving, and image retrieval. Thanks to their end-to-end design and efficient inference, YOLO models are a research focus in both industry and academia. YOLOv11, the latest member of the series, further improves overall performance through an optimized detection head and feature-extraction pipeline. However, in scenes with occlusion, scale variation, or dense targets, the limitations of conventional convolutional networks, namely local receptive fields and fixed weight allocation, become increasingly apparent. In recent years attention mechanisms have been widely adopted in object detection; among them, Multi-Head Self-Attention (MHSA) excels at capturing long-range dependencies and has performed strongly in image classification and segmentation. Motivated by this, this article integrates an MHSA module into a key stage of the YOLOv11 backbone to build a backbone with stronger semantic representation and to further improve performance on high-accuracy detection tasks.

1. Multi-Head Self-Attention (MHSA)

Paper: https://arxiv.org/pdf/1706.03762
Official code: https://github.com/tensorflow/tensor2tensor
(Figure: the Transformer model architecture)
Multi-Head Self-Attention is the core component of the Transformer. By running several attention heads in parallel, the model can simultaneously attend to information from different positions and from different representation subspaces. The mechanism extends self-attention (also called intra-attention): when encoding each position of a sequence, it computes a weighted combination that explicitly accounts for that position's relevance to every other position.

(Figure: Multi-Head Attention)

The MHSA module used here follows the Vision Transformer (ViT); its core idea is to model the input feature map globally with multi-head attention:
$$
\text{Attention}(Q, K, V) = \text{softmax}\!\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V
$$

where $Q$, $K$, and $V$ denote the query, key, and value matrices, and $d_k$ is the key dimension whose square root scales the dot products. The input features are projected into multiple heads, attention weights are computed in parallel for each head, and the head outputs are concatenated, which strengthens the model's representational capacity.
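For reference, the computation above can be written out directly; the following is a minimal PyTorch sketch of multi-head self-attention on a flattened feature map (function and variable names are illustrative and not part of the YOLOv11 code):

import torch.nn.functional as F

def multi_head_self_attention(x, w_q, w_k, w_v, heads):
    # x: (B, N, C) flattened feature map; w_q / w_k / w_v: (C, C) projection matrices
    B, N, C = x.shape
    d_k = C // heads
    # project, then split the channel dimension into `heads` subspaces
    q = (x @ w_q).view(B, N, heads, d_k).transpose(1, 2)   # (B, heads, N, d_k)
    k = (x @ w_k).view(B, N, heads, d_k).transpose(1, 2)
    v = (x @ w_v).view(B, N, heads, d_k).transpose(1, 2)
    # attention weights: softmax(Q K^T / sqrt(d_k)), computed per head in parallel
    attn = F.softmax(q @ k.transpose(-2, -1) / d_k ** 0.5, dim=-1)   # (B, heads, N, N)
    out = attn @ v                                                    # (B, heads, N, d_k)
    # concatenate the heads back into the channel dimension
    return out.transpose(1, 2).reshape(B, N, C)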

2. Integrating MHSA into YOLOv11

1. Paste the following code into /ultralytics/ultralytics/nn/modules/block.py:

import torch
import torch.nn as nn


class MHSA(nn.Module):
    """Multi-head self-attention over a 2-D feature map, with optional
    learnable 2-D relative position embeddings."""

    def __init__(self, n_dims, width=14, height=14, heads=4, pos_emb=False):
        super(MHSA, self).__init__()

        self.heads = heads
        # 1x1 convolutions produce the query, key and value projections
        self.query = nn.Conv2d(n_dims, n_dims, kernel_size=1)
        self.key = nn.Conv2d(n_dims, n_dims, kernel_size=1)
        self.value = nn.Conv2d(n_dims, n_dims, kernel_size=1)
        self.pos = pos_emb
        if self.pos:
            # relative position embeddings along height and width
            # (require the input feature map to be exactly height x width)
            self.rel_h_weight = nn.Parameter(torch.randn([1, heads, n_dims // heads, 1, int(height)]),
                                             requires_grad=True)
            self.rel_w_weight = nn.Parameter(torch.randn([1, heads, n_dims // heads, int(width), 1]),
                                             requires_grad=True)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, x):
        # x: (batch, C, H, W)
        n_batch, C, height, width = x.size()
        # project, split channels into `heads` subspaces and flatten the
        # spatial dimensions: (B, heads, C//heads, H*W)
        q = self.query(x).view(n_batch, self.heads, C // self.heads, -1)
        k = self.key(x).view(n_batch, self.heads, C // self.heads, -1)
        v = self.value(x).view(n_batch, self.heads, C // self.heads, -1)
        # content-content attention logits: (B, heads, H*W, H*W)
        content_content = torch.matmul(q.permute(0, 1, 3, 2), k)
        c1, c2, c3, c4 = content_content.size()
        if self.pos:
            # content-position term from the relative position embeddings
            content_position = (self.rel_h_weight + self.rel_w_weight).view(1, self.heads, C // self.heads, -1).permute(
                0, 1, 3, 2)  # (1, heads, H*W, C//heads)
            content_position = torch.matmul(content_position, q)  # (B, heads, H*W, H*W)
            content_position = content_position if (
                    content_content.shape == content_position.shape) else content_position[:, :, :c3, ]
            assert (content_content.shape == content_position.shape)
            energy = content_content + content_position
        else:
            energy = content_content
        attention = self.softmax(energy)
        # weighted sum of the values: (B, heads, C//heads, H*W)
        out = torch.matmul(v, attention.permute(0, 1, 3, 2))
        out = out.view(n_batch, C, height, width)
        return out
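Before wiring the module into a model config, a quick forward pass can confirm that the block preserves the input shape (the tensor size below is illustrative):

x = torch.randn(1, 256, 20, 20)               # e.g. a P5-level feature map for a 640x640 input
m = MHSA(256, width=20, height=20, heads=4)   # width/height only matter when pos_emb=True
print(m(x).shape)                             # torch.Size([1, 256, 20, 20])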

2. Edit the __init__.py file in the modules folder: first import MHSA, then declare it in the __all__ list so it can be referenced from model configs. A sketch of both edits is shown below.
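The import list in this file varies between Ultralytics versions, so the following is only a sketch of the two edits to ultralytics/nn/modules/__init__.py:

from .block import MHSA   # add MHSA alongside the existing block imports

__all__ = (
    # ... the names already exported here ...
    "MHSA",   # declare MHSA so it can be resolved from model YAML files
)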
3. Create a new file yolo11_MHSA.yaml under /ultralytics/ultralytics/cfg/models/11. Configurations are given below for object detection, instance segmentation, and oriented (OBB) detection.

  • Object detection
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 object detection model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/detect

# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs

# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10
  - [-1, 1, MHSA, [14, 14, 4]] # 11

# YOLO11n head
head:
 - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
 - [[-1, 6], 1, Concat, [1]] # cat backbone P4
 - [-1, 2, C3k2, [512, False]] # 14

 - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
 - [[-1, 4], 1, Concat, [1]] # cat backbone P3
 - [-1, 2, C3k2, [256, False]] # 17 (P3/8-small)

 - [-1, 1, Conv, [256, 3, 2]]
 - [[-1, 14], 1, Concat, [1]] # cat head P4
 - [-1, 2, C3k2, [512, False]] # 20 (P4/16-medium)

 - [-1, 1, Conv, [512, 3, 2]]
 - [[-1, 11], 1, Concat, [1]] # cat head P5
 - [-1, 2, C3k2, [1024, True]] # 23 (P5/32-large)

 - [[17, 20, 23], 1, Detect, [nc]] # Detect(P3, P4, P5)
  • Instance segmentation
# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 instance segmentation model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/segment
 
# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs
 
# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
 - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
 - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
 - [-1, 2, C3k2, [256, False, 0.25]]
 - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
 - [-1, 2, C3k2, [512, False, 0.25]]
 - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
 - [-1, 2, C3k2, [512, True]]
 - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
 - [-1, 2, C3k2, [1024, True]]
 - [-1, 1, SPPF, [1024, 5]] # 9
 - [-1, 2, C2PSA, [1024]] # 10
 - [-1, 1, MHSA, [14, 14, 4]] # 11
 
# YOLO11n head
head:
 - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
 - [[-1, 6], 1, Concat, [1]] # cat backbone P4
 - [-1, 2, C3k2, [512, False]] # 14
 
 - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
 - [[-1, 4], 1, Concat, [1]] # cat backbone P3
 - [-1, 2, C3k2, [256, False]] # 17 (P3/8-small)
 
 - [-1, 1, Conv, [256, 3, 2]]
 - [[-1, 14], 1, Concat, [1]] # cat head P4
 - [-1, 2, C3k2, [512, False]] # 20 (P4/16-medium)
 
 - [-1, 1, Conv, [512, 3, 2]]
 - [[-1, 11], 1, Concat, [1]] # cat head P5
 - [-1, 2, C3k2, [1024, True]] # 23 (P5/32-large)
 
 - [[17, 20, 23], 1, Segment, [nc, 32, 256]] # Segment(P3, P4, P5)
  • Oriented (rotated) object detection

# Ultralytics YOLO 🚀, AGPL-3.0 license
# YOLO11 Oriented Bounding Boxes (OBB) model with P3-P5 outputs. For Usage examples see https://docs.ultralytics.com/tasks/obb
 
# Parameters
nc: 80 # number of classes
scales: # model compound scaling constants, i.e. 'model=yolo11n.yaml' will call yolo11.yaml with scale 'n'
  # [depth, width, max_channels]
  n: [0.50, 0.25, 1024] # summary: 319 layers, 2624080 parameters, 2624064 gradients, 6.6 GFLOPs
  s: [0.50, 0.50, 1024] # summary: 319 layers, 9458752 parameters, 9458736 gradients, 21.7 GFLOPs
  m: [0.50, 1.00, 512] # summary: 409 layers, 20114688 parameters, 20114672 gradients, 68.5 GFLOPs
  l: [1.00, 1.00, 512] # summary: 631 layers, 25372160 parameters, 25372144 gradients, 87.6 GFLOPs
  x: [1.00, 1.50, 512] # summary: 631 layers, 56966176 parameters, 56966160 gradients, 196.0 GFLOPs
 
# YOLO11n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 2, C3k2, [256, False, 0.25]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 2, C3k2, [512, False, 0.25]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 2, C3k2, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 2, C3k2, [1024, True]]
  - [-1, 1, SPPF, [1024, 5]] # 9
  - [-1, 2, C2PSA, [1024]] # 10
  - [-1, 1, MHSA, [14, 14, 4]] # 11
 
# YOLO11n head
head:
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 6], 1, Concat, [1]] # cat backbone P4
  - [-1, 2, C3k2, [512, False]] # 14
 
  - [-1, 1, nn.Upsample, [None, 2, "nearest"]]
  - [[-1, 4], 1, Concat, [1]] # cat backbone P3
  - [-1, 2, C3k2, [256, False]] # 17 (P3/8-small)
 
  - [-1, 1, Conv, [256, 3, 2]]
  - [[-1, 14], 1, Concat, [1]] # cat head P4
  - [-1, 2, C3k2, [512, False]] # 20 (P4/16-medium)
 
  - [-1, 1, Conv, [512, 3, 2]]
  - [[-1, 11], 1, Concat, [1]] # cat head P5
  - [-1, 2, C3k2, [1024, True]] # 23 (P5/32-large)
 
  - [[17, 20, 23], 1, OBB, [nc, 1]] # OBB(P3, P4, P5)
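In all three configurations the new layer is "- [-1, 1, MHSA, [14, 14, 4]]" at backbone index 11, placed right after C2PSA. The three numbers map to the module's width, height, and heads parameters; the input channel count (n_dims) is prepended automatically when the YAML is parsed (step 4), which is why the n-scale summary later prints MHSA with arguments [256, 14, 14, 4]. Since pos_emb defaults to False, the width/height values are only used when relative position embeddings are enabled, and in that case they must match the actual feature-map size at that stage.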

This article only adds a module on top of the base YOLOv11 configuration. To build the yolo11n/s/m/l/x variants, simply use the corresponding depth_multiple, width_multiple, and max_channels values:

# YOLOv11n
depth_multiple: 0.50  # model depth multiple
width_multiple: 0.25  # layer channel multiple
max_channels: 1024

# YOLOv11s
depth_multiple: 0.50  # model depth multiple
width_multiple: 0.50  # layer channel multiple
max_channels: 1024

# YOLOv11m
depth_multiple: 0.50  # model depth multiple
width_multiple: 1.00  # layer channel multiple
max_channels: 512

# YOLOv11l
depth_multiple: 1.00  # model depth multiple
width_multiple: 1.00  # layer channel multiple
max_channels: 512

# YOLOv11x
depth_multiple: 1.00  # model depth multiple
width_multiple: 1.50  # layer channel multiple
max_channels: 512
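Because the scales dictionary is already part of yolo11_MHSA.yaml, a specific scale can also be selected at load time through the usual filename convention mentioned in the config header (the scale letter right after the version number). This is a sketch that assumes the standard Ultralytics scale parsing and may differ between versions:

from ultralytics import YOLO

# the 's' in the filename selects the s entry from the scales dict in yolo11_MHSA.yaml
model = YOLO("ultralytics/cfg/models/11/yolo11s_MHSA.yaml")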

4. Register MHSA in tasks.py, inside the parse_model function

First add MHSA to the imports at the top of tasks.py, then locate the parse_model function and register MHSA there so that the input channel count is filled in when the YAML is parsed. A minimal sketch of both edits follows.
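The exact layout of parse_model changes between Ultralytics releases, so the snippet below is only a sketch of the two edits (all names other than MHSA are the standard Ultralytics ones):

# ultralytics/nn/tasks.py
from ultralytics.nn.modules import MHSA   # 1) add MHSA to the existing modules import

# 2) inside parse_model's per-layer loop, add a branch next to the other
#    module-specific cases so that the incoming channel count is prepended
#    to the YAML arguments, i.e. [14, 14, 4] becomes [c1, 14, 14, 4]:
#
#        elif m is MHSA:
#            args = [ch[f], *args]
#            c2 = ch[f]   # MHSA keeps the channel count unchanged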

5. Run training with train.py

from ultralytics import YOLO
import warnings

warnings.filterwarnings('ignore')
from pathlib import Path

if __name__ == '__main__':
    # Load the model from the YAML config created in step 3
    model = YOLO("ultralytics/cfg/models/11/yolo11_MHSA.yaml")
    # Train the model
    results = model.train(data=r"path/to/your_dataset.yaml",
                          epochs=100, batch=16, imgsz=640, workers=4, name=Path(model.cfg).stem)

Parsing the config prints the following model summary, confirming that MHSA is built as layer 11 with arguments [256, 14, 14, 4]:
                   from  n    params  module                                       arguments
  0                  -1  1       464  ultralytics.nn.modules.conv.Conv             [3, 16, 3, 2]
  1                  -1  1      4672  ultralytics.nn.modules.conv.Conv             [16, 32, 3, 2]
  2                  -1  1      6640  ultralytics.nn.modules.block.C3k2            [32, 64, 1, False, 0.25]      
  3                  -1  1     36992  ultralytics.nn.modules.conv.Conv             [64, 64, 3, 2]
  4                  -1  1     26080  ultralytics.nn.modules.block.C3k2            [64, 128, 1, False, 0.25]     
  5                  -1  1    147712  ultralytics.nn.modules.conv.Conv             [128, 128, 3, 2]
  6                  -1  1     87040  ultralytics.nn.modules.block.C3k2            [128, 128, 1, True]
  7                  -1  1    295424  ultralytics.nn.modules.conv.Conv             [128, 256, 3, 2]
  8                  -1  1    346112  ultralytics.nn.modules.block.C3k2            [256, 256, 1, True]
  9                  -1  1    164608  ultralytics.nn.modules.block.SPPF            [256, 256, 5]
 10                  -1  1    249728  ultralytics.nn.modules.block.C2PSA           [256, 256, 1]
 11                  -1  1    197376  ultralytics.nn.modules.block.MHSA            [256, 14, 14, 4]
 12                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 13             [-1, 6]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 14                  -1  1    111296  ultralytics.nn.modules.block.C3k2            [384, 128, 1, False]
 15                  -1  1         0  torch.nn.modules.upsampling.Upsample         [None, 2, 'nearest']
 16             [-1, 4]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 17                  -1  1     32096  ultralytics.nn.modules.block.C3k2            [256, 64, 1, False]
 18                  -1  1     36992  ultralytics.nn.modules.conv.Conv             [64, 64, 3, 2]
 19            [-1, 14]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 20                  -1  1     86720  ultralytics.nn.modules.block.C3k2            [192, 128, 1, False]
 21                  -1  1    147712  ultralytics.nn.modules.conv.Conv             [128, 128, 3, 2]
 22            [-1, 11]  1         0  ultralytics.nn.modules.conv.Concat           [1]
 23                  -1  1    378880  ultralytics.nn.modules.block.C3k2            [384, 256, 1, True]
 24        [17, 20, 23]  1    464912  ultralytics.nn.modules.head.Detect           [80, [64, 128, 256]]
YOLO11_MHSA summary: 324 layers, 2,821,456 parameters, 2,821,440 gradients, 6.8 GFLOPs
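After training, the best weights are saved by Ultralytics under runs/detect/<run name>/weights/best.pt by default; the snippet below is an illustrative way to validate and run inference with them (dataset and image paths are placeholders):

from ultralytics import YOLO

model = YOLO("runs/detect/yolo11_MHSA/weights/best.pt")      # adjust to your actual run folder
metrics = model.val(data=r"path/to/your_dataset.yaml")       # reports mAP50-95, mAP50, precision, recall
results = model.predict("path/to/test_image.jpg", imgsz=640, save=True)  # saves annotated predictions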


3. Network architecture diagram

(Figure: overall network structure of the MHSA-enhanced YOLOv11)

4. Conclusion and outlook

This article presented a YOLOv11 backbone enhanced with Multi-Head Self-Attention (MHSA), which strengthens the model's global modeling capability and detection accuracy and shows stronger robustness in complex scenes.
Future work will explore the following directions:

  • Dynamic MHSA that adaptively activates the attention module depending on the input;
  • Cross-attention in the detection head for finer-grained feature interaction;
  • Lightweight MHSA designs to further reduce the parameter count.
