Mu Li's Dive into Deep Learning (PyCharm run-through notes): 12. Weight Decay


12. Weight Decay (follows the course lecture)


Contents

I. Weight Decay

1. Using the squared L2 norm as a hard constraint

2. Using the squared L2 norm as a soft constraint (the usual approach)

3. Illustrating the effect on the optimal solution

4. Parameter update rule

5. Summary

II. Code: implementation from scratch

III. Code: concise implementation


I. Weight Decay

1. Using the squared L2 norm as a hard constraint

        (1) Control model capacity by restricting the range of values the parameters can take

                        \min L(w, b) \quad \text{subject to} \quad ||w||^{2} \leqslant \theta

                The bias b is usually left unconstrained (constraining it makes little difference)

                A smaller \theta means a stronger regularizer

2. Using the squared L2 norm as a soft constraint (the usual approach)

        (1) For every \theta, one can find a \lambda such that the objective above is equivalent to

                        \min L(w, b) + \frac{\lambda}{2} ||w||^{2}

                This can be shown with Lagrange multipliers (a sketch follows this list)

        (2) The hyperparameter \lambda controls how strong the regularization term is

                \lambda = 0: no effect

                \lambda \rightarrow \infty: w^{*} \rightarrow 0
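        A hedged sketch of the Lagrange-multiplier argument (not spelled out in the original notes): the constrained problem has the Lagrangian

                        \mathcal{L}(w, b, \lambda) = L(w, b) + \frac{\lambda}{2}(||w||^{2} - \theta), \quad \lambda \geqslant 0

        and since the -\frac{\lambda}{2}\theta term does not depend on (w, b), minimizing \mathcal{L} over (w, b) is the same as minimizing L(w, b) + \frac{\lambda}{2}||w||^{2}.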

3. Illustrating the effect on the optimal solution

        The course shows this with a contour plot: the penalty pulls the minimizer of the combined objective away from the unregularized optimum and toward the origin, with the balance point set by \lambda. A toy numeric sketch follows.
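A minimal numeric sketch (mine, not from the original notes), using a toy quadratic loss L(w) = ||w - w_opt||^{2}, whose penalized minimizer has the closed form w^{*} = w_opt / (1 + \lambda / 2):

import torch

# Toy illustration: as lambd grows, the minimizer of
# ||w - w_opt||^2 + lambd/2 * ||w||^2 moves from w_opt toward the origin.
w_opt = torch.tensor([2.0, 2.0])  # hypothetical unregularized optimum
for lambd in [0.0, 1.0, 4.0]:
    w_star = w_opt / (1 + lambd / 2)  # closed-form minimizer of the penalized objective
    print(f'lambda={lambd}: w* = {w_star.tolist()}')  # shrinks toward the origin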

4. Parameter update rule

(1) Compute the gradient

        \frac{\partial}{\partial w}\left(L(w, b) + \frac{\lambda}{2}||w||^{2}\right) = \frac{\partial L(w, b)}{\partial w} + \lambda w

(2) Update the parameters at time step t

        w_{t+1} = (1 - \eta \lambda) w_{t} - \eta \frac{\partial L(w_{t}, b_{t})}{\partial w_{t}}

        Usually \eta \lambda < 1. Because the weight is first shrunk by the factor (1 - \eta \lambda) before the ordinary gradient step, this is what deep learning calls weight decay.
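A quick numeric check of this equivalence (my sketch, using a hypothetical loss L(w) = ||w - 3||^{2} and plain SGD):

import torch

lambd, lr = 0.1, 0.01
w = torch.tensor([1.0, -2.0], requires_grad=True)
loss = ((w - 3.0) ** 2).sum()          # hypothetical loss L(w), gradient 2 * (w - 3)
penalty = lambd / 2 * (w ** 2).sum()   # L2 regularization term, gradient lambd * w
(loss + penalty).backward()
with torch.no_grad():
    updated = w - lr * w.grad                            # one SGD step on the penalized loss
    decayed = (1 - lr * lambd) * w - lr * 2 * (w - 3.0)  # shrink first, then plain gradient step
    print(torch.allclose(updated, decayed))              # True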

5. Summary

(1) Weight decay uses an L2 regularization term to keep the model parameters from growing too large, which controls model complexity

(2) The regularization weight \lambda is a hyperparameter that controls model complexity

II. Code: implementation from scratch

1. Generate the dataset: the smaller the training set, the easier it is to overfit; the feature dimension is 200

n_train, n_test, num_inputs, batch_size = 20, 100, 200, 5
true_w, true_b = torch.ones((num_inputs, 1)) * 0.01, 0.05
train_data = d2l.synthetic_data(true_w, true_b, n_train)
train_iter = d2l.load_array(train_data, batch_size)
test_data = d2l.synthetic_data(true_w, true_b, n_test)
test_iter = d2l.load_array(test_data, batch_size, is_train=False)
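For orientation (my note, not in the original): d2l.synthetic_data returns an (X, y) pair of features and noisy linear labels, so the overfitting regime is easy to see from the shapes:

# Quick shape check (illustrative): only 20 training examples but 200 features.
X, y = train_data
print(X.shape, y.shape)  # torch.Size([20, 200]) torch.Size([20, 1])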

2. Initialize the model parameters

def init_params():
    w = torch.normal(0, 1, size=(num_inputs, 1), requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    return [w, b]

3. Define the L2 penalty

def l2_penalty(w):
    return torch.sum(w.pow(2)) / 2
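As a quick sanity check (mine, illustrative), this equals half the squared L2 norm:

w = torch.randn(5, 1)
assert torch.isclose(l2_penalty(w), w.norm() ** 2 / 2)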

4. Define the training loop

def train(lambd):
    w, b = init_params()  # initialize the parameters
    net, loss = lambda X: d2l.linreg(X, w, b), d2l.squared_loss  # model and loss
    num_epochs, lr = 100, 0.003
    animator = d2l.Animator(xlabel='epochs', ylabel='loss', yscale='log',
                            xlim=[5, num_epochs], legend=['train', 'test'])  # plotting
    for epoch in range(num_epochs):
        for X, y in train_iter:
            l = loss(net(X), y) + lambd * l2_penalty(w)  # add the L2 penalty to the loss
            l.sum().backward()
            d2l.sgd([w, b], lr, batch_size)
        if (epoch + 1) % 5 == 0:
            animator.add(epoch + 1, (d2l.evaluate_loss(net, train_iter, loss),
                                     d2l.evaluate_loss(net, test_iter, loss)))
    print('L2 norm of w:', torch.norm(w).item())  # print once, after training

5. Train without regularization

train(lambd=0)
d2l.plt.show()

6. Train with weight decay (with lambd=3 the train/test gap should narrow and the printed L2 norm of w should be much smaller)

train(lambd=3)
d2l.plt.show()

7. Complete code

import torch
from torch import nn
from d2l import torch as d2l


# Weight decay: implementation from scratch
# 1. Generate the dataset: the smaller the training set, the easier it is to overfit; feature dimension 200
n_train, n_test, num_inputs, batch_size = 20, 100, 200, 5
true_w, true_b = torch.ones((num_inputs, 1)) * 0.01, 0.05
train_data = d2l.synthetic_data(true_w, true_b, n_train)
train_iter = d2l.load_array(train_data, batch_size)
test_data = d2l.synthetic_data(true_w, true_b, n_test)
test_iter = d2l.load_array(test_data, batch_size, is_train=False)

# 2. Initialize the model parameters
def init_params():
    w = torch.normal(0, 1, size=(num_inputs, 1), requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    return [w, b]

# 3. Define the L2 penalty
def l2_penalty(w):
    return torch.sum(w.pow(2)) / 2

# 4. Define the training loop
def train(lambd):
    w, b = init_params()  # initialize the parameters
    net, loss = lambda X: d2l.linreg(X, w, b), d2l.squared_loss  # model and loss
    num_epochs, lr = 100, 0.003
    animator = d2l.Animator(xlabel='epochs', ylabel='loss', yscale='log',
                            xlim=[5, num_epochs], legend=['train', 'test'])  # plotting
    for epoch in range(num_epochs):
        for X, y in train_iter:
            l = loss(net(X), y) + lambd * l2_penalty(w)  # add the L2 penalty to the loss
            l.sum().backward()
            d2l.sgd([w, b], lr, batch_size)
        if (epoch + 1) % 5 == 0:
            animator.add(epoch + 1, (d2l.evaluate_loss(net, train_iter, loss),
                                     d2l.evaluate_loss(net, test_iter, loss)))
    print('L2 norm of w:', torch.norm(w).item())  # print once, after training

# Train without regularization
train(lambd=0)
d2l.plt.show()

# Train with weight decay
train(lambd=3)
d2l.plt.show()

III. Code: concise implementation

1. Generate the dataset: the smaller the training set, the easier it is to overfit; the feature dimension is 200

n_train, n_test, num_inputs, batch_size = 20, 100, 200, 5
true_w, true_b = torch.ones((num_inputs, 1)) * 0.01, 0.05
train_data = d2l.synthetic_data(true_w, true_b, n_train)
train_iter = d2l.load_array(train_data, batch_size)
test_data = d2l.synthetic_data(true_w, true_b, n_test)
test_iter = d2l.load_array(test_data, batch_size, is_train=False)

2. Weight decay, concise implementation

def train_concise(wd):
    net = nn.Sequential(nn.Linear(num_inputs, 1))
    for param in net.parameters():
        param.data.normal_()  # re-initialize all parameters from N(0, 1)
    loss = nn.MSELoss(reduction='none')
    num_epochs, lr = 100, 0.003
    # the bias parameter gets no weight decay
    trainer = torch.optim.SGD([
        {"params": net[0].weight, 'weight_decay': wd},
        {"params": net[0].bias}], lr=lr)
    animator = d2l.Animator(xlabel='epochs', ylabel='loss', yscale='log',
                            xlim=[5, num_epochs], legend=['train', 'test'])
    for epoch in range(num_epochs):
        for X, y in train_iter:
            trainer.zero_grad()
            l = loss(net(X), y)
            l.mean().backward()
            trainer.step()
        if (epoch + 1) % 5 == 0:
            animator.add(epoch + 1,
                         (d2l.evaluate_loss(net, train_iter, loss),
                          d2l.evaluate_loss(net, test_iter, loss)))
    print('L2 norm of w:', net[0].weight.norm().item())
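Here the penalty is handled by the optimizer rather than added to the loss: for plain SGD, weight_decay=wd adds wd * w to the weight's gradient, which matches the gradient of the \frac{\lambda}{2}||w||^{2} term with \lambda = wd. A minimal check (my sketch, with a hypothetical one-parameter loss):

import torch

wd, lr = 3.0, 0.1
w = torch.tensor([1.0, 2.0], requires_grad=True)
opt = torch.optim.SGD([w], lr=lr, weight_decay=wd)
((w - 1.0) ** 2).sum().backward()  # hypothetical loss with gradient 2 * (w - 1)
w_before = w.detach().clone()
opt.step()  # the optimizer steps on grad + wd * w
expected = w_before - lr * (2 * (w_before - 1.0) + wd * w_before)
print(torch.allclose(w, expected))  # True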

3. Train without regularization

train_concise(0)
d2l.plt.show()

4. Train with weight decay

train_concise(3)
d2l.plt.show()

5. Complete code

import torch
from torch import nn
from d2l import torch as d2l


# Generate the dataset: the smaller the training set, the easier it is to overfit; feature dimension 200
n_train, n_test, num_inputs, batch_size = 20, 100, 200, 5
true_w, true_b = torch.ones((num_inputs, 1)) * 0.01, 0.05
train_data = d2l.synthetic_data(true_w, true_b, n_train)
train_iter = d2l.load_array(train_data, batch_size)
test_data = d2l.synthetic_data(true_w, true_b, n_test)
test_iter = d2l.load_array(test_data, batch_size, is_train=False)


# Weight decay: concise implementation
def train_concise(wd):
    net = nn.Sequential(nn.Linear(num_inputs, 1))
    for param in net.parameters():
        param.data.normal_()  # re-initialize all parameters from N(0, 1)
    loss = nn.MSELoss(reduction='none')
    num_epochs, lr = 100, 0.003
    # the bias parameter gets no weight decay
    trainer = torch.optim.SGD([
        {"params": net[0].weight, 'weight_decay': wd},
        {"params": net[0].bias}], lr=lr)
    animator = d2l.Animator(xlabel='epochs', ylabel='loss', yscale='log',
                            xlim=[5, num_epochs], legend=['train', 'test'])
    for epoch in range(num_epochs):
        for X, y in train_iter:
            trainer.zero_grad()
            l = loss(net(X), y)
            l.mean().backward()
            trainer.step()
        if (epoch + 1) % 5 == 0:
            animator.add(epoch + 1,
                         (d2l.evaluate_loss(net, train_iter, loss),
                          d2l.evaluate_loss(net, test_iter, loss)))
    print('L2 norm of w:', net[0].weight.norm().item())

# Train without regularization
train_concise(0)
d2l.plt.show()

# Train with weight decay
train_concise(3)
d2l.plt.show()

If this article helped you, please give it a like; a bookmark and a follow would be even better. Many thanks!

