[Hands-On Motion Planning] 5.2.c Gradient Descent and Newton's Method: Code Walkthrough


I guessed the beginning, but I could not guess the ending. — A Chinese Odyssey, Zixia

🏰 Code and environment setup: please refer to Environment Configuration and Running the Code!


This section provides code tests for gradient descent and Newton's method:

python3 tests/optimization/optimize_method_test.py

5.2.c.1 Rosenbrock Function

The Rosenbrock function is a non-convex function widely used in mathematical optimization, proposed by Howard Harry Rosenbrock in 1960. It is also known as Rosenbrock's valley, Rosenbrock's banana function, or simply the banana function. The blue valley-shaped surface in the figure above is the Rosenbrock function, defined as follows:

$$f(x, y) = (a - x)^2 + b\left(y - x^2\right)^2$$

The Rosenbrock function has the following properties:

  • Non-convexity: the Rosenbrock function is non-convex; its global minimum sits at the bottom of a smooth, banana-shaped valley, and the curved, nearly flat valley floor makes it easy for optimization algorithms to stall before reaching the optimum.
  • Strong anisotropy: when the parameter b is large, the function varies far more sharply in the y direction than in the x direction, which makes it harder for optimization algorithms to locate the global minimum.
  • Long valley floor: the surface contains a long, banana-shaped valley; once an optimization algorithm enters this valley, it needs many more steps to reach the global minimum.
  • Vanishing gradients: near the global minimum the gradient is very small, approaching zero, so gradient-based methods may require many iterations to converge to the minimum.

The global minimum of the Rosenbrock function is at the point $(1, 1)$, which lies at the bottom of the long, nearly flat valley and is therefore hard for iterative methods to reach.
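Both terms of the function are squares, so a short worked check confirms where that minimum lies:

$$f(x, y) = (a - x)^2 + b\left(y - x^2\right)^2 \ge 0, \qquad f(x, y) = 0 \iff x = a,\ y = x^2 = a^2 .$$

With the choice $a = 1$, $b = 100$ used below, the unique global minimizer is $(x, y) = (1, 1)$ with $f(1, 1) = 0$.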

Next, we use gradient descent and Newton's method, respectively, to find the minimum of the Rosenbrock function:

$$\min_{\boldsymbol{\Gamma}}\ 100\left(y - x^2\right)^2 + (1 - x)^2, \qquad \boldsymbol{\Gamma} = \begin{bmatrix} x \\ y \end{bmatrix} \in \mathbb{R}^2$$

The gradient and Hessian of the objective function are:

$$\boldsymbol{\nabla} f(\boldsymbol{\Gamma}) = \begin{bmatrix} \frac{\partial f}{\partial x}(\boldsymbol{\Gamma}) \\ \frac{\partial f}{\partial y}(\boldsymbol{\Gamma}) \end{bmatrix} = \begin{bmatrix} -400x\left(y - x^2\right) - 2(1 - x) \\ 200\left(y - x^2\right) \end{bmatrix}$$

$$\mathbf{H}(\boldsymbol{\Gamma}) = \begin{bmatrix} \frac{\partial^2 f}{\partial x^2}(\boldsymbol{\Gamma}) & \frac{\partial^2 f}{\partial x \partial y}(\boldsymbol{\Gamma}) \\ \frac{\partial^2 f}{\partial y \partial x}(\boldsymbol{\Gamma}) & \frac{\partial^2 f}{\partial y^2}(\boldsymbol{\Gamma}) \end{bmatrix} = \begin{bmatrix} 1200x^2 - 400y + 2 & -400x \\ -400x & 200 \end{bmatrix}$$
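As a quick check of these formulas, evaluating them at $(1, 1)$ gives a zero gradient and a positive definite Hessian:

$$\boldsymbol{\nabla} f(1, 1) = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \qquad \mathbf{H}(1, 1) = \begin{bmatrix} 802 & -400 \\ -400 & 200 \end{bmatrix}, \qquad \det \mathbf{H}(1, 1) = 802 \cdot 200 - 400^2 = 400 > 0,$$

so $(1, 1)$ satisfies the second-order sufficient conditions for a strict minimum.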

We define optimize(method), which takes an optimization method (gradient descent or Newton's method) as input and minimizes rosenbrock_function starting from the initial point $(x, y) = (-1.2, 1)$.

import numpy as np
import sympy as sm

def rosenbrock_function(x, y):
    # Rosenbrock function with a = 1, b = 100
    return 100 * (y - x**2) ** 2 + (1 - x) ** 2

def optimize(method):
    # Build symbolic variables and minimize starting from (x, y) = (-1.2, 1)
    x, y = sm.symbols("x y")
    symbols = [x, y]

    solution, x_star = method(rosenbrock_function(x, y), symbols, {x: -1.2, y: 1})
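
In the test script, the driver presumably just calls optimize with each method in turn. A minimal sketch of such a call (assuming gradient_descent and newton_method are defined as below) would be:

if __name__ == "__main__":
    # Run both solvers on the same problem and compare their behaviour.
    optimize(gradient_descent)  # thousands of small fixed-size steps
    optimize(newton_method)     # a handful of curvature-aware steps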

5.2.c.2 Gradient Descent

We use sympy to compute the gradient automatically:

def get_gradient(
    function: sm.core.expr.Expr,
    symbols: list[sm.core.symbol.Symbol],
    x0: dict[sm.core.symbol.Symbol, float],
) -> np.ndarray:
    d1 = {}
    gradient = np.array([])

    for i in symbols:
        # First-order partial derivative with respect to i, evaluated at x0
        d1[i] = sm.diff(function, i, 1).evalf(subs=x0)
        gradient = np.append(gradient, d1[i])

    return gradient.astype(np.float64)  # cast sympy floats to numpy floats
    
def gradient_descent(
    function: sm.core.expr.Expr,
    symbols: list[sm.core.symbol.Symbol],
    x0: dict[sm.core.symbol.Symbol, float],
    learning_rate: float = 0.001,
    iterations: int = 10000,
) -> tuple[dict[sm.core.symbol.Symbol, float] | None, dict]:
    # x_star[i] stores the iterate at step i as a numpy array
    x_star = {}
    x_star[0] = np.array(list(x0.values()))

    print(f"Starting Values: {x_star[0]}")

    for i in range(iterations):
        # Evaluate the gradient at the current iterate
        gradient = get_gradient(function, symbols, dict(zip(x0.keys(), x_star[i])))

        # Fixed-step update: move against the gradient direction
        x_star[i + 1] = x_star[i].T - learning_rate * gradient.T

        # Stop when the step length falls below the tolerance (10e-5 = 1e-4)
        if np.linalg.norm(x_star[i + 1] - x_star[i]) < 10e-5:
            solution = dict(zip(x0.keys(), x_star[i + 1]))
            print(f"\nConvergence Achieved ({i+1} iterations): Solution = {solution}")
            break
        else:
            solution = None

        print(f"Step {i+1}: {x_star[i+1]}")

    return solution, x_star
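
As a quick sanity check (the call below is illustrative and not part of the original test), get_gradient can be evaluated at the starting point before running the full descent; at $(-1.2, 1)$ the closed-form gradient is $(-215.6, -88)$:

x, y = sm.symbols("x y")
f = rosenbrock_function(x, y)

# Gradient at the starting point (-1.2, 1): matches the closed form
# -400*x*(y - x^2) - 2*(1 - x) = -215.6 and 200*(y - x^2) = -88
print(get_gradient(f, [x, y], {x: -1.2, y: 1}))  # approximately [-215.6, -88.0]

solution, steps = gradient_descent(f, [x, y], {x: -1.2, y: 1})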

With a fixed step size (learning rate) learning_rate, gradient descent struggles on the non-convex Rosenbrock function: it is easy to stall far from the optimum, and once the iterate reaches the nearly flat valley floor the gradient becomes very small, so a very large number of iterations is needed to make progress.

……
Step 4210: [0.89674526 0.80371261]
Step 4211: [0.89679414 0.8038005 ]
Step 4212: [0.89684299 0.80388834]
Step 4213: [0.89689182 0.80397614]
Step 4214: [0.89694062 0.8040639 ]
Step 4215: [0.89698939 0.80415161]
Step 4216: [0.89703813 0.80423928]
Step 4217: [0.89708685 0.80432691]
Step 4218: [0.89713554 0.80441449]
Step 4219: [0.8971842 0.80450203]
Step 4220: [0.89723284 0.80458952]
Step 4221: [0.89728145 0.80467697]

After more than 4,000 iterations, gradient descent stops near $(0.897, 0.805)$, still short of the global minimum $(1, 1)$. Because each update is learning_rate times the gradient, the stopping test $\|x_{k+1} - x_k\| < 10^{-4}$ fires as soon as the gradient norm drops below $10^{-4} / 0.001 = 0.1$, which happens on the flat valley floor well before the minimizer is reached.
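
One common remedy for the fixed-step problem is to choose the step length adaptively. The sketch below is not part of the original code; it shows a minimal backtracking (Armijo) line search that could replace the constant learning_rate in the update above:

def backtracking_step(function, symbols, xk, gradient,
                      alpha0=1.0, rho=0.5, c=1e-4):
    """Shrink the step until the Armijo sufficient-decrease condition holds."""
    f = sm.lambdify(symbols, function, "numpy")
    fk = f(*xk)
    alpha = alpha0
    # Armijo condition for the steepest-descent direction -gradient:
    # f(x - a*g) <= f(x) - c * a * ||g||^2
    while f(*(xk - alpha * gradient)) > fk - c * alpha * np.dot(gradient, gradient):
        alpha *= rho
    return alpha

Inside gradient_descent, the constant learning_rate could then be replaced by backtracking_step(function, symbols, x_star[i], gradient), taking large steps on the steep walls and small steps along the valley floor.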

5.2.c.3 Newton's Method
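
Newton's method scales the gradient by the inverse Hessian, so each update uses local curvature information:

$$\boldsymbol{\Gamma}_{k+1} = \boldsymbol{\Gamma}_k - \mathbf{H}(\boldsymbol{\Gamma}_k)^{-1}\, \boldsymbol{\nabla} f(\boldsymbol{\Gamma}_k)$$

This is exactly the update computed in newton_method below.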

Likewise, we use sympy to compute the Hessian matrix and implement Newton's method:

def get_hessian(
    function: sm.core.expr.Expr,
    symbols: list[sm.core.symbol.Symbol],
    x0: dict[sm.core.symbol.Symbol, float],
) -> np.ndarray:
    d2 = {}
    hessian = np.array([])

    for i in symbols:
        for j in symbols:
            # Second-order partial derivative with respect to i and j, evaluated at x0
            d2[f"{i}{j}"] = sm.diff(function, i, j).evalf(subs=x0)
            hessian = np.append(hessian, d2[f"{i}{j}"])

    # Reshape the flat array into an n x n matrix
    hessian = np.array(np.array_split(hessian, len(symbols)))

    return hessian.astype(np.float64)

def newton_method(
    function: sm.core.expr.Expr,
    symbols: list[sm.core.symbol.Symbol],
    x0: dict[sm.core.symbol.Symbol, float],
    iterations: int = 100,
) -> tuple[dict[sm.core.symbol.Symbol, float] | None, dict]:
    x_star = {}
    x_star[0] = np.array(list(x0.values()))

    print(f"Starting Values: {x_star[0]}")

    for i in range(iterations):
        # Evaluate gradient and Hessian at the current iterate
        gradient = get_gradient(function, symbols, dict(zip(x0.keys(), x_star[i])))
        hessian = get_hessian(function, symbols, dict(zip(x0.keys(), x_star[i])))

        # Newton update: scale the gradient by the inverse Hessian
        x_star[i + 1] = x_star[i].T - np.linalg.inv(hessian) @ gradient.T

        # Stop when the step length falls below the tolerance
        if np.linalg.norm(x_star[i + 1] - x_star[i]) < 10e-5:
            solution = dict(zip(x0.keys(), x_star[i + 1]))
            print(f"\nConvergence Achieved ({i+1} iterations): Solution = {solution}")
            break
        else:
            solution = None

        print(f"Step {i+1}: {x_star[i+1]}")

    return solution, x_star  # return the result so optimize() can unpack it

Running the test, Newton's method reaches the global minimum in only 5 iterations; the animation at the top of this article visualizes the iterates.

Starting Values: [-2 2]
Step 1: [-1.9925187 3.97007481]
Step 2: [ 0.96687269 -7.82315462]
Step 3: [0.96689159 0.93487935]
Step 4: [1. 0.99890383]
Step 5: [1. 1.]
