Ceph 14.2.22 Nautilus: Balancer data rebalancing

Published: 2025-08-03

Enabling and configuring the Ceph Balancer (upmap mode)

Complete steps for enabling and configuring the Balancer on Ceph Nautilus (14.2.22).

1. Prerequisite checks

Check the cluster's initial state and version.

Cluster status (ceph -s)

  cluster:
    id:     xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3 (age 4w)
    mgr: ceph-node1(active, since 4w)
    mds: cephfs_ec:1 {0=ceph-node1=up:active} 1 up:standby
    osd: N osds: N up (since 3w), N in (since 3w)
 
  data:
    pools:   X pools, Y pgs
    objects: A objects, B TiB
    usage:   C TiB used, D PiB / D PiB avail
    pgs:     Y active+clean

Ceph version (ceph -v)

ceph version 14.2.22 (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx) nautilus (stable)
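
Before enabling the balancer, it is also worth quantifying how uneven the current data distribution is. A quick informal check (exact columns can vary slightly between releases): ceph osd df prints per-OSD utilization along with MIN/MAX VAR and STDDEV summary figures; a wide spread between the fullest and emptiest OSDs is exactly what upmap balancing should shrink.

[root@ceph-node1 ~]# ceph osd df
# Compare the %USE / VAR columns across OSDs and note the
# "MIN/MAX VAR" and "STDDEV" values in the summary line.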

2. Enable the Balancer module

Enable the balancer module. Ceph reports that the module is already enabled (it is always-on).

[root@ceph-node1 ~]# ceph mgr module enable balancer
module 'balancer' is already enabled (always-on)

Check the Balancer's initial status: the mode is none and it is not active.

[root@ceph-node1 ~]# ceph balancer status
{
    "last_optimize_duration": "", 
    "plans": [], 
    "mode": "none", 
    "active": false, 
    "optimize_result": "", 
    "last_optimize_started": ""
}
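
It can also be useful to record a baseline score for the current PG distribution before any balancing, so the improvement can be measured later. A minimal sketch (the score shown is a placeholder, not taken from this cluster): ceph balancer eval scores the current distribution, and lower is better.

[root@ceph-node1 ~]# ceph balancer eval
current cluster score 0.0XXXXX (lower is better)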

3. Set the Balancer mode to upmap

We choose upmap mode because it is efficient and has little impact on cluster performance: rather than adjusting CRUSH weights (as crush-compat mode does), it inserts explicit pg-upmap-items entries into the OSDMap that map individual PGs to specific OSDs, achieving a very even PG distribution with comparatively little data movement. It does, however, require luminous-or-newer clients.

Step 3.1: Resolve the client-compatibility issue

Attempting to set upmap mode fails with an error stating that the minimum client compatibility level must be luminous.

[root@ceph-node1 ~]# ceph balancer mode upmap
Error EPERM: min_compat_client "jewel" < "luminous", which is required for pg-upmap. Try "ceph osd set-require-min-compat-client luminous" before enabling this mode
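
Before raising the requirement, it is prudent to confirm that no pre-luminous clients are still connected, since they cannot understand pg-upmap entries. A quick precautionary check: ceph features reports the release level and count of connected mons, OSDs, and clients.

[root@ceph-node1 ~]# ceph features
# In the JSON output, check the "client" section and confirm every
# group reports "release": "luminous" (or newer).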

As the error message suggests, update the client compatibility requirement:

[root@ceph-node1 ~]# ceph osd set-require-min-compat-client luminous
set require_min_compat_client to luminous

Step 3.2: Set upmap mode successfully

With the compatibility issue resolved, setting the mode again succeeds (the command produces no output).

[root@ceph-node1 ~]# ceph balancer mode upmap
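
Before switching automatic balancing on, you can optionally do a dry run to preview what the balancer would change. A minimal sketch using a throwaway plan name (myplan is arbitrary): optimize computes a plan, show prints the pg-upmap-items changes it would apply, eval scores the distribution that would result, and rm discards the plan without executing it.

[root@ceph-node1 ~]# ceph balancer optimize myplan
[root@ceph-node1 ~]# ceph balancer show myplan
[root@ceph-node1 ~]# ceph balancer eval myplan
[root@ceph-node1 ~]# ceph balancer rm myplan

If the plan looks reasonable, you can either apply it once with ceph balancer execute myplan, or simply enable automatic mode as in the next step.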

4. Turn on the Balancer and verify

Now turn the Balancer on.

[root@ceph-node1 ~]# ceph balancer on

Immediately after turning it on, check the status again: active is now true, mode is upmap, and an optimization plan has been created successfully.

[root@ceph-node1 ~]# ceph balancer status
{
    "last_optimize_duration": "0:00:00.xxxxxx", 
    "plans": [], 
    "mode": "upmap", 
    "active": true, 
    "optimize_result": "Optimization plan created successfully", 
    "last_optimize_started": "YYYY-MM-DD HH:MM:SS"
}
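
If you want to limit how aggressively the balancer itself schedules data movement (this is separate from the OSD recovery throttles in section 6), the mgr exposes a couple of knobs. The values below are only examples, and option names can differ between releases, so verify them first with ceph config help <option>: target_max_misplaced_ratio caps the fraction of misplaced PGs tolerated at once, and mgr/balancer/sleep_interval sets how often the automatic balancer wakes up (in seconds).

[root@ceph-node1 ~]# ceph config set mgr target_max_misplaced_ratio 0.05
[root@ceph-node1 ~]# ceph config set mgr mgr/balancer/sleep_interval 60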

5. Observe the change in cluster status

Once the Balancer starts working, it remaps PGs and migrates data, so checking the cluster with ceph -s now shows the health state as HEALTH_WARN.

[root@ceph-node1 ~]# ceph -s
  cluster:
    id:     xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    health: HEALTH_WARN
            Degraded data redundancy: X/Y objects degraded (Z%), A pgs degraded
 
  services:
    mon: 3 daemons, quorum ceph-node1,ceph-node2,ceph-node3 (age 4w)
    mgr: ceph-node1(active, since 4w)
    mds: cephfs_ec:1 {0=ceph-node1=up:active} 1 up:standby
    osd: N osds: N up (since 3w), N in (since 3w); M remapped pgs
 
  data:
    pools:   X pools, Y pgs
    objects: A objects, B TiB
    usage:   C TiB used, D PiB / D PiB avail
    pgs:     X/Y objects degraded (Z%)
             A/B objects misplaced (C%)
             D active+clean
             E active+recovery_wait+undersized+degraded+remapped
             F active+recovering+undersized+remapped
 
  io:
    recovery: X MiB/s, Y objects/s

Note: the HEALTH_WARN state is expected, because data is being migrated according to the optimization plan. The degraded, misplaced, and remapped states indicate that PGs are being moved to more suitable OSDs. Once recovery and backfilling complete, the cluster returns to HEALTH_OK.
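
A simple way to watch the rebalance converge (standard commands; the polling interval is arbitrary) is to poll ceph -s until the degraded/misplaced counters drain to zero, and to re-check the score and distribution from time to time:

[root@ceph-node1 ~]# watch -n 30 ceph -s
[root@ceph-node1 ~]# ceph balancer eval    # score should trend downward
[root@ceph-node1 ~]# ceph osd df           # STDDEV and MIN/MAX VAR should shrink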

6. Throttle recovery speed after enabling the Balancer

# Recovery throughput observed right after enabling the balancer (from the io section of ceph -s):
recovery: 8.9 GiB/s, 2.28k objects/s

# Current recovery/backfill settings on one OSD:
# ceph tell osd.1 config get osd_max_backfills
1
# ceph tell osd.1 config get osd_recovery_max_active
3
# ceph tell osd.1 config get osd_recovery_max_single_start
1
# Client I/O runs at priority 63 by default; this parameter defaults to 3, and lower values mean lower recovery priority.
# ceph tell osd.1 config get osd_recovery_op_priority
1
# ceph tell osd.1 config get osd_recovery_sleep
0.000000

# When the concurrency limits above still do not reduce recovery I/O enough,
# the most effective lever is a sleep: it inserts a short delay (in seconds) between
# consecutive recovery/backfill operations, which directly lowers overall bandwidth.
# Start with 0.1 and adjust based on the observed impact.
ceph tell 'osd.*' config set osd_recovery_sleep 0.1
# ceph tell osd.1 config get osd_recovery_sleep
0.100000

# Recovery throughput after throttling:
recovery: 3.4 GiB/s, 865 objects/s
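
Two follow-up notes (both standard Ceph behaviour; the 0.1 below is just the example value from above). First, ceph tell ... config set only changes the running daemons, so the throttle is lost when an OSD restarts; to persist it, set it in the centralized config database instead. Second, once the rebalance is finished and the cluster is back to HEALTH_OK, remove the sleep so future recoveries run at full speed.

[root@ceph-node1 ~]# ceph config set osd osd_recovery_sleep 0.1          # persistent alternative to ceph tell
[root@ceph-node1 ~]# ceph tell 'osd.*' config set osd_recovery_sleep 0   # revert once recovery completes
[root@ceph-node1 ~]# ceph config rm osd osd_recovery_sleep               # drop the persistent override, if it was set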
