Ceph client configuration

Published: 2022-12-25

The operations team has already set up the Ceph distributed storage environment on three hosts, 211, 212, and 213; the cluster topology is shown in the figure below.

Log in to the Ceph server node and create a user for the client:

 [root@node-2 ~]# ceph auth get-or-create client.zyj mon 'allow r' osd 'allow rw pool=tmVideoPool' -o zyj.keyring

[root@node-2 ~]# ceph auth ls

client.zyj
        key: AQCeTBhjfjuSChAA+C9cwKEpKP+AElI4U6X0Qw==
        caps: [mon] allow r
        caps: [osd] allow rw pool=tmVideoPool
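
If the user or its keyring needs to be inspected or exported again later, the standard ceph auth get subcommand covers both; the output path below follows the default client keyring naming convention and is an assumption, not part of the original steps:

 [root@node-2 ~]# ceph auth get client.zyj                                        # print only this user's key and caps
 [root@node-2 ~]# ceph auth get client.zyj -o /etc/ceph/ceph.client.zyj.keyring   # re-export the keyring under the default name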
The admin user is used on the client here. Copy the following two configuration files from the server's /etc/ceph directory to the client's /etc/ceph directory: the cluster configuration file ceph.conf and the admin keyring ceph.client.admin.keyring.
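
A minimal sketch of that copy step, run from the server (the client hostname below is a placeholder, not from the original setup):

 [root@node-2 ~]# scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring root@<client-host>:/etc/ceph/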

Setting up the Ceph client environment on CentOS 7:

Add a yum repository file, /etc/yum.repos.d/ceph.repo, with the following content:

[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
 
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
 
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

Run yum install ceph to install the Ceph client packages.
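
A minimal sketch of the install step (yum makecache and the -y flag are conveniences assumed here, not part of the original write-up):

[root@bogon ~]# yum makecache        # refresh yum metadata so the new ceph.repo is picked up
[root@bogon ~]# yum install -y ceph  # installs the ceph CLI along with ceph-common

Once the installation finishes, check the version with ceph -v: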

[root@bogon StorageMedia]# ceph -v
ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)

Check the Ceph cluster status:

[root@bogon StorageMedia]# ceph -s
  cluster:
    id:     1805e523-5bde-422a-a0b1-26ee22bee039
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum node-1,node-2,node-3
    mgr: node-2(active), standbys: node-3
    osd: 9 osds: 9 up, 9 in
    rgw: 3 daemons active

  data:
    pools:   9 pools, 128 pgs
    objects: 8.23 k objects, 27 GiB
    usage:   77 GiB used, 823 GiB / 900 GiB avail
    pgs:     128 active+clean
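
The status above is queried with the admin keyring. To confirm that the restricted client.zyj user created earlier also works from this client, something like the following sketch can be used (the keyring path is wherever zyj.keyring was copied to, assumed here to be under /etc/ceph):

[root@bogon StorageMedia]# ceph -s --id zyj --keyring /etc/ceph/zyj.keyring                  # mon 'allow r' is enough for cluster status
[root@bogon StorageMedia]# rados ls -p tmVideoPool --id zyj --keyring /etc/ceph/zyj.keyring  # needs the 'allow rw pool=tmVideoPool' cap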

