
A Long-Form Guide to Ceph Quincy Deployment and K8s Integration

Published: 2024-01-19

```
[""],"data-root": "/data/docker-root","exec-opts": ["native.cgroupdriver=systemd"]}' | tee /etc/docker/daemon.json
systemctl daemon-reload && systemctl start docker
cephadm bootstrap --mon-ip 172.16.213.137
```
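The first half of the `tee /etc/docker/daemon.json` command above was cut off. For reference, a complete daemon.json with the same three options would look roughly like this; the registry-mirrors entry is left empty here, as in the fragment above, so fill in your own mirror if you use one:

```
# Sketch of the full daemon.json (mirror URL intentionally left empty)
cat > /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": [""],
  "data-root": "/data/docker-root",
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
```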

4. Bootstrap the cluster with cephadm

```
[root@pve-svr-ceph-01 data]# ./cephadm bootstrap --mon-ip 172.16.213.137
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 9e977242-2ed3-11ee-9e35-b02628aff3e6
Verifying IP 172.16.213.137 port 3300 ...
Verifying IP 172.16.213.137 port 6789 ...
Mon IP `172.16.213.137` is in CIDR network `172.16.213.0/24`
Mon IP `172.16.213.137` is in CIDR network `172.16.213.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v17...
Ceph version: ceph version 17.2.6 (d7ff0d10654d2280e08f1ab989c7cdf3064446a5) quincy (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
firewalld ready
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 172.16.213.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
firewalld ready
firewalld ready
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 5...
mgr epoch 5 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host pve-svr-ceph-01...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 9...
mgr epoch 9 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
firewalld ready
Ceph Dashboard is now available at:
URL:
User: admin
Password: or2qgjs2o7
Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/9e977242-2ed3-11ee-9e35-b02628aff3e6/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:
sudo ./cephadm shell --fsid 9e977242-2ed3-11ee-9e35-b02628aff3e6 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Or, if you are only running a single cluster on this host:
sudo ./cephadm shell
Please consider enabling telemetry to help improve Ceph:
ceph telemetry on
For more information see:
Bootstrap complete.
```

Once the bootstrap has finished you can access the dashboard:

URL:

User: admin

Password: or2qgjs2o7

5.1 Create OSDs: first add the remaining hosts to the cluster

```
# Export the ceph public key
ceph cephadm get-pub-key > ~/ceph.pub
# Copy it to the other nodes
ssh-copy-id -f -i ~/ceph.pub root@pve-svr-ceph-02
ssh-copy-id -f -i ~/ceph.pub root@pve-svr-ceph-03
# Add the hosts
ceph orch host add pve-svr-ceph-02
ceph orch host add pve-svr-ceph-03
```
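To confirm that both hosts actually joined the cluster, you can list them afterwards:

```
# List all hosts known to the orchestrator
ceph orch host ls
```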

5.2 Automatically scan all available disks and create OSDs

```
ceph orch apply osd --all-available-devices
# OSDs can also be added manually for specific devices
ceph orch daemon add osd pve-svr-ceph-01:/dev/sdb
ceph orch daemon add osd pve-svr-ceph-02:/dev/sdb
ceph orch daemon add osd pve-svr-ceph-03:/dev/sdb
# Check device status
[root@pve-svr-ceph-01 ~]# ceph orch device ls
HOST             PATH      TYPE  DEVICE ID                          SIZE   AVAILABLE  REFRESHED  REJECT REASONS
pve-svr-ceph-01  /dev/sda  hdd   HGST_HUS728T8TAL5200_VAJ1WYEL      8001G  Yes        80s ago
pve-svr-ceph-01  /dev/sdb  hdd   HGST_HUS728T8TAL5200_VAJ15R6L      8001G  Yes        80s ago
pve-svr-ceph-01  /dev/sdc  hdd   HGST_HUS728T8TAL5200_VAJ15NZL      8001G  Yes        80s ago
pve-svr-ceph-01  /dev/sdd  hdd   HGST_HUS728T8TAL5200_VAJ15T7L      8001G  Yes        80s ago
pve-svr-ceph-01  /dev/sde  hdd   HGST_HUS728T8TAL5200_VAJ1YGLL      8001G  Yes        80s ago
pve-svr-ceph-01  /dev/sdf  hdd   HGST_HUS728T8TAL5200_VAJ1N47L      8001G  Yes        80s ago
pve-svr-ceph-01  /dev/sdg  hdd   HGST_HUS728T8TAL5200_VAJ1N3NL      8001G  Yes        80s ago
pve-svr-ceph-01  /dev/sdh  hdd   HGST_HUS728T8TAL5200_VAJ15R5L      8001G  Yes        80s ago
pve-svr-ceph-01  /dev/sdi  hdd   HGST_HUS728T8TAL5200_VAJ1ZXTL      8001G  Yes        80s ago
pve-svr-ceph-01  /dev/sdj  hdd   HGST_HUS728T8TAL5200_VAJ15PWL      8001G  Yes        80s ago
pve-svr-ceph-02  /dev/sda  hdd   HGST_HUS728T8TAL5200_VAJD42ZR      8001G  Yes        79s ago
pve-svr-ceph-02  /dev/sdb  hdd   HGST_HUS728T8TAL5200_VAJDE8MR      8001G  Yes        79s ago
pve-svr-ceph-02  /dev/sdc  hdd   HGST_HUS728T8TAL5200_VAJBWV3R      8001G  Yes        79s ago
pve-svr-ceph-02  /dev/sdd  hdd   HGST_HUS728T8TAL5200_VAJBVJKR      8001G  Yes        79s ago
pve-svr-ceph-02  /dev/sde  hdd   HGST_HUS728T8TAL5200_VAJAZLYR      8001G  Yes        79s ago
pve-svr-ceph-02  /dev/sdf  hdd   HGST_HUS728T8TAL5200_VAJB942R      8001G  Yes        79s ago
pve-svr-ceph-02  /dev/sdg  hdd   HGST_HUS728T8TAL5200_VAJBW1UR      8001G  Yes        79s ago
pve-svr-ceph-02  /dev/sdh  hdd   HGST_HUS728T8TAL5200_VAJBWXUR      8001G  Yes        79s ago
pve-svr-ceph-02  /dev/sdi  hdd   HGST_HUS728T8TAL5200_VAJ1YH4L      8001G  Yes        79s ago
pve-svr-ceph-02  /dev/sdj  hdd   HGST_HUS728T8TAL5200_VAJ1YEKL      8001G  Yes        79s ago
pve-svr-ceph-03  /dev/sda  hdd   TOSHIBA_MG06SCA800EY_1090A0Q0F1GF  8001G  Yes        81s ago
pve-svr-ceph-03  /dev/sdb  hdd   TOSHIBA_MG06SCA800EY_1090A0Q1F1GF  8001G  Yes        81s ago
pve-svr-ceph-03  /dev/sdc  hdd   TOSHIBA_MG06SCA800EY_1090A0Q8F1GF  8001G  Yes        81s ago
pve-svr-ceph-03  /dev/sdd  hdd   TOSHIBA_MG06SCA800EY_1090A017F1GF  8001G  Yes        81s ago
pve-svr-ceph-03  /dev/sde  hdd   TOSHIBA_MG06SCA800EY_1090A01YF1GF  8001G  Yes        81s ago
pve-svr-ceph-03  /dev/sdf  hdd   TOSHIBA_MG06SCA800EY_1090A0PYF1GF  8001G  Yes        81s ago
pve-svr-ceph-03  /dev/sdg  hdd   TOSHIBA_MG06SCA800EY_1090A0QSF1GF  8001G  Yes        81s ago
pve-svr-ceph-03  /dev/sdh  hdd   TOSHIBA_MG06SCA800EY_1090A003F1GF  8001G  Yes        81s ago
pve-svr-ceph-03  /dev/sdi  hdd   TOSHIBA_MG06SCA800EY_1090A0Q7F1GF  8001G  Yes        81s ago
pve-svr-ceph-03  /dev/sdj  hdd   TOSHIBA_MG06SCA800EY_1090A0Q2F1GF  8001G  Yes        81s ago
# Point the dashboard at the NFS-Ganesha pool/namespace (this can also be done from the GUI)
ceph dashboard set-ganesha-clusters-rados-pool-namespace [/]
```

Normally, on brand-new servers with new disks (no RAID), the OSDs are created automatically. However, if an existing server's data disks were previously partitioned, or partitions were created and later deleted, the OSDs cannot be created automatically. The reason: in my case these were GPT disks, and after the partitions are deleted the GPT data structures remain on disk, so you need to run the following steps to wipe them (if your OSDs are discovered automatically, you can skip 5.3).

5.3 Wipe leftover partition data with sgdisk

# For GPT disks, the GPT data structures remain on disk even after the partitions are deleted, so use the sgdisk command to wipe them (these servers previously ran a Hadoop cluster and the disks had been partitioned).

```
yum install gdisk -y
sgdisk --zap-all /dev/sda
sgdisk --zap-all /dev/sdb
sgdisk --zap-all /dev/sdc
sgdisk --zap-all /dev/sdd
sgdisk --zap-all /dev/sde
sgdisk --zap-all /dev/sdf
sgdisk --zap-all /dev/sdg
sgdisk --zap-all /dev/sdh
sgdisk --zap-all /dev/sdi
sgdisk --zap-all /dev/sdj
# Repeat the same operation on the other hosts
```
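With many identical disks, the same wipe can be written as a loop. This is just a convenience sketch and assumes the device names /dev/sda through /dev/sdj used above:

```
# Zap GPT/MBR structures on all ten data disks in one go
for dev in /dev/sd{a..j}; do
  sgdisk --zap-all "$dev"
done
```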

6. Create CephFS (the metadata pools and MDS daemons are created automatically)

ceph fs volume create cephfs
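To confirm that the volume, its pools, and the MDS daemons actually came up, a quick check looks like this:

```
# Show the new filesystem, its data/metadata pools and the active MDS
ceph fs status cephfs
# List the mds service deployed by the orchestrator
ceph orch ls mds
```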

7. Using Ceph and integrating it with K8s

7.1 Mounting CephFS with mount


Use mount.ceph to mount CephFS.

# Configure hostnames

```
cat >> /etc/hosts << EOF
172.16.213.137 pve-svr-ceph-01
172.16.213.55 pve-svr-ceph-02
172.16.213.141 pve-svr-ceph-03
EOF
```

# Install ceph-common on the client (provides the mount.ceph helper)

# Obtain the ceph secret

Rancher needs a ceph secret to connect to the Ceph cluster. Run the following commands on a Ceph node to generate it:

```
mkdir /etc/ceph
[root@pve-svr-ceph-01 data]# ceph auth get-key client.admin | base64
QVFEVFZjWms2QWxCSkJBQVdoSEdtdnAvSG5IZU1oWXFpbmpyQlE9PQ==
# Write it to a file
cat >> /etc/ceph/admin.secret << EOF
QVFEVFZjWms2QWxCSkJBQVdoSEdtdnAvSG5IZU1oWXFpbmpyQlE9PQ==
EOF
mkdir /cephfs_data
```

# Mount

```
mount -t ceph pve-svr-ceph-01:6789,pve-svr-ceph-02:6789,pve-svr-ceph-03:6789:/ /cephfs_data -o name=admin,secretfile=/etc/ceph/admin.secret
```
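If the mount should survive reboots, an equivalent /etc/fstab entry can be used instead; this is a sketch reusing the same secret file, with extra options (noatime, _netdev) you may want to adjust:

```
# /etc/fstab entry for the CephFS kernel-client mount
pve-svr-ceph-01:6789,pve-svr-ceph-02:6789,pve-svr-ceph-03:6789:/  /cephfs_data  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 0
```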

7.2 Ceph NFS-Ganesha (creating PV/PVC/SC over NFS)

7.2.1 Mount


```
mount -t nfs -o nfsvers=4.1,proto=tcp 172.16.213.55:/hy_data /mnt/hy_k8s_data/
[root@rke-k8s-worker1 ~]# df -h | grep mnt
172.16.213.55:/hy_data   70T     0   70T   0% /mnt/hy_k8s_data
```

7.2.2 Create a K8s PV

```
cat > pv.yml <<EOF
```
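The body of pv.yml did not survive above. A minimal sketch of what an NFS-backed PV for the 172.16.213.55:/hy_data export could look like (name, size and options are illustrative, not from the original):

```
# Hypothetical pv.yml: a statically provisioned NFS PV backed by the ganesha export
cat > pv.yml <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-nfs-pv            # illustrative name
spec:
  capacity:
    storage: 10Gi              # illustrative size
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - nfsvers=4.1
  nfs:
    server: 172.16.213.55
    path: /hy_data
EOF
kubectl create -f pv.yml
```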

7.2.3 Create a K8s PVC

```
cat > pvc.yml <<EOF
cat > pvcuse.yml <<EOF
```
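The bodies of pvc.yml and pvcuse.yml were likewise lost. A sketch of a matching PVC bound to the PV above, plus a pod that mounts it, could look like this (names are illustrative):

```
# Hypothetical pvc.yml: claim the NFS PV created above
cat > pvc.yml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: ceph-nfs-pv      # bind explicitly to the static PV
  storageClassName: ""         # keep dynamic provisioners out of the way
EOF
kubectl create -f pvc.yml

# Hypothetical pvcuse.yml: a pod that mounts the claim
cat > pvcuse.yml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: ceph-nfs-pvc
EOF
kubectl create -f pvcuse.yml
```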

7.2.4 Create the storage class (deploy nfs-subdir-external-provisioner)

```
cd /data/ceph_k8s_yml
git clone 
cd nfs-subdir-external-provisioner
# Use the currently selected namespace as the default
NS=$(kubectl config get-contexts | grep -e "_*" | awk '{print $5}')
NAMESPACE=${NS:-default}
sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml ./deploy/deployment.yaml
# Apply the RBAC rules
kubectl create -f deploy/rbac.yaml
# Edit the image, provisioner name, NFS address and mount path in deploy/deployment.yaml to match your setup
kubectl create -f deploy/deployment.yaml
```
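For reference, the part of deploy/deployment.yaml that usually needs editing is the provisioner image, the PROVISIONER_NAME, and the NFS server/path. A sketch of those fields adjusted to this cluster follows; the image tag and values are assumptions based on the mount above and the StorageClass used later, not copied from the original post:

```
# Excerpt of deploy/deployment.yaml after editing (sketch, not the full manifest)
      containers:
        - name: nfs-client-provisioner
          image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2  # tag may differ
          env:
            - name: PROVISIONER_NAME
              value: unixtommy/ceph-nfs-storage   # must match the "provisioner" field of the SC below
            - name: NFS_SERVER
              value: 172.16.213.55
            - name: NFS_PATH
              value: /hy_data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.213.55
            path: /hy_data
```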

7.2.5 Configure the storage class

```
# Adjust the image and address in deployment.yml to match your NFS setup
cat > storageclassdeploy.yml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-nfs-client
provisioner: unixtommy/ceph-nfs-storage
allowVolumeExpansion: true
parameters:
  pathPattern: "${.PVC.namespace}-${.PVC.name}"
  onDelete: delete
EOF
```

Apply it to create the SC:

```
[root@rke-k8s-master1 ceph_k8s_yml]# cat > storageclassdeploy.yml <<'EOF'
> apiVersion: storage.k8s.io/v1
> kind: StorageClass
> metadata:
>   name: ceph-nfs-client
> provisioner: unixtommy/ceph-nfs-storage
> allowVolumeExpansion: true
> parameters:
>   pathPattern: "${.PVC.namespace}-${.PVC.name}"
>   onDelete: delete
> EOF
[root@rke-k8s-master1 ceph_k8s_yml]# ll
total 20
drwxr-xr-x 10 root root 4096 Jul 30 22:43 nfs-subdir-external-provisioner
-rw-r--r--  1 root root  385 Jul 30 22:37 pvcuse.yml
-rw-r--r--  1 root root  271 Jul 30 22:32 pvc.yml
-rw-r--r--  1 root root  366 Jul 30 22:30 pv.yml
-rw-r--r--  1 root root  215 Jul 30 23:00 storageclassdeploy.yml
[root@rke-k8s-master1 ceph_k8s_yml]# vim storageclassdeploy.yml
[root@rke-k8s-master1 ceph_k8s_yml]# kubectl create -f storageclassdeploy.yml
storageclass.storage.k8s.io/nfs-client created
[root@rke-k8s-master1 ceph_k8s_yml]# kubectl get sc
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-nfs-client   unixtommy/ceph-nfs-storage   Delete          Immediate           true                   4s
sc-nfs1-1         driver.longhorn.io           Delete          Immediate           true                   21d
[root@rke-k8s-master1 ceph_k8s_yml]# kubectl get storageclass
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-nfs-client   unixtommy/ceph-nfs-storage   Delete          Immediate           true                   18s
sc-nfs1-1         driver.longhorn.io           Delete          Immediate           true                   21d
# To use it, just set storageClassName: ceph-nfs-client in pvc.spec
```

7.2.6 Set the default storage class

```
# Mark ceph-nfs-client as the default storage class
kubectl patch storageclass ceph-nfs-client -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
```
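As a quick sanity check, the default class should now be flagged with "(default)" next to its name:

```
kubectl get storageclass
```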

7.3 Block storage (Kubernetes StorageClass)

7.3.1 Create the kubernetes pool

```
ceph osd pool create kubernetes
rbd pool init kubernetes
# csi
# Check mon information
[root@pve-svr-ceph-01 data]# ceph mon dump
dumped monmap epoch 3
epoch 3
fsid 9e977242-2ed3-11ee-9e35-b02628aff3e6
last_changed 2023-07-30T12:27:17.082792+0000
created 2023-07-30T12:21:40.511162+0000
min_mon_release 17 (quincy)
election_strategy: 1
0: [v2:172.16.213.137:3300/0,v1:172.16.213.137:6789/0] mon.pve-svr-ceph-01
1: [v2:172.16.213.55:3300/0,v1:172.16.213.55:6789/0] mon.pve-svr-ceph-02
2: [v2:172.16.213.141:3300/0,v1:172.16.213.141:6789/0] mon.pve-svr-ceph-03
```

7.3.2 Create csi-config-map.yml

```
cat <<EOF > /data/kubernetes/csi-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "9e977242-2ed3-11ee-9e35-b02628aff3e6",
        "monitors": [
          "172.16.213.137:6789",
          "172.16.213.55:6789",
          "172.16.213.141:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config
EOF
```

7.3.3 Create a dedicated kubernetes user

```
[root@pve-svr-ceph-01 kubernetes]# ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes' mgr 'profile rbd pool=kubernetes'
[client.kubernetes]
key = AQBNZMZkDc8zFBAATBsiKBdQWLeFPtTS6y8VRA==

# kms: recent ceph-csi versions require a KMS ConfigMap to exist; its contents can be empty, but it must be there
cat <<EOF > /data/kubernetes/csi-kms-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    {}
metadata:
  name: ceph-csi-encryption-kms-config
EOF
```
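If you want to double-check the capabilities that were granted to the new user before wiring it into Kubernetes:

```
# Show the key and capabilities of the kubernetes client
ceph auth get client.kubernetes
```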

7.3.4 Create the KMS ConfigMap

```
kubectl apply -f csi-kms-config-map.yaml

# ceph-config-map.yaml holds the Ceph configuration that goes into ceph.conf
cat <<EOF > ceph-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  ceph.conf: |
    [global]
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
  # keyring is a required key and its value should be empty
  keyring: |
metadata:
  name: ceph-config
EOF
# Apply it
kubectl apply -f ceph-config-map.yaml

# Secret for the user created above
cat <<EOF > csi-rbd-secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
stringData:
  userID: kubernetes
  userKey: AQBNZMZkDc8zFBAATBsiKBdQWLeFPtTS6y8VRA==
EOF
kubectl apply -f csi-rbd-secret.yaml
```

7.3.5 Deploy the ceph-csi plugins


```
kubectl apply -f 
kubectl apply -f 
wget 
kubectl apply -f csi-rbdplugin-provisioner.yaml
wget 
kubectl apply -f csi-rbdplugin.yaml
```
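Before moving on, it is worth checking that the provisioner deployment and the node-plugin daemonset actually came up; a simple filter on the pod names is enough:

```
# All csi-rbdplugin-provisioner and csi-rbdplugin pods should be Running
kubectl get pods -o wide | grep csi-rbdplugin
```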

7.3.6 Use the block storage: create the StorageClass

```
cat <<EOF > csi-rbd-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: 9e977242-2ed3-11ee-9e35-b02628aff3e6
  pool: kubernetes
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
EOF
# kubectl apply -f csi-rbd-sc.yaml
```

7.3.7 Create a PVC

```
cat <<EOF > raw-block-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
EOF
$ kubectl apply -f raw-block-pvc.yaml

# Consume it in a pod
cat <<EOF > raw-block-pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-raw-block-volume
spec:
  containers:
    - name: fc-container
      image: fedora:26
      command: ["/bin/sh", "-c"]
      args: ["tail -f /dev/null"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
EOF
$ kubectl apply -f raw-block-pod.yaml
```
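The raw block mode above exposes the RBD image as a bare device; most workloads want a normal filesystem volume instead. A sketch of a filesystem-mode PVC against the same StorageClass, and a pod mounting it (names and image are illustrative):

```
# Hypothetical filesystem-mode PVC on the csi-rbd-sc class
cat <<EOF > fs-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-fs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
EOF
kubectl apply -f fs-pvc.yaml

# Pod that mounts the claim at a normal path
cat <<EOF > fs-pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-rbd-fs
spec:
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /var/lib/www/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: rbd-fs-pvc
EOF
kubectl apply -f fs-pod.yaml
```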

8. Removing the Ceph cluster

Before you can claim to have mastered the installation, you first have to learn how to tear it down. [tiger]

# Wipe the Ceph daemons on every host in the cluster; run this on each node

```
ceph fsid
./cephadm rm-cluster --force --zap-osds --fsid 88bf45d8-2ecb-11ee-a554-b02628aff3e6
```
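After rm-cluster completes on a node, you can confirm that no cephadm-managed daemons are left behind on that host:

```
# Should report no remaining daemons for the removed fsid
./cephadm ls
```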
