DevOps
Kettle
Kettle(Pentaho Data Integration)
GitLab
01.使用二进制部署 Gitlab
02.Git 分支模型
03.install gitlab for docker
04.unicorn & Puma
05.重置用户的密码
06.OpenLDAP+Gitlab配置
07.备份恢复与迁移
08.Git 提交过程
如何使用phpLdapAdmin创建CN用户、OU用户组?
svn 代码迁移到 git
Git如何更新远程仓库代码到本地
Jenkins
01.安装 Jenkins
02.使用email-ext替换Jenkins的默认邮件通知
03.插件
04.基于Role-based认证权限管理
05.jenkins 分布式
离线安装rackshift V1.0.0
xmlstarlet
FlinkX
SaltStack
01.SaltStack运行原理
12.Saltstack远程执行-编写执行模块
13.Saltstack配置管理-状态模块
14.YAML和Jinja2
15.Job 管理
16.Saltstack不使用master
17.Saltstack多master
18.Salt-syndic
19.salt-ssh
20.Saltstack_API
11.远程执行-返回程序
10.远程执行
Note
02.SaltStack组件
03.部署Saltstack
04.Saltstack配置文件
05.Saltstack_module
06.Saltstack远程执行
07.Saltstack配置管理
08.Grains
09.Pillar
21.Saltstack_Web管理
Saltstack 3005
Cobbler
01.PXE + Kickstart
08.电源管理
07.Cobbler安装操作系统
06.yum仓库配置管理
修改initrd.img引导镜像
05.Cobbler Web管理
Cobbler 3.2.1
SaltStack
04.Importing Distribution
03.Configuration Files
02.安装Cobbler
09.Cobbler 常见错误
Prometheus
02.prometheus 与 jmx_exporter
Service Discovery
Prometheus+mysqld_exporter搭建Mysql监控
Pushgateway
AlertManager
PromQL
blackbox_exporter
Exporter
Prometheus+clickhouse_exporter 监控 Clickhouse
Prometheus+jmx-exporter 监控 Tomcat
Prometheus + cAdvisor 监控 Docker
Prometheus+ elasticsearch_exporter 搭建 Elastic 监控
Prometheus+kafka_exporter搭建Kafka监控
Prometheus+jmx_exporter+Grafana 监控 Hadoop
01.Prometheus 收集器和采集到的指标(Metric)
03.install Prometheus
04.Prometheus 存储
05.Prometheus 联邦
06.规则(rule)、模板配置
07.Prometheus 查询
Prometheus+oracledb_exporter 监控 Oracle
Prometheus + Grafana 监控部署
08.label
Mimir
备份软件
文件内容比较工具
Grafana
Zabbix
00.install Zabbix 5.0 for CentOS 8.0
01.CentOS7 离线安装 ZABBIX 5.0
02.个性参数
03.添加中文字支持
Oracle桌面虚拟化产品阵容
快速创建一个Windows Service
Data Backup and Recovery
01.备份的发展
02.备份软件体系架构解析
03.备份软件分布式(二级)索引架构
04.数据重删压缩
05.备份介质可靠性
06.虚拟机备份原理和架构
07.备份存储配置原理和实践
08.无代理备份
09.磁带库技术
10.mhvtl --- 虚拟带库
NetBackup
01.安装 master server
02.安装media server 服务器软件
03.配置存储
04.Client 安装
05.NetBackup 备份恢复oracle数据库
06.VMWARE的备份与恢复
07.Storage Lifecycle Policy
08.Netbackup的其它属性设置
08.备份恢复catalog
Generate reissue token
status 13 and errno = 62
Ceph
01.CentOS 7.4 部署 Ceph 集群
02.Ceph 管理命令
03.管理 OSD
04.Ceph RDB
05. Ceph 对象存储
06.Ceph 文件存储
07.Ceph 集群操作和管理
08.Ceph基础知识
09.Ceph 性能测试
10.Ceph 监控
11.Cephfs 元数据库服务器多活
12.Ceph PG
13.OSD
14.Ceph Pool
error and solution
HEALTH_ERR 1 pgs inconsistent; 1 scrub errors
HEALTH_ERR 1 scrub errors
ceph-objectstore-tool
Ceph-dash
HEALTH_WARN clock skew detected
ceph 修改 mon IP 地址
POOL_APP_NOT_ENABLED
daemons have recently crashed
install Ceph-octopus-15.2.13 using ceph-deploy
install Ceph-pacific-15.2.13 using cephadm
install Ceph-pacific-16.2.4 for manual
install Ceph-pacific-16.2.4 using rook
install Ceph-pacific-16.2.4 using rook + yaml
install Ceph-pacific-16.2.4 using rook + Helm
Configuration storage
cephfs metadata
delete pool / cephfs / osd
install rook ceph 1.8.2 using yaml
06.如何使用这块设备、文件系统、对象存储?
02.Block Storage
01.Create Ceph Cluster
05.Ceph Client CRD
03.Ceph Shared Filesystem
04.Ceph Object Store
07.Ceph OSD Management
Deploy Ceph (v17.2.5 Quincy) cluster using Cephadm
Upgrade CEPH with Cephadm
nexus
ClickHouse-Keeper
Tivoli Storage Manager
01.install IBM Spectrum Protect for Linux
08.Installing Data Protection for Oracle
07.多个客户机接收器服务
06.如何实现ORACLE备份?
05.Install client management service
04.Install Linux backup-archive clients
03.install Operations Center
02.IBM Db2 命令
备份linux文件系统测试
Backup & Archive
09.日常维护
GLPI+OCS Inventory ITIL资产管理
CloudCanal
TrueNAS SCALE
01.ACL权限
02.WebDAV
03.TrueCommand
04.Apps
05.Virtualization Tutorials
06.pygvpn 蒲公英
07.heimdall
08.filebrowser
09.自动更新社区应用
Accessing PVC Data
Backup and Restore
Traefik
orabbix-1.2.3 监控 oracle 数据库
zabbix 通过 SNMP V3 监控 华为5720 交换机
Ansible
Ansible Web UI的部署与使用
Ansible 搭建hadoop3.3.4 HA
离线安装 Ansible
Bacula
install Ceph-pacific-16.2.4 using rook + Helm
# 1. Environment configuration

Rook publishes a Helm chart for the operator. Additional Helm charts may also be developed for each CRD of the Rook storage backends.

| Host_Name | IP | OS | KubeSphere_version | kubernetes_version | docker_version | role |
| --- | --- | --- | --- | --- | --- | --- |
| rook-ceph01 | 193.169.100.80 | CentOS 7.6 |  |  |  | k8s-master, etcd, helm3, ceph |
| rook-ceph02 | 193.169.100.81 | CentOS 7.6 |  |  |  | k8s-worker-node, ceph |
| rook-ceph03 | 193.169.100.82 | CentOS 7.6 |  |  |  | k8s-worker-node, ceph |

## 1.1. Disable the firewall and SELinux (all nodes)

```bash
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
```

## 1.2. Configure hosts (all nodes)

Set the hostname:

```bash
nmcli g hostname rook-ceph01
```

Configure hostname resolution:

```bash
cat >> /etc/hosts << EOF
# Ceph-storage
193.169.100.80 rook-ceph01
193.169.100.81 rook-ceph02
193.169.100.82 rook-ceph03
EOF
```

## 1.3. Configure SSH trust (all nodes, optional)

Generate an SSH key pair on the admin node:

```bash
[root@localhost yum.repos.d]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:YCK3yFzTa7qi+10Jwy0Og5wKeUrjMKQfeLZuG0JtnSo root@ceph-stroage01
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|        .        |
|       .. = +    |
|+*o*.*.o         |
|O*@o*o+ S        |
|BBo*.* .         |
|+E+.o o          |
| o+o o           |
|o=+oo            |
+----[SHA256]-----+
[root@localhost yum.repos.d]#
```

Distribute the public key to the other nodes:

```bash
[root@localhost yum.repos.d]# ssh-copy-id ceph-stroage01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'ceph-stroage01 (193.169.100.78)' can't be established.
ECDSA key fingerprint is SHA256:R0iFewa45i6a74ftDpgS5VdhzOm6Cihfd5cIbs7jJPA.
ECDSA key fingerprint is MD5:42:4c:19:03:db:a6:e1:de:e1:98:7d:84:12:7f:b8:36.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ceph-stroage01's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'ceph-stroage01'"
and check to make sure that only the key(s) you wanted were added.

[root@localhost yum.repos.d]#
```

## 1.4. Configure the yum repositories (all nodes, optional)

## 1.5. Network configuration

step 1. Clear the existing NIC configuration

```
ifdown em1
ifdown em2
rm -rf /etc/sysconfig/network-scripts/ifcfg-em1
rm -rf /etc/sysconfig/network-scripts/ifcfg-em2
```

step 2. Create the bond interface

```
nmcli con add type bond con-name bond0 ifname bond0 mode 802.3ad ip4 192.168.60.152/24 gw4 192.168.60.254
```

step 3. Set the bond mode

```
nmcli con mod id bond0 bond.options mode=802.3ad,miimon=100,lacp_rate=fast,xmit_hash_policy=layer2+3
```

step 4. Attach the physical NICs to the bond

```
nmcli con add type bond-slave ifname em1 con-name em1 master bond0
nmcli con add type bond-slave ifname em2 con-name em2 master bond0
```

step 5. Switch the bond interface to a static address

```
vi /etc/sysconfig/network-scripts/ifcfg-bond0
BOOTPROTO=static
```

step 6. Restart NetworkManager

```
systemctl restart NetworkManager
# Display NIC information
nmcli con
```

step 7. Set the hostname and DNS

```
hostnamectl set-hostname worker-1
vim /etc/resolv.conf
```

## 1.6. Time synchronization (all nodes)

step 1. Install chrony on all nodes

```bash
yum install chrony -y
```

step 2. Set the time zone

```bash
timedatectl set-timezone Asia/Shanghai
```

step 3. Configure the time server

```bash
[root@localhost yum.repos.d]# cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 193.169.100.58 iburst

# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16
allow 193.169.100.0/24

# Serve time even if not synchronized to a time source.
#local stratum 10
local stratum 3

# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking
[root@localhost yum.repos.d]#
```

Check whether the NTP server is reachable:

```bash
chronyc activity -v
```

step 4. Configure the clients

```bash
[root@localhost ~]# cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 193.169.100.58 iburst

# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16

# Serve time even if not synchronized to a time source.
#local stratum 10

# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking
[root@localhost ~]#
```

## 1.7. Update the system and dependencies (all nodes, optional)

Run the following commands to update the system packages and install the required dependencies:

```bash
```

## 1.8. System tuning

step 1. Add the required kernel boot arguments (**K8s nodes**)

```
/sbin/grubby --update-kernel=ALL --args='cgroup_enable=memory cgroup.memory=nokmem swapaccount=1'
```

step 2. Enable the overlay2 kernel module (**K8s nodes**)

```
echo "overlay2" | sudo tee -a /etc/modules-load.d/overlay.conf
```

step 3. Refresh the dynamically generated grub2 configuration (**K8s nodes**)

```
sudo grub2-set-default 0
```

step 4. Adjust the kernel parameters and apply the changes (**K8s nodes**)

```
cat <<EOF | sudo tee -a /etc/sysctl.conf
vm.max_map_count = 262144
fs.may_detach_mounts = 1
net.ipv4.ip_forward = 1
vm.swappiness = 1
kernel.pid_max = 1000000
fs.inotify.max_user_instances = 524288
EOF
sudo sysctl -p
```

step 5. Raise the system resource limits (**all nodes**)

```
vim /etc/security/limits.conf
* soft nofile 1024000
* hard nofile 1024000
* soft memlock unlimited
* hard memlock unlimited
root soft nofile 1024000
root hard nofile 1024000
root soft memlock unlimited
```

step 6. Remove the old limits configuration (**all nodes**)

```
sudo rm /etc/security/limits.d/20-nproc.conf
```

step 7. Reboot the system

# 2. Deploy the environment dependencies

## 2.1. Deploy Kubernetes

install k8s for CentOS 7.6: http://dbaselife.com/project-8/doc-667/
0202. Node management: http://dbaselife.com/project-8/doc-666/

## 2.2. Deploy Helm v3.5.4

- Online installation

Helm provides an installer script that automatically fetches the latest version of Helm and installs it locally.

```bash
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.5.4-linux-amd64.tar.gz
Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm
```

Official reference: https://helm.sh/docs/intro/quickstart/

- Download and install

Go to the Helm GitHub release page https://github.com/helm/helm/releases and pick the latest client. Packages are provided for several platforms; here we use Linux amd64 and download it with wget on the Linux system.

step 1. Download the Helm client

```bash
wget https://get.helm.sh/helm-v3.5.4-linux-amd64.tar.gz
tar -zxvf helm-v3.5.4-linux-amd64.tar.gz
```

step 2. Copy the client binary into the bin directory

```
cp linux-amd64/helm /usr/local/bin/helm3
```

step 3. Check the current version

```bash
[root@k8s-master ~]# helm3 version
version.BuildInfo{Version:"v3.5.4", GitCommit:"1b5edb69df3d3a08df77c9902dc17af864ff05d1", GitTreeState:"clean", GoVersion:"go1.15.11"}
[root@k8s-master ~]#
```

# 3. Install rook ceph

The Ceph Operator helm chart installs the basic components necessary to create a storage platform for the Kubernetes cluster:

1. Install the Helm chart
2. Create the Rook cluster
The `helm install` command deploys rook on the Kubernetes cluster with the default configuration. The configuration section lists the parameters that can be set during installation. It is recommended to install the rook operator into the `rook-ceph` namespace.

step 1. Download the source files

[Attachment: rook-1.6.6.zip](/media/attachment/2021/07/rook-1.6.6.zip)

```bash
$ git clone --single-branch --branch v1.6.6 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
```

step 2. Download the images

```bash
# Pull the images
docker pull rook/ceph:v1.6.7
docker pull ceph/ceph:v15.2.13
docker pull quay.io/cephcsi/cephcsi:v3.1.1
docker pull k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0
docker pull k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2
docker pull k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1
docker pull k8s.gcr.io/sig-storage/csi-attacher:v3.2.1
docker pull k8s.gcr.io/sig-storage/csi-resizer:v1.2.0
docker pull quay.io/csiaddons/volumereplication-operator:v0.1.0
docker pull quay.io/cephcsi/cephcsi:v3.3.1

# Save the images
docker save rook/ceph:v1.6.7 -o rook_ceph_v1.6.7.tar.gz
docker save ceph/ceph:v15.2.13 -o ceph_ceph_v15.2.13.tar.gz
docker save quay.io/cephcsi/cephcsi:v3.1.1 -o cephcsi_cephcsi_v3.1.1.tar.gz
docker save k8s.gcr.io/sig-storage/csi-node-driver-registrar:v2.2.0 -o sig-storage_csi-node-driver-registrar_v2.2.0.tar.gz
docker save k8s.gcr.io/sig-storage/csi-provisioner:v2.2.2 -o sig-storage_csi-provisioner_v2.2.2.tar.gz
docker save k8s.gcr.io/sig-storage/csi-snapshotter:v4.1.1 -o sig-storage_csi-snapshotter_v4.1.1.tar.gz
docker save k8s.gcr.io/sig-storage/csi-attacher:v3.2.1 -o sig-storage_csi-attacher_v3.2.1.tar.gz
docker save k8s.gcr.io/sig-storage/csi-resizer:v1.2.0 -o sig-storage_csi-resizer_v1.2.0.tar.gz
docker save quay.io/csiaddons/volumereplication-operator:v0.1.0 -o csiaddons_volumereplication-operator_v0.1.0.tar.gz
docker save quay.io/cephcsi/cephcsi:v3.3.1 -o cephcsi_v3.3.1.tar.gz

# Load the images on the target nodes
docker load -i rook_ceph_v1.6.7.tar.gz
docker load -i ceph_ceph_v15.2.13.tar.gz
docker load -i cephcsi_cephcsi_v3.1.1.tar.gz
docker load -i sig-storage_csi-node-driver-registrar_v2.2.0.tar.gz
docker load -i sig-storage_csi-provisioner_v2.2.2.tar.gz
docker load -i sig-storage_csi-snapshotter_v4.1.1.tar.gz
docker load -i sig-storage_csi-attacher_v3.2.1.tar.gz
docker load -i sig-storage_csi-resizer_v1.2.0.tar.gz
docker load -i csiaddons_volumereplication-operator_v0.1.0.tar.gz
docker load -i cephcsi_v3.3.1.tar.gz
```

## 3.1. Ceph Operator

Rook currently publishes builds of the Ceph operator to the release and master channels. The Ceph Operator helm chart installs the basic components necessary to create a storage platform for the Kubernetes cluster:

1. Install the Helm chart
2. Create the Rook cluster

### 3.1.1. install

step 1. Add the helm repository

```bash
helm3 repo add rook-release https://charts.rook.io/release
```

step 2. Create a CSI configuration file that points the CSI images at mirrors reachable from China

```
cat > csi-conf.yaml << EOF
csi:
  registrar:
    image: raspbernetes/csi-node-driver-registrar:2.0.1
  provisioner:
    image: raspbernetes/csi-external-provisioner:2.0.0
  snapshotter:
    image: raspbernetes/csi-external-snapshotter:2.1.1
  attacher:
    image: raspbernetes/csi-external-attacher:3.0.0
  resizer:
    image: raspbernetes/csi-external-resizer:1.0.0
EOF
```

step 3. Create the namespace

```bash
kubectl create namespace rook-ceph
```

step 4. Edit the chart values

In the downloaded source tree, edit /rook-1.6.6/cluster/charts/rook-ceph/values.yaml, set image.tag to v1.6.7 and pullPolicy to IfNotPresent.

step 5. Run the installation

```bash
cd /root/rook-1.6.6/cluster/charts/rook-ceph
helm3 install --namespace rook-ceph rook-ceph .
```

A yaml file (values.yaml) specifying the parameter values described above can also be provided when installing the chart:

```bash
cd /root/rook-1.6.6/cluster/charts/rook-ceph
helm3 install --namespace rook-ceph rook-ceph . -f ./values.yaml
```

The installation output looks like this:

```bash
[root@rook-ceph01 rook-ceph]# helm3 install --namespace rook-ceph rook-ceph .
W0704 00:08:28.185717   27077 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W0704 00:08:28.601579   27077 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
NAME: rook-ceph
LAST DEPLOYED: Sun Jul  4 00:08:27 2021
NAMESPACE: rook-ceph
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Rook Operator has been installed. Check its status by running:
  kubectl --namespace rook-ceph get pods -l "app=rook-ceph-operator"

Visit https://rook.io/docs/rook/master for instructions on how to create and configure Rook clusters

Important Notes:
- You must customize the 'CephCluster' resource in the sample manifests for your cluster.
- Each CephCluster must be deployed to its own namespace, the samples use `rook-ceph` for the namespace.
- The sample manifests assume you also installed the rook-ceph operator in the `rook-ceph` namespace.
- The helm chart includes all the RBAC required to create a CephCluster CRD in the same namespace.
- Any disk devices you add to the cluster in the 'CephCluster' must be empty (no filesystem and no partitions).
[root@rook-ceph01 rook-ceph]#
```

> Important notes: customize the `CephCluster` resource in the sample manifests for your cluster; deploy each CephCluster to its own namespace (the samples use `rook-ceph`); the chart includes all the RBAC needed to create a CephCluster CRD in the same namespace; any disk devices added to the cluster must be empty (no filesystem and no partitions).

step 6. Check the result

```bash
[root@rook-ceph01 rook-ceph-images]# kubectl --namespace rook-ceph get pods -l "app=rook-ceph-operator"
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-6c548b479b-7mtlf   1/1     Running   0          38m
[root@rook-ceph01 rook-ceph-images]#
```

Check the roles that were created:

```bash
[root@rook-ceph01 rook-ceph-images]# kubectl get role -n rook-ceph
NAME                              CREATED AT
cephfs-external-provisioner-cfg   2021-07-03T18:33:47Z
rbd-external-provisioner-cfg      2021-07-03T18:33:48Z
rook-ceph-cmd-reporter            2021-07-03T18:33:47Z
rook-ceph-mgr                     2021-07-03T18:33:47Z
rook-ceph-osd                     2021-07-03T18:33:47Z
rook-ceph-system                  2021-07-03T18:33:47Z
[root@rook-ceph01 rook-ceph-images]# kubectl get rolebinding -n rook-ceph
NAME                              ROLE                                   AGE
cephfs-csi-provisioner-role-cfg   Role/cephfs-external-provisioner-cfg   39m
rbd-csi-provisioner-role-cfg      Role/rbd-external-provisioner-cfg      39m
rook-ceph-cluster-mgmt            ClusterRole/rook-ceph-cluster-mgmt     39m
rook-ceph-cmd-reporter            Role/rook-ceph-cmd-reporter            39m
rook-ceph-cmd-reporter-psp        ClusterRole/psp:rook                   39m
rook-ceph-default-psp             ClusterRole/psp:rook                   39m
rook-ceph-mgr                     Role/rook-ceph-mgr                     39m
rook-ceph-mgr-psp                 ClusterRole/psp:rook                   39m
rook-ceph-mgr-system              ClusterRole/rook-ceph-mgr-system       39m
rook-ceph-osd                     Role/rook-ceph-osd                     39m
rook-ceph-osd-psp                 ClusterRole/psp:rook                   39m
rook-ceph-system                  Role/rook-ceph-system                  39m
[root@rook-ceph01 rook-ceph-images]#
```

Check the pod security policies:

```bash
[root@rook-ceph01 rook-ceph-images]# kubectl get psp
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
NAME                    PRIV   CAPS        SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
00-rook-ceph-operator   true   SYS_ADMIN   RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            configMap,downwardAPI,emptyDir,persistentVolumeClaim,secret,projected,hostPath,flexVolume
[root@rook-ceph01 rook-ceph-images]# kubectl get sa
NAME      SECRETS   AGE
default   1         25h
[root@rook-ceph01 rook-ceph-images]#
```

### 3.1.2. Uninstalling the Chart

step 1. List the currently installed Rook chart:

```bash
helm ls --namespace rook-ceph
```

step 2. Uninstall/delete the rook-ceph deployment:

```bash
helm delete --namespace rook-ceph rook-ceph
```

This command removes all the Kubernetes components associated with the chart and deletes the release. After uninstalling, you may need to clean up the CRDs as described in the teardown documentation.

## 3.2. Ceph Cluster

Install the Rook Ceph cluster with the Helm package manager. Rook currently publishes builds of this chart to the master channel.

Prerequisites:

- Kubernetes 1.13+
- Helm 3.x
- Preinstalled Rook Operator. See the Helm Operator topic to install.

If the operator was installed in a namespace other than rook-ceph, the namespace must be set in the operatorNamespace variable.

Since the Rook v1.6.5 release, a CephCluster can be configured through a Helm chart, using a command similar to the following:

```bash
helm repo add rook-master https://charts.rook.io/master
helm install --create-namespace --namespace rook-ceph rook-ceph-cluster \
  --set operatorNamespace=rook-ceph rook-master/rook-ceph-cluster -f values-override.yaml
```

This is much more convenient than writing a YAML file by hand every time. Note that this Helm chart is still experimental and is expected to become stable in v1.7.

### 3.2.1. installing

step 1. Edit rook-master/cluster/charts/rook-ceph-cluster/values.yaml

Before installing, review values.yaml to confirm whether the default settings need to be changed.

step 2. Add the helm repository

```bash
helm3 repo add rook-master https://charts.rook.io/master
```

step 3. Run the installation

```bash
cd /root/rook-master/cluster/charts/rook-ceph-cluster
helm3 install --create-namespace --namespace rook-ceph rook-ceph-cluster \
  --set operatorNamespace=rook-ceph rook-ceph-cluster -f values.yaml
```

The installation proceeds as follows:

```bash
[root@rook-ceph01 rook-ceph-cluster]# helm3 install --create-namespace --namespace rook-ceph rook-ceph-cluster --set operatorNamespace=rook-ceph rook-ceph-cluster -f values.yaml
NAME: rook-ceph-cluster
LAST DEPLOYED: Wed Jul  7 23:22:18 2021
NAMESPACE: rook-ceph
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Ceph Cluster has been installed. Check its status by running:
  kubectl --namespace rook-ceph get cephcluster

Visit https://rook.github.io/docs/rook/master/ceph-cluster-crd.html for more information about the Ceph CRD.

Important Notes:
- You can only deploy a single cluster per namespace
- If you wish to delete this cluster and start fresh, you will also have to wipe the OSD disks using `sfdisk`
[root@rook-ceph01 rook-ceph-cluster]# pwd
/root/rook-master/cluster/charts/rook-ceph-cluster
[root@rook-ceph01 rook-ceph-cluster]#
```

> Important notes: only a single cluster can be deployed per namespace; to delete this cluster and start fresh, the OSD disks must also be wiped, e.g. with `sfdisk`.

step 4. Check the cephcluster

```bash
[root@rook-ceph01 rook-ceph-cluster]# kubectl --namespace rook-ceph get cephcluster
NAME        DATADIRHOSTPATH   MONCOUNT   AGE   PHASE         MESSAGE                 HEALTH   EXTERNAL
rook-ceph   /var/lib/rook     3          20m   Progressing   Configuring Ceph Mons
[root@rook-ceph01 rook-ceph-cluster]#
```

step 5. Wait for the initialization to finish

```bash
[root@rook-ceph01 rook-ceph-images]# kubectl --namespace rook-ceph get pods -o wide
NAME                                                    READY   STATUS      RESTARTS   AGE     IP                NODE          NOMINATED NODE   READINESS GATES
rook-ceph-crashcollector-rook-ceph01-5d65f8f4b9-w6phw   1/1     Running     0          2m39s   192.168.172.202   rook-ceph01   <none>           <none>
rook-ceph-crashcollector-rook-ceph02-78b9f9d96d-6rcmh   1/1     Running     0          2m48s   192.168.63.122    rook-ceph02   <none>           <none>
rook-ceph-crashcollector-rook-ceph03-6456dcc567-7brwv   1/1     Running     0          2m17s   192.168.131.168   rook-ceph03   <none>           <none>
rook-ceph-csi-detect-version-6thl8                      0/1     Completed   0          41m     192.168.63.106    rook-ceph02   <none>           <none>
rook-ceph-mgr-a-fcdf86dd9-gmhzj                         1/1     Running     0          2m49s   192.168.131.164   rook-ceph03   <none>           <none>
rook-ceph-mon-a-59487f997d-p2jn6                        1/1     Running     0          4m20s   192.168.131.163   rook-ceph03   <none>           <none>
rook-ceph-mon-b-74f79cf9c5-x758f                        1/1     Running     0          3m48s   192.168.63.121    rook-ceph02   <none>           <none>
rook-ceph-mon-c-7f56ffbdd8-bcpg8                        1/1     Running     0          3m24s   192.168.172.201   rook-ceph01   <none>           <none>
rook-ceph-operator-6c548b479b-f4xd2                     1/1     Running     0          97m     192.168.63.105    rook-ceph02   <none>           <none>
rook-ceph-osd-0-58874d8dfb-vj7fs                        1/1     Running     0          2m18s   192.168.131.167   rook-ceph03   <none>           <none>
rook-ceph-osd-1-64cbbdf44-wdqpm                         1/1     Running     0          2m13s   192.168.63.124    rook-ceph02   <none>           <none>
rook-ceph-osd-2-7465c5bdff-wq6d9                        1/1     Running     0          2m11s   192.168.172.204   rook-ceph01   <none>           <none>
rook-ceph-osd-prepare-rook-ceph01-9s84l                 0/1     Completed   0          51s     192.168.172.205   rook-ceph01   <none>           <none>
rook-ceph-osd-prepare-rook-ceph02-bpstv                 0/1     Completed   0          49s     192.168.63.126    rook-ceph02   <none>           <none>
rook-ceph-osd-prepare-rook-ceph03-vpkz6                 0/1     Completed   0          46s     192.168.131.169   rook-ceph03   <none>           <none>
[root@rook-ceph01 rook-ceph-images]#
```

Check the services:

```bash
[root@rook-ceph01 rook-ceph-cluster]# kubectl -n rook-ceph get service
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
rook-ceph-mgr             ClusterIP   10.10.211.135   <none>        9283/TCP            6m15s
rook-ceph-mgr-dashboard   ClusterIP   10.10.217.50    <none>        8443/TCP            6m15s
rook-ceph-mon-a           ClusterIP   10.10.250.70    <none>        6789/TCP,3300/TCP   7m56s
rook-ceph-mon-b           ClusterIP   10.10.56.132    <none>        6789/TCP,3300/TCP   7m24s
rook-ceph-mon-c           ClusterIP   10.10.169.87    <none>        6789/TCP,3300/TCP   7m
[root@rook-ceph01 rook-ceph-cluster]#
```

### 3.2.2. Uninstalling the Chart

List the currently installed Rook chart:

```bash
helm ls --namespace rook-ceph
```

To uninstall/delete the rook-ceph-cluster chart:

```bash
helm delete --namespace rook-ceph rook-ceph-cluster
```

This command removes all the Kubernetes components associated with the chart and deletes the release. Deleting the cluster chart does not remove the Rook operator, and all data on the hosts in the Rook data directory (/var/lib/rook by default) and on the OSD raw devices is preserved. To reuse the disks, you must wipe them before recreating the cluster.

# 4. Install the toolbox

See the relevant section in: http://dbaselife.com/modify_doc/760/

# 5. Configure the ceph dashboard

See the relevant section in: http://dbaselife.com/modify_doc/760/
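Sections 4 and 5 refer to an external document. For quick verification, the following is a minimal sketch that assumes the Rook v1.6 defaults: the `toolbox.yaml` example manifest from the downloaded rook-1.6.6 source tree, the `rook-ceph-tools` deployment it creates, and the default `rook-ceph-dashboard-password` secret; the `rook-ceph-mgr-dashboard` service is the one shown in the service listing above.

```bash
# Deploy the toolbox pod from the example manifest shipped with the rook source tree
cd /root/rook-1.6.6/cluster/examples/kubernetes/ceph
kubectl create -f toolbox.yaml

# Wait for the toolbox to become ready, then check cluster health from inside it
kubectl -n rook-ceph rollout status deploy/rook-ceph-tools
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd tree

# Dashboard: read the generated admin password and reach the
# rook-ceph-mgr-dashboard service (port 8443) via a local port-forward
kubectl -n rook-ceph get secret rook-ceph-dashboard-password \
  -o jsonpath="{['data']['password']}" | base64 --decode; echo
kubectl -n rook-ceph port-forward svc/rook-ceph-mgr-dashboard 8443:8443
```

With the port-forward running, the dashboard should be reachable at https://localhost:8443 using the `admin` user and the decoded password.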