[Preface] What you need to know

1. Even if you are not familiar with Ansible, the whole procedure takes close to two hours from start to finish, and only four commands have to be run by hand.
2. The base environment is prepared with Ansible.
3. Kubernetes is installed with the RKE cluster management tool, and the custom pod resources are created from Helm templates.
4. The whole installation runs offline, but the packages have to be downloaded in advance. I provide ready-made packages, together with a network-drive link.
5. The article is split into four parts: preparing the servers and installing Ansible, pushing the offline images into the Harbor registry, installing the Kubernetes cluster, and problems you may run into during deployment.
Environment requirements

  OS:       CentOS / Red Hat 7.5+
  OpenSSH:  6.8+
Component versions inside the Kubernetes cluster

  Component                                   Version
  kubernetes                                  v1.17.5
  calico-cni                                  v3.13.0
  coredns-coredns                             1.6.5
  coreos-etcd                                 v3.4.3-rancher1
  coreos-flannel                              v0.11.0-rancher1
  fluentd                                     v0.1.19
  hyperkube                                   v1.17.5-rancher1
  istio-kubectl                               1.4.7
  nginx-ingress-controller-defaultbackend     1.5-rancher1
  pause                                       3.1
  kubectl                                     v1.17.0
  …                                           …
Deployment environment

  Setting      Status
  firewalld    disabled
  selinux      disabled
  iptables     cleared
  swap         disabled
  UseDNS       off
Deployment tools

  Item                 Tool
  base environment     ansible v2.9.7
  kubernetes           rke v1.1
  private registry     Harbor 1.5
  dashboard            Rancher 2.4.3
  pod resources        Helm 3.0
The offline installation packages have been uploaded to my personal Baidu drive:
https://pan.baidu.com/s/11nImNp4x5qXAuHT3GWrWww  (extraction code: ith3)

- rancher-images.tar.gz: the offline image bundle for Kubernetes, about 3.88 GB.
- rancher-save-images.sh and rancher-images.txt: if the download is too slow for you, these two scripts let you pull the images needed for the offline Kubernetes installation yourself (pulling the images does, of course, require network access); see the sketch right after this list.
- rancher-load-images.sh: loads the local images and pushes them into the Harbor registry.
- ansible.tar: the offline installation files for Ansible, 21.56 MB.
- ansible: the directory containing the finished playbooks; download it and run it as-is. It is about 1.3 GB, because it bundles the fairly large Harbor 1.5 offline installer.
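For reference, this is roughly how the two download scripts are meant to be used on a machine that does have internet access and Docker installed. It follows Rancher's standard air-gap tooling, so double-check the flags against the copies you actually downloaded:

chmod +x rancher-save-images.sh
# pull every image listed in rancher-images.txt and save them all into rancher-images.tar.gz
./rancher-save-images.sh --image-list ./rancher-images.txt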
Part One: Prepare three or more servers

Three local virtual machines are used for this walkthrough:
  IP               CPU       MEM    Node role
  192.168.91.231   4 cores   4 GB   Master
  192.168.91.232   4 cores   4 GB   Master
  192.168.91.233   4 cores   4 GB   Master
  192.168.91.233   4 cores   4 GB   Harbor
1. Install Ansible

Download the required packages locally, pick any node of the Kubernetes cluster to act as the Ansible host, and upload the Ansible package to that server. Here 192.168.91.231 is used as the Ansible host. Create an ansible directory under /mnt/ and upload ansible.tar into it:

[root@localhost mnt]# mkdir /mnt/ansible
[root@localhost mnt]# tree .
.
└── ansible
    └── ansible.tar

1 directory, 1 file
Run the install command:

[root@localhost mnt]# cd /mnt/ansible && tar -xvf *.tar && rpm -ivh *.rpm --force --nodeps
Ansible is now installed:

[root@localhost ansible]# ansible --version
ansible 2.9.7
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Oct 30 2018, 23:45:53) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
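If you would rather build ansible.tar yourself on a machine with internet access, a minimal sketch looks like this (assuming a CentOS 7 host with the EPEL repository reachable; the directory name is just an example):

# download ansible plus all of its dependency RPMs
yum install -y epel-release yum-utils
yumdownloader --resolve --destdir=./ansible-rpms ansible
# bundle them up for the offline host
tar -cvf ansible.tar -C ./ansible-rpms .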
2. Copy the downloaded Ansible playbooks into the server's ansible directory

Ansible's default files live under /etc/ansible. The default contents right after installation:
[root@localhost ansible]# tree .
.
├── ansible.cfg
├── hosts
└── roles

1 directory, 2 files

Back up the original ansible.cfg, then upload the offline files downloaded from the Baidu drive into this directory, everything except the hosts file. Afterwards it looks like this:
[root@k1 ansible]# tree . ├── ansible.cfg ├── ansible.cfg.bak ├── hosts ├── k8sinstall.yml ├── postposition.yml ├── preposition.yml └── roles ├── docker │ ├── files │ │ └── docker-19.03.0 │ │ ├── bin │ │ │ ├── containerd │ │ │ ├── containerd-shim │ │ │ ├── ctr │ │ │ ├── docker │ │ │ ├── dockerd │ │ │ ├── docker-init │ │ │ ├── docker-proxy │ │ │ └── runc │ │ └── docker.service │ ├── tasks │ │ ├── main.yml │ │ └── main.yml.bak1 │ └── templates │ └── daemon.json ├── env │ ├── files │ ├── tasks │ │ └── main.yml │ └── templates │ └── hosts.j2 ├── harbor │ ├── files │ │ └── harbor1.5 │ │ ├── common │ │ │ ├── config │ │ │ │ ├── adminserver │ │ │ │ │ └── env │ │ │ │ ├── db │ │ │ │ │ └── env │ │ │ │ ├── jobservice │ │ │ │ │ ├── config.yml │ │ │ │ │ └── env │ │ │ │ ├── log │ │ │ │ │ └── logrotate.conf │ │ │ │ ├── nginx │ │ │ │ │ ├── client_body_temp │ │ │ │ │ ├── conf.d │ │ │ │ │ ├── fastcgi_temp │ │ │ │ │ ├── nginx.conf │ │ │ │ │ ├── proxy_temp │ │ │ │ │ ├── scgi_temp │ │ │ │ │ └── uwsgi_temp │ │ │ │ ├── registry │ │ │ │ │ ├── config.yml │ │ │ │ │ └── root.crt │ │ │ │ └── ui │ │ │ │ ├── app.conf │ │ │ │ ├── certificates │ │ │ │ ├── env │ │ │ │ └── private_key.pem │ │ │ └── templates │ │ │ ├── adminserver │ │ │ │ └── env │ │ │ ├── clair │ │ │ │ ├── clair_env │ │ │ │ ├── config.yaml │ │ │ │ ├── postgres_env │ │ │ │ └── postgresql-init.d │ │ │ │ └── README.md │ │ │ ├── db │ │ │ │ └── env │ │ │ ├── jobservice │ │ │ │ ├── config.yml │ │ │ │ └── env │ │ │ ├── log │ │ │ │ └── logrotate.conf │ │ │ ├── nginx │ │ │ │ ├── nginx.http.conf │ │ │ │ ├── nginx.https.conf │ │ │ │ ├── notary.server.conf │ │ │ │ └── notary.upstream.conf │ │ │ ├── notary │ │ │ │ ├── mysql-initdb.d │ │ │ │ │ ├── initial-notaryserver.sql │ │ │ │ │ └── initial-notarysigner.sql │ │ │ │ ├── notary-signer-ca.crt │ │ │ │ ├── notary-signer.crt │ │ │ │ ├── notary-signer.key │ │ │ │ ├── server-config.json │ │ │ │ ├── signer-config.json │ │ │ │ └── signer_env │ │ │ ├── registry │ │ │ │ ├── config_ha.yml │ │ │ │ ├── config.yml │ │ │ │ └── root.crt │ │ │ └── ui │ │ │ ├── app.conf │ │ │ ├── env │ │ │ └── private_key.pem │ │ ├── docker-compose │ │ ├── docker-compose.clair.yml │ │ ├── docker-compose.notary.yml │ │ ├── docker-compose.yml │ │ ├── ha │ │ │ ├── docker-compose.clair.tpl │ │ │ ├── docker-compose.clair.yml │ │ │ ├── docker-compose.tpl │ │ │ ├── docker-compose.yml │ │ │ ├── registry.sql │ │ │ └── sample │ │ │ ├── active_active │ │ │ │ ├── check.sh │ │ │ │ └── keepalived_active_active.conf │ │ │ └── active_standby │ │ │ ├── check_harbor.sh │ │ │ └── keepalived_active_standby.conf │ │ ├── harbor.cfg │ │ ├── harbor.v1.5.0.tar.gz │ │ ├── install.sh │ │ ├── LICENSE │ │ ├── NOTICE │ │ ├── prepare │ │ └── start.sh │ ├── tasks │ │ └── main.yml │ └── templates │ └── harbor.cfg.j2 ├── k8s │ ├── files │ │ ├── create_self-signed-cert.sh │ │ ├── rancher2.4.3 │ │ │ ├── Chart.yaml │ │ │ ├── rancherInstall.sh │ │ │ ├── templates │ │ │ │ ├── clusterRoleBinding.yaml │ │ │ │ ├── deployment.yaml │ │ │ │ ├── _helpers.tpl │ │ │ │ ├── ingress.yaml │ │ │ │ ├── issuer-letsEncrypt.yaml │ │ │ │ ├── issuer-rancher.yaml │ │ │ │ ├── NOTES.txt │ │ │ │ ├── serviceAccount.yaml │ │ │ │ └── service.yaml │ │ │ └── values.yaml │ │ └── rkeup.sh │ ├── tasks │ │ └── main.yml │ └── templates │ ├── caproduction.sh.j2 │ └── helminstallrancher.sh.j2 ├── rke │ ├── files │ │ └── rkesh │ │ ├── helm │ │ ├── kubectl │ │ ├── rke_linux-amd64 │ │ └── rkeup.sh │ ├── tasks │ │ └── main.yml │ └── templates │ ├── helm │ ├── login.sh.j2 │ ├── rancher-cluster.yml │ └── rke_linux-amd64 ├── rkepost │ 
├── files │ │ └── config │ ├── tasks │ │ └── main.yml │ └── templates └── ssh ├── files ├── tasks │ └── main.yml └── templates 67 directories, 109 files
Use the hosts file downloaded from the Baidu drive as a template and modify the hosts file on the server. The template contents:

[Allip]
10.96.240.5
10.96.240.6
10.96.240.7
10.96.240.8

[master]
10.96.240.5 hostname=k1 nodename=10.96.240.5
10.96.240.6 hostname=k2 nodename=10.96.240.6
10.96.240.7 hostname=k3 nodename=10.96.240.7

[worker]
10.96.240.8 hostname=k4 nodename=10.96.240.8

[Harbor]
192.168.91.233

[AllNodes:children]
master
Harbor
worker

[ssh]
10.96.240.5 ansible_ssh_user=root ansible_ssh_pass=123 ansible_sudo_pass=123
10.96.240.6 ansible_ssh_user=root ansible_ssh_pass=123 ansible_sudo_pass=123
10.96.240.7 ansible_ssh_user=root ansible_ssh_pass=123 ansible_sudo_pass=123
10.96.240.8 ansible_ssh_user=root ansible_ssh_pass=123 ansible_sudo_pass=123

[ansible]
10.96.240.5

[rke:children]
ansible
Harbor
The modified contents for the local servers:

[Allip]
192.168.91.231
192.168.91.232
192.168.91.233

[master]
192.168.91.231 hostname=k1 nodename=192.168.91.231
192.168.91.232 hostname=k2 nodename=192.168.91.232
192.168.91.233 hostname=k3 nodename=192.168.91.233

[worker]
# There is no dedicated worker node here, so this group is left empty.
# If you have worker nodes, add them in the same format as the master entries.
# Do not comment out the group header itself, or the later scripts will fail.

[Harbor]
192.168.91.233

[AllNodes:children]
master
Harbor

[ssh]
192.168.91.231 ansible_ssh_user=root ansible_ssh_pass=123 ansible_sudo_pass=123
192.168.91.232 ansible_ssh_user=root ansible_ssh_pass=123 ansible_sudo_pass=123
192.168.91.233 ansible_ssh_user=root ansible_ssh_pass=123 ansible_sudo_pass=123

[ansible]
192.168.91.231

[rke:children]
ansible
Harbor
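Before going any further you can let Ansible show how it parsed the edited inventory; ansible-inventory ships with Ansible 2.9, so this needs nothing extra:

# print the host/group layout that ansible sees
ansible-inventory -i /etc/ansible/hosts --graph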
3. Set up SSH from the Ansible host's root user to the other machines
[root@localhost ansible]# ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa -q
[root@localhost ansible]# ssh-copy-id 192.168.91.231
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.91.231 (192.168.91.231)' can't be established.
ECDSA key fingerprint is SHA256:a9m0kn3f82CgKbeaExfsyI9gA1eNtjNxHrsF7Pt6c8E.
ECDSA key fingerprint is MD5:6e:a7:9f:98:4b:30:15:0c:81:cb:8c:12:21:4a:d7:a2.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.91.231's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '192.168.91.231'"
and check to make sure that only the key(s) you wanted were added.

[root@localhost ansible]# ssh-copy-id 192.168.91.232
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.91.232 (192.168.91.232)' can't be established.
ECDSA key fingerprint is SHA256:cWBnOseD+d8luVa9rpoxpyymVjsQ0UKgEpOYmGvf8s4.
ECDSA key fingerprint is MD5:f2:23:d6:cb:f6:a5:d0:23:8b:3e:91:3d:64:3d:60:aa.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.91.232's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '192.168.91.232'"
and check to make sure that only the key(s) you wanted were added.

[root@localhost ansible]# ssh-copy-id 192.168.91.233
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.91.233 (192.168.91.233)' can't be established.
ECDSA key fingerprint is SHA256:mlX1xTguF377Tz93YicCJOSFN3Byl3iJJK80kXuvF0U.
ECDSA key fingerprint is MD5:c6:48:24:ae:ad:d8:ab:a6:6d:a9:d7:01:eb:1e:9c:2d.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.91.233's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '192.168.91.233'"
and check to make sure that only the key(s) you wanted were added.
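With the keys distributed, a quick connectivity check with the standard ping module should report pong from every host in the inventory:

# every node in the Allip group should answer with "pong"
ansible -i /etc/ansible/hosts Allip -m ping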
4. Run the Ansible playbook
[root@localhost ansible]# ansible-playbook preposition.yml

Be patient and wait around 20 minutes; this playbook takes care of the base-environment preparation described in the preface, on every node.
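If you want to see what the playbook is going to do before letting it run for 20 minutes, the standard ansible-playbook options are enough; a small sketch:

# list the tasks without executing anything
ansible-playbook preposition.yml --list-tasks
# check the playbook for syntax errors only
ansible-playbook preposition.yml --syntax-check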
Part Two: Push the Kubernetes offline images into the private Harbor registry

It is assumed that every server here is a node of the Kubernetes cluster. Upload the image tar.gz bundle and the scripts downloaded from the Baidu drive to any Kubernetes node; I upload them to the server that hosts the Harbor registry and place them under /mnt/.

1. Log in to Harbor and create a project named rancher, setting it to public. The account is admin and the password is Harbor12345; we are using the defaults here.
2. Switch to the directory holding the bundle and the scripts, then push the images; be patient, this takes about 20 minutes:
[root@k3 mnt]# cd /mnt && chmod +x rancher-load-images.sh && ./rancher-load-images.sh -l ./rancher-images.txt -i rancher-images.tar.gz -r 192.168.91.233:88
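To spot-check that the push actually worked, you can try pulling one image back out of Harbor. The image name below is only an illustrative guess at how the load script retags images; pick any real entry from rancher-images.txt instead:

# log in with the default credentials used above
docker login 192.168.91.233:88 -u admin -p Harbor12345
# pull back one of the pushed images as a smoke test (hypothetical example name)
docker pull 192.168.91.233:88/rancher/rancher-agent:v2.4.3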
Part Three: Install the Kubernetes cluster

Configure passwordless SSH between all servers for the rancher user.
It is recommended to do this on every node; the machine that runs the installation commands must have it. If you only set it up on one machine, use the one where Ansible is installed.

[root@k1 ansible]# su rancher
[rancher@k1 ansible]$ ssh-keygen -t rsa -N '' -f /home/rancher/.ssh/id_rsa -q
[rancher@k1 ansible]$ ssh-copy-id -i /home/rancher/.ssh/id_rsa.pub rancher@192.168.91.231
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/rancher/.ssh/id_rsa.pub"
The authenticity of host '192.168.91.231 (192.168.91.231)' can't be established.
ECDSA key fingerprint is SHA256:a9m0kn3f82CgKbeaExfsyI9gA1eNtjNxHrsF7Pt6c8E.
ECDSA key fingerprint is MD5:6e:a7:9f:98:4b:30:15:0c:81:cb:8c:12:21:4a:d7:a2.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
rancher@192.168.91.231's password:
Permission denied, please try again.
rancher@192.168.91.231's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'rancher@192.168.91.231'"
and check to make sure that only the key(s) you wanted were added.

[rancher@k1 ansible]$ ssh-copy-id -i /home/rancher/.ssh/id_rsa.pub rancher@192.168.91.232
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/rancher/.ssh/id_rsa.pub"
The authenticity of host '192.168.91.232 (192.168.91.232)' can't be established.
ECDSA key fingerprint is SHA256:cWBnOseD+d8luVa9rpoxpyymVjsQ0UKgEpOYmGvf8s4.
ECDSA key fingerprint is MD5:f2:23:d6:cb:f6:a5:d0:23:8b:3e:91:3d:64:3d:60:aa.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
rancher@192.168.91.232's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'rancher@192.168.91.232'"
and check to make sure that only the key(s) you wanted were added.

[rancher@k1 ansible]$ ssh-copy-id -i /home/rancher/.ssh/id_rsa.pub rancher@192.168.91.233
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/rancher/.ssh/id_rsa.pub"
The authenticity of host '192.168.91.233 (192.168.91.233)' can't be established.
ECDSA key fingerprint is SHA256:mlX1xTguF377Tz93YicCJOSFN3Byl3iJJK80kXuvF0U.
ECDSA key fingerprint is MD5:c6:48:24:ae:ad:d8:ab:a6:6d:a9:d7:01:eb:1e:9c:2d.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
rancher@192.168.91.233's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'rancher@192.168.91.233'"
and check to make sure that only the key(s) you wanted were added.
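Because the same commands have to be repeated for every node, a small loop does the same job with less typing (the same ssh-copy-id calls as above, just compacted; adjust the IP list to your own cluster):

# run these as the rancher user on each node
su - rancher
ssh-keygen -t rsa -N '' -f /home/rancher/.ssh/id_rsa -q
for ip in 192.168.91.231 192.168.91.232 192.168.91.233; do
    ssh-copy-id -i /home/rancher/.ssh/id_rsa.pub rancher@$ip
done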
Once the SSH trust is in place, continue with the playbook that installs the Kubernetes cluster:

[root@k1 ansible]# ansible-playbook k8sinstall.yml

Wait roughly 20 minutes and the installation completes.
After the installation finishes, continue with the post-installation playbook, which deploys Rancher on top of the cluster:

[root@k1 ansible]# ansible-playbook postposition.yml

At this point the Kubernetes cluster and Rancher are fully installed, and every server can run kubectl commands.
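To confirm that the cluster and Rancher really came up, a few standard checks from any node will do. I assume here that Rancher was deployed into the cattle-system namespace, the conventional namespace for the Rancher Helm chart; adjust it if the playbook used a different one:

# all nodes should be Ready
kubectl get nodes -o wide
# system pods and the Rancher pods should be Running
kubectl get pods -A
kubectl -n cattle-system get pods
# the Helm release created for Rancher
helm ls -n cattle-system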
Part Four: Problems you may run into during deployment
1. The Docker version

Keep your Docker version the same as mine (the playbook ships the Docker 19.03.0 binaries), or use any version above 17.0.0.
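A quick way to see what each node is actually running, using nothing but the standard Docker CLI:

# print the Docker server version on this node
docker version --format '{{.Server.Version}}'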
2. The firewall

Since this is an internal network, I simply turned the firewall off everywhere. If you must keep the firewall enabled, you need to open the following ports:
  Protocol   Port          Purpose
  tcp        22            ssh
  tcp        80            Rancher Server / ingress
  tcp        443           Rancher Server / ingress
  tcp        6443          Kubernetes apiserver
  tcp        2379-2380     etcd server client API
  tcp        10250-10256   Kubernetes components
  tcp        30000-32767   NodePort services
  udp        8472          canal
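If you do keep firewalld enabled, a sketch of opening these ports with the stock firewall-cmd tooling (run it on every node, then reload):

# open the ports from the table above
for p in 22/tcp 80/tcp 443/tcp 6443/tcp 2379-2380/tcp 10250-10256/tcp 30000-32767/tcp 8472/udp; do
    firewall-cmd --permanent --add-port=$p
done
firewall-cmd --reload
firewall-cmd --list-ports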
3. Environments that have been deployed more than once

An environment that has gone through several deployments must have the leftovers of the previous attempts cleaned up first:
# unmount any leftover kubelet volume mounts
df -h | grep kubelet | awk -F % '{print $2}' | xargs umount
# remove state left behind by kubernetes, rancher, etcd and cni
rm /var/lib/kubelet/* -rf
rm /etc/kubernetes/* -rf
rm /var/lib/rancher/* -rf
rm /var/lib/etcd/* -rf
rm /var/lib/cni/* -rf
# flush iptables rules and delete the flannel interface
iptables -F && iptables -t nat -F
ip link del flannel.1
# remove all containers and volumes
docker ps -a | awk '{print $1}' | xargs docker rm -f
docker volume ls | awk '{print $2}' | xargs docker volume rm
4. The OpenSSH version

This problem usually appears while the Kubernetes cluster is being installed: you will see an error like "Failed to dial to /var/run/docker.sock". The message explicitly lists a number of things that might resolve it, one of which is that the OpenSSH version has to be at least 6.8; I still recommend upgrading OpenSSH to 7.0 or later.
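A couple of quick checks that usually narrow this error down; alongside the OpenSSH version, a disabled AllowTcpForwarding and a user without Docker rights are the usual culprits (assuming a stock sshd configuration):

# check the installed OpenSSH version
ssh -V
# tunnelling to the docker socket requires TCP forwarding to be allowed in sshd
grep -i 'AllowTcpForwarding' /etc/ssh/sshd_config
# the ssh user that rke connects as must be able to talk to docker
su - rancher -c 'docker ps > /dev/null && echo "rancher can reach docker"'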
5. The rancher user that is created

On CentOS/Red Hat, the root user is not allowed to perform SSH tunnel operations, so we need an ordinary user, and it has to be added to the docker group. You can of course use a user of your own, but it must have permission to operate Docker.
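The playbooks already create this user (it exists by the time you su to it in Part Three), but if you ever need to create it by hand, or want to use a name of your own, the usual steps look like this; "rancher" is simply the name used throughout this article:

# create the user and give it access to the Docker daemon
useradd rancher
passwd rancher
usermod -aG docker rancher
# verify the group membership (log in again for it to take effect)
id rancher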
[Closing]

If you run into any problems while following along, or spot a mistake in the article, feel free to leave a comment or send me a message and I will correct it promptly.
[And finally]

Then wear the gold hat, if that will move her;
If you can bounce high, bounce for her too,
Till she cry "Lover, gold-hatted, high-bouncing lover,
I must have you!"

— Thomas Parke D'Invilliers (epigraph to F. Scott Fitzgerald's The Great Gatsby)