1. Component status check

Kubernetes cluster installation and configuration

CentOS 7.3 Kubernetes cluster deployment

Kubernetes is a distributed cluster system from Google built on Docker; it is made up of the following main components.

  1. Master node:

Kubernetes cluster components:

Date: 2017-11-21  Views: 308

etcd: highly available storage for shared configuration and service discovery

root>> kubectl get cs

- Master node

1. Environment introduction and preparation

flannel: network fabric support


  - etcd: a highly available K/V key-value store and service discovery system

1.1 Physical machine operating system

kube-apiserver: whether the cluster is driven through kubectl or the remote API directly, every call goes through the apiserver

  1. Node:

  - kube-apiserver: serves the API calls for the Kubernetes cluster

  The physical machines run CentOS 7.3 64-bit; details are as follows.

kube-controller-manager: runs the control loops for the replication controller, endpoints controller, namespace controller, and serviceaccounts controller, interacting with kube-apiserver to keep these controllers working

  - kube-controller-manager: keeps the cluster's services in their desired state

[root@localhost ~]# uname -a

kube-scheduler: the Kubernetes scheduler assigns pods to designated worker nodes (minions) according to a specific scheduling algorithm; this process is also called binding (bind)

 

  - kube-scheduler: schedules containers and assigns them to Nodes

Linux localhost.localdomain 3.10.0-514.6.1.el7.x86_64 #1 SMP Wed Jan 18 13:06:36 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

kubelet: runs on each Kubernetes minion node; it is the logical successor of the container agent

2. Service status check

- Minion node

[root@localhost ~]# cat /etc/redhat-release 

kube-proxy: a Kubernetes component that runs on the minion nodes and plays the role of a service proxy

  1. Master node:

  - flannel: enables cross-host container network communication

CentOS Linux release 7.3.1611 (Core)

Kubernetes architecture diagram (image omitted).

root>> systemctl status etcd
root>> systemctl status kube-apiserver
root>> systemctl status kube-controller-manager
root>> systemctl status kube-scheduler

  - kubelet: starts containers on the Node according to the container specs defined in the configuration files

1.2 Host information


  - kube-proxy: provides network proxy services

  Three machines are prepared for the k8s runtime environment; details are as follows:

Environment: CentOS 7 x86_64

 

Cluster diagram (image omitted)


Download address:

  1. Node:

  Kubernetes works in a server-client model: the Kubernetes Master provides centralized management of the Minions.

1.3 Environment preparation

master:192.168.50.130

root>> systemctl status flanneld
root>> systemctl status kube-proxy
root>> systemctl status kubelet
root>> systemctl status docker

Deploy 1 Kubernetes Master node and 3 Minion nodes.

1.3.1 Change hostnames

monion01:192.168.50.131


192.168.137.142  cmmaster

Master:

monion02:192.168.50.132

 

192.168.137.148  cmnode1

[root@localhost ~]# hostnamectl --static set-hostname k8s-master

monion03:192.168.50.133
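With four machines in play, it helps to give every host name resolution for the others. A minimal sketch, using the hostnames and addresses from this walkthrough; it writes the entries to a scratch file `hosts.cluster` so they can be reviewed before being appended to /etc/hosts on each node.

```shell
# Generate /etc/hosts entries for the four machines in this walkthrough
# (hostnames are illustrative; append the output to /etc/hosts on every node)
cat > hosts.cluster <<'EOF'
192.168.50.130 master
192.168.50.131 monion01
192.168.50.132 monion02
192.168.50.133 monion03
EOF
cat hosts.cluster
```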

3. Process status check

192.168.137.199  cmnode2

Node1:

Master deployment:

  1. Master node:

192.168.137.212  cmnode3

[root@localhost ~]# hostnamectl --static set-hostname k8s-node1

1. Stop the firewall

root>> ps -ef | grep etcd

Install the EPEL repository on all nodes

Node2:

#systemctl stop firewalld

root>> yum list installed | grep kube

# yum -y install epel-release

[root@localhost ~]#  hostnamectl –static set-hostname  k8s-node2

#systemctl disable firewalld


Install and configure the Kubernetes Master, on the Master node

1.3.2 Install docker and iptables

2. Disable SELinux

 

1. Install etcd and kubernetes-master with yum

yum install docker iptables-services.x86_64 -y

setenforce 0

  1. Node:

# yum -y install etcd kubernetes-master

1.3.3 Disable the default firewalld, start iptables, and clear the default rules

3. Install NTP

root>> ps -ef | grep flannel

2. Edit the /etc/etcd/etcd.conf file

systemctl stop firewalld

yum -y install ntp

root>> ps -ef | grep kube

ETCD_NAME=default

systemctl disable firewalld

systemctl start ntpd


ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

systemctl start iptables

systemctl enable ntpd

 

ETCD_LISTEN_CLIENT_URLS=""

systemctl enable iptables

4. Install etcd and kubernetes

 

ETCD_ADVERTISE_CLIENT_URLS=""

iptables -F

yum -y install etcd kubernetes

4. Installed-package check

3. Edit the /etc/kubernetes/apiserver file

service iptables save

5. Modify the etcd configuration file

  1. Master node:

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

1.3.4 Start docker and enable it at boot

vi /etc/etcd/etcd.conf

root>> yum list installed | grep etcd

KUBE_API_PORT="--port=8080"

 systemctl start docker

ETCD_NAME=default

root>> yum list installed | grep kube

KUBELET_PORT="--kubelet-port=10250"

 systemctl enable docker

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"


KUBE_ETCD_SERVERS="--etcd-servers="

2. K8s cluster deployment

ETCD_LISTEN_CLIENT_URLS=""

 

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

MASTER

ETCD_ADVERTISE_CLIENT_URLS=""

  1. Node:

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

2.1 Install kubernetes and etcd on the master

6. Modify the kubernetes apiserver config

root>> yum list installed | grep flannel

KUBE_API_ARGS=""

[root@k8s-master ~]# yum install kubernetes etcd -y

vi /etc/kubernetes/apiserver

root>> yum list installed | grep kube

4. Start the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler services, and enable them at boot.

2.2 Configuration

KUBE_API_ADDRESS="--address=0.0.0.0"


Start the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler services, and enable them at boot.

etcd

KUBE_API_PORT="--port=8080"

 

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
  systemctl restart $SERVICES
  systemctl enable $SERVICES
  systemctl status $SERVICES
done

# Modify

KUBELET_PORT="--kubelet_port=10250"

 

# systemctl status etcd.service

[root@k8s-master ~]# cd /etc/etcd/

KUBE_ETCD_SERVERS="--etcd_servers="

5. Appendix: after a failed first k8s cluster installation, these are some of the environment-reset commands used when reinstalling k8s.

# systemctl status kube-apiserver.service

[root@k8s-master etcd]# vim etcd.conf 

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

5.1 Master node

# systemctl status kube-controller-manager.service

9:ETCD_LISTEN_CLIENT_URLS=""

KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

  1. Uninstall the previous components

# systemctl status kube-scheduler.service

20:ETCD_ADVERTISE_CLIENT_URLS=""

KUBE_API_ARGS=""

[root@CNT7XDCK01 ~]# yum list installed | grep kube  # first, query the installed components
kubernetes-client.x86_64                1.5.2-0.7.git269f928.el7       @extras
kubernetes-master.x86_64                1.5.2-0.7.git269f928.el7       @extras
[root@CNT7XDCK01 ~]# yum remove -y kubernetes-client.x86_64
[root@CNT7XDCK01 ~]# yum remove -y kubernetes-master.x86_64

5. Define the flannel network in etcd

# Start and enable at boot

7. Start kube-apiserver, kube-controller-manager, and kube-scheduler


[root@cmmaster ~]# etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
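Before (or after) writing this key, it is worth checking that the JSON is well formed and carries the CIDR you expect; a typo in the Network value is a common failure mode. A small sketch using only sed; on a live master you could instead read the key back with `etcdctl get /atomic.io/network/config`.

```shell
# Extract the Network CIDR from the flannel config JSON and show it
cfg='{"Network":"172.17.0.0/16"}'
cidr=$(printf '%s' "$cfg" | sed -n 's/.*"Network" *: *"\([^"]*\)".*/\1/p')
echo "$cidr"   # 172.17.0.0/16
```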

[root@k8s-master etcd]# systemctl start etcd

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do

 

Install and configure the Kubernetes Node

[root@k8s-master etcd]# systemctl enable etcd

systemctl restart $SERVICES

  1. Reinstall the components
    [root@CNT7XDCK01 ~]# yum -y install etcd
    [root@CNT7XDCK01 ~]# yum -y install kubernetes-master

Perform the following operations on cmnode1, cmnode2, and cmnode3

kubernetes

systemctl enable $SERVICES

 

1. Install flannel and kubernetes-node

# Modify

systemctl status $SERVICES

  1. Edit the kube configuration files

yum -y install flannel kubernetes-node

[root@k8s-master ~]# cd /etc/kubernetes/

done

Edit the /etc/etcd/etcd.conf file

2. Point flannel at the etcd service by modifying the /etc/sysconfig/flanneld file

[root@k8s-master kubernetes]# ll

8. Create the network

ETCD_NAME="default"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
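Since etcd.conf is plain `KEY="value"` shell syntax, a quick way to double-check the values before restarting etcd is to source the file. The sketch below writes a test copy in the current directory rather than touching /etc/etcd/etcd.conf.

```shell
# Write a test copy of the etcd.conf shown above and source it to verify values
cat > etcd.conf.test <<'EOF'
ETCD_NAME="default"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
EOF
. ./etcd.conf.test
echo "$ETCD_LISTEN_CLIENT_URLS"   # http://0.0.0.0:2379
```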

FLANNEL_ETCD=""

total 24

etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'

Edit the /etc/kubernetes/apiserver file

FLANNEL_ETCD_KEY="/atomic.io/network"

-rw-r--r-- 1 root root 767 Jul  3 23:33 apiserver          # needs configuration

9. Check the nodes

###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#

# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port minions listen on
KUBELET_PORT="--kubelet-port=10250"

# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

# default admission control policies
# KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

3. Modify the /etc/kubernetes/config file

-rw-r--r-- 1 root root 655 Jul  3 23:33 config             # needs configuration

kubectl get nodes

 

KUBE_LOGTOSTDERR="--logtostderr=true"

-rw-r--r-- 1 root root 189 Jul  3 23:33 controller-manager # needs configuration


  1. Re-register, start, and check the components' system services
    [root@CNT7XDCK01 ~]# systemctl enable etcd
    [root@CNT7XDCK01 ~]# systemctl enable kube-apiserver
    [root@CNT7XDCK01 ~]# systemctl enable kube-controller-manager
    [root@CNT7XDCK01 ~]# systemctl enable kube-scheduler

KUBE_LOG_LEVEL="--v=0"

-rw-r--r-- 1 root root 615 Jul  3 23:33 kubelet

The master-side configuration is now complete

[root@CNT7XDCK01 ~]# systemctl restart etcd
[root@CNT7XDCK01 ~]# systemctl restart kube-apiserver
[root@CNT7XDCK01 ~]# systemctl restart kube-controller-manager
[root@CNT7XDCK01 ~]# systemctl restart kube-scheduler

KUBE_ALLOW_PRIV="--allow-privileged=false"

-rw-r--r-- 1 root root 103 Jul  3 23:33 proxy

Client (node) configuration

[root@CNT7XDCK01 ~]# systemctl status etcd
[root@CNT7XDCK01 ~]# systemctl status kube-apiserver
[root@CNT7XDCK01 ~]# systemctl status kube-controller-manager
[root@CNT7XDCK01 ~]# systemctl status kube-scheduler

KUBE_MASTER="--master="

-rw-r--r-- 1 root root 111 Jul  3 23:33 scheduler          # needs configuration

1. Deploy on monion01, monion02, and monion03

 

4. Modify each node's configuration file /etc/kubernetes/kubelet as follows

# Configure config

yum -y install flannel kubernetes

====================================================================

KUBELET_ADDRESS="--address=0.0.0.0"                    # change 127.0.0.1 to 0.0.0.0

[root@k8s-master kubernetes]# vim config 

2. Configure flanneld

 

KUBELET_PORT="--port=10250"

22:KUBE_MASTER="--master="

5.2 Node

KUBELET_HOSTNAME="--hostname-override=192.168.137.148"   # change to this Node's IP

# apiserver

vi /etc/sysconfig/flanneld
FLANNEL_ETCD=""

  1. Uninstall the previous components

KUBELET_API_SERVER="--api-servers="   # point at the Master node's API Server

[root@k8s-master kubernetes]# vim apiserver 

3. Configure kubernetes

[root@CNT7XDCK02 ~]# yum list installed | grep kube
kubernetes-client.x86_64             1.5.2-0.7.git269f928.el7          @extras
kubernetes-node.x86_64               1.5.2-0.7.git269f928.el7          @extras
[root@CNT7XDCK02 ~]# yum remove -y kubernetes-client.x86_64
[root@CNT7XDCK02 ~]# yum remove -y kubernetes-node.x86_64

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

8:KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

vi /etc/kubernetes/config


KUBELET_ARGS=""
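The node-side kubelet file above differs per node only in the hostname-override IP, while every node points at the same master API server. One way to avoid hand-editing each node is to stamp the file out from two variables. A sketch, using this walkthrough's cmmaster/cmnode1 addresses as assumed values and writing to a local file `kubelet.node` for review rather than to /etc/kubernetes/kubelet.

```shell
# Assumed addresses from this walkthrough: master 192.168.137.142, node 192.168.137.148
MASTER_IP=192.168.137.142
NODE_IP=192.168.137.148
cat > kubelet.node <<EOF
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=${NODE_IP}"
KUBELET_API_SERVER="--api-servers=http://${MASTER_IP}:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
EOF
grep hostname-override kubelet.node
```

Run it once per node with NODE_IP changed, then copy the result into place.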

11:KUBE_API_PORT="--port=8080"

KUBE_MASTER="--master="

 

5. On all Node machines, start kube-proxy, kubelet, docker, and flanneld, and enable them at boot

14:KUBELET_PORT="--kubelet-port=10250"

4. Configure kubelet

  1. Reinstall the components
    [root@CNT7XDCK02 ~]# yum -y install flannel
    [root@CNT7XDCK02 ~]# yum -y install kubernetes-node

for SERVICES in kube-proxy kubelet docker flanneld; do
  systemctl restart $SERVICES
  systemctl enable $SERVICES
  systemctl status $SERVICES
done

17:KUBE_ETCD_SERVERS="--etcd-servers="

monion01

 

• Verify that the cluster installed successfully

23:KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"

vi /etc/kubernetes/kubelet

  1. Edit the kube configuration files

Run the following command on the master

#controller-manager 

KUBELET_ADDRESS="--address=0.0.0.0"

Modify the /etc/sysconfig/flanneld file

[root@cmmaster ~]# kubectl get node

[root@k8s-master kubernetes]# vim  controller-manager 

KUBELET_PORT="--port=10250"

# Flanneld configuration options  

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://192.168.3.96:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
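A common flannel pitfall is a mismatch between FLANNEL_ETCD_PREFIX and the key written with etcdctl: flannel reads `<prefix>/config`, so the two must line up. A small sketch tying together the values used in this walkthrough:

```shell
# The key written via etcdctl must live under the configured flannel prefix
FLANNEL_ETCD_PREFIX="/atomic.io/network"
ETCD_KEY="/atomic.io/network/config"
case "$ETCD_KEY" in
  "$FLANNEL_ETCD_PREFIX"/*) echo "prefix matches" ;;
  *) echo "prefix mismatch" >&2 ;;
esac
```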

NAME              STATUS    AGE

8:KUBELET_ADDRESSES="--machines=192.168.56.140,192.168.56.150"  # added configuration

# change the hostname to this host's IP address

Modify the /etc/kubernetes/config file

192.168.137.147   Ready     7m

Start the services and enable them at boot

KUBELET_HOSTNAME="--hostname_override=192.168.50.131"

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://192.168.3.96:8080"

192.168.137.148   Ready     1m

[root@k8s-master ~]# systemctl list-unit-files |grep kube

KUBELET_API_SERVER="--api_servers="

Modify the /etc/kubernetes/kubelet file

192.168.137.199   Ready     7m

kube-apiserver.service                      disabled     # needs to be started

KUBELET_ARGS=""

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.3.97" # this is the node machine's IP

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.3.96:8080" # this is the master machine's IP

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

If the nodes above are listed with status Ready, the cluster was built successfully
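The eyeball check above can also be scripted: parse the `kubectl get node` table and fail if any node is not Ready. A sketch fed a captured sample; on the master you would pipe `kubectl get node` straight into the function.

```shell
# Exit non-zero if any listed node is not Ready (skips the header row)
check_ready() { awk 'NR>1 && $2!="Ready" {bad=1} END {exit bad}'; }
cat <<'EOF' | check_ready && echo "all nodes Ready"
NAME              STATUS    AGE
192.168.137.147   Ready     7m
192.168.137.148   Ready     1m
192.168.137.199   Ready     7m
EOF
```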

kube-controller-manager.service             disabled     # needs to be started

monion02

 


kube-proxy.service                          disabled

vi /etc/kubernetes/kubelet

  1. Re-register, start, and check the components' system services
    [root@CNT7XDCK02 ~]# systemctl enable flanneld
    [root@CNT7XDCK02 ~]# systemctl enable kube-proxy
    [root@CNT7XDCK02 ~]# systemctl enable kubelet
    [root@CNT7XDCK02 ~]# systemctl enable docker

kube-scheduler.service                      disabled     # needs to be started

KUBELET_ADDRESS="--address=0.0.0.0"

[root@CNT7XDCK02 ~]# systemctl restart flanneld
[root@CNT7XDCK02 ~]# systemctl restart kube-proxy
[root@CNT7XDCK02 ~]# systemctl restart kubelet
[root@CNT7XDCK02 ~]# systemctl restart docker

kubelet.service                             disabled    

KUBELET_PORT="--port=10250"

[root@CNT7XDCK02 ~]# systemctl status flanneld
[root@CNT7XDCK02 ~]# systemctl status kube-proxy
[root@CNT7XDCK02 ~]# systemctl status kubelet
[root@CNT7XDCK02 ~]# systemctl status docker

# Start

# change the hostname to this host's IP address

 

[root@k8s-master ~]# systemctl start kube-apiserver.service kube-controller-manager.service  kube-scheduler.service  

KUBELET_HOSTNAME="--hostname_override=192.168.50.132"

 

# Check whether they are active

KUBELET_API_SERVER="--api_servers="

6. Finally, on the Master machine, check the K8s installation result

[root@k8s-master ~]# systemctl is-active  kube-apiserver.service kube-controller-manager.service kube-scheduler.service 

KUBELET_ARGS=""

[root@CNT7XDCK01 ~]# kubectl get nodes
NAME            STATUS    AGE
192.168.3.100   Ready     35d
192.168.3.97    Ready     35d
192.168.3.98    Ready     35d
192.168.3.99    Ready     35d

active

monion03

Normally you can see that the master has four Node machines, all in the Ready state.

active

KUBELET_ADDRESS="--address=0.0.0.0"

 

active

KUBELET_PORT="--port=10250"

# Enable at boot

# change the hostname to this host's IP address

[root@k8s-master ~]# systemctl enable kube-apiserver.service kube-controller-manager.service  kube-scheduler.service

KUBELET_HOSTNAME="--hostname_override=192.168.50.133"

Note: start order is etcd --> kubernetes services

KUBELET_API_SERVER="--api_servers="

SLAVE: the two nodes are configured identically

KUBELET_ARGS=""

2.3 Install kubernetes-node on the slaves

5. Start the services

yum install kubernetes-node.x86_64 flannel -y

for SERVICES in kube-proxy kubelet docker flanneld; do

2.4 Slave configuration

systemctl restart $SERVICES

[root@k8s-node1 ~]# cd /etc/kubernetes/

systemctl enable $SERVICES

[root@k8s-node1 kubernetes]# ll

systemctl status $SERVICES

total 12

done

-rw-r--r-- 1 root root 655 Jul  3 23:33 config  # needs configuration

6. Verification

-rw-r--r-- 1 root root 615 Jul  3 23:33 kubelet # needs configuration

monion01

-rw-r--r-- 1 root root 103 Jul  3 23:33 proxy

ip a | grep flannel | grep inet

#config 


[root@k8s-node1 kubernetes]# vim config 

Run the same check on monion02 and monion03

22:KUBE_MASTER="--master="

master

# kubelet

kubectl get nodes

5:KUBELET_ADDRESS="--address=0.0.0.0"

NAME             LABELS                                  STATUS

8:KUBELET_PORT="--port=10250"

192.168.50.131   kubernetes.io/hostname=192.168.50.131   Ready

11:KUBELET_HOSTNAME="--hostname-override=192.168.56.140"

192.168.50.132   kubernetes.io/hostname=192.168.50.132   Ready

14:KUBELET_API_SERVER="--api-servers="

192.168.50.133   kubernetes.io/hostname=192.168.50.133   Ready

Start and enable at boot

Test successful!

[root@k8s-node1 ~]# systemctl list-unit-files |grep kube

kube-proxy.service                          disabled

kubelet.service                             disabled

[root@k8s-node1 ~]# systemctl start kube-proxy.service kubelet.service 

[root@k8s-node1 ~]# systemctl is-active kube-proxy.service kubelet.service 

active

active

[root@k8s-node1 ~]# systemctl enable kube-proxy.service kubelet.service 

# flannel configuration

[root@k8s-node1 kubernetes]# cd /etc/sysconfig/

[root@k8s-node1 sysconfig]# vim flanneld 

4:FLANNEL_ETCD_ENDPOINTS=""

Start and enable at boot:

systemctl start flanneld.service

systemctl enable flanneld.service

# Check flannel status

[root@k8s-node1 sysconfig]# systemctl is-active flanneld.service

active

Note: flannel cannot start at this point because etcd does not yet contain the network information flannel needs; we must create that network information in etcd first.

On the master, create the network information flannel needs:

[root@k8s-master ~]# etcdctl set /atomic.io/network/config '{ "Network": "172.17.0.0/16" }'

{ "Network": "172.17.0.0/16" }

2.5 Cluster check

[root@k8s-master ~]# kubectl get node

NAME             STATUS    AGE

192.168.56.140   Ready     56m

192.168.56.150   Ready     54m

The Kubernetes cluster configuration is complete!
