Kubernetes cluster components:
  - etcd: a highly available key/value store and service discovery system
  - flannel: provides cross-host container networking
  - kube-apiserver: exposes the Kubernetes cluster API
  - kube-controller-manager: keeps cluster services in their desired state
  - kube-scheduler: schedules containers and assigns them to Nodes
  - kubelet: starts containers on each Node according to the container specs in the configuration files
  - kube-proxy: provides network proxying for services

Kubernetes Cluster Installation and Configuration

Kubernetes is a distributed cluster system built by Google on top of Docker; it is made up of the components listed above.

Prerequisites:
Three CentOS 7 virtual machines (1 master + 2 nodes). The firewall and SELinux are disabled on all three machines. My test environment has Internet access, so the default YUM repositories are sufficient.
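The commands below are a minimal sketch of that preparation, to be run on every machine; the sed line assumes the stock /etc/selinux/config shipped with CentOS 7.

systemctl stop firewalld && systemctl disable firewalld               # stop the firewall now and at boot
setenforce 0                                                          # put SELinux in permissive mode immediately
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # keep SELinux off after a reboot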

I. Install and configure the Kubernetes Master (run the following on the master)
1. Install etcd and kubernetes-master with yum

Kubernetes cluster components:

etcd: a highly available store for shared configuration and service discovery

1. What is Kubernetes

# yum -y install etcd kubernetes-master

- Master node

flannel: networking support

  Kubernetes (k8s) is Google's open-source container cluster management system (based on Google's internal Borg). Built on top of Docker, it gives containerized applications a complete set of features for deployment, resource scheduling, service discovery, and dynamic scaling, which makes managing large container clusters much easier.

2. Edit the /etc/etcd/etcd.conf file

  - etcd: a highly available K/V store and service discovery system

kube-apiserver: whether the cluster is driven from kubectl or directly through the remote API, every request goes through the apiserver

  Kubernetes advantages:

ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"             # address and port to listen on for client requests
ETCD_ADVERTISE_CLIENT_URLS="http://<master-ip>:2379"      # client URL advertised to clients (the master's IP)
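Once etcd has been restarted (step 4 below), a quick sanity check is to query it from the master itself; this is just a sketch and assumes etcd is listening on the default client port 2379.

etcdctl cluster-health                  # the single member should be reported as healthy
curl http://127.0.0.1:2379/version      # returns the etcd server version as JSON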

  - kube-apiserver: exposes the Kubernetes cluster API

kube-controller-manager: runs the control loops for the replication controller, endpoints controller, namespace controller, and serviceaccounts controller; it talks to kube-apiserver to keep these controllers doing their work

    - Container orchestration

3. Edit the /etc/kubernetes/apiserver file

  - kube-controller-manager: keeps cluster services in their desired state

kube-scheduler: the scheduler's job is to place Pods onto the chosen worker nodes (minions) according to a scheduling algorithm; this step is also called binding (bind)

     - Lightweight

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"        # address the API server listens on
KUBE_API_PORT="--port=8080"                               # listening port
KUBELET_PORT="--kubelet-port=10250"

  - kube-scheduler: schedules containers and assigns them to Nodes

kubelet: runs on every Kubernetes Minion Node; it is the logical successor of the container agent

    - Open source

KUBE_ETCD_SERVERS="--etcd-servers="                       # address and port of the etcd service
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""

- Minion node

kube-proxy: a component that runs on every minion node and plays the role of a service proxy

    - Elastic scaling

cp /etc/kubernetes/config /etc/kubernetes/config.bak

  - flannel: provides cross-host container networking

Kubernetes architecture diagram:

    - Load balancing

vim /etc/kubernetes/config

  - kubelet: starts containers on each Node according to the container specs in the configuration files


      • Core concepts of Kubernetes

KUBE_LOGTOSTDERR="--logtostderr=true"

  - kube-proxy: provides network proxying for services

Environment: CentOS 7 x86_64

  1)Pod   

KUBE_LOG_LEVEL="--v=0"

Cluster topology diagram

Download address:

   
Pods run on Nodes and group one or more related containers. The containers inside a Pod run on the same host and share one network namespace, IP address, and port space, so they can talk to each other over localhost. The Pod is the smallest unit that Kubernetes creates, schedules, and manages; it is a higher-level abstraction than a single container and makes deployment and management more flexible. A Pod can hold a single container or several related containers.
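As a concrete illustration of the concept (not part of the original installation steps), a minimal single-container Pod can be created from the master once the cluster is up; the name, label, and image below are only examples.

kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod          # example name
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOF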
2) Replication Controller
A Replication Controller manages Pod replicas and guarantees that the cluster always runs the specified number of copies of a Pod. If the cluster has more replicas than specified, the surplus containers are stopped; if it has fewer, additional containers are started until the count is reached. Replication Controllers are the core of elastic scaling, dynamic expansion, and rolling upgrades.
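The sketch below shows what such a controller looks like in practice, again with the example nginx image and the label app=nginx; it keeps exactly two replicas of the Pod running.

kubectl create -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc            # example name
spec:
  replicas: 2               # the controller keeps exactly two Pods running
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF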

KUBE_ALLOW_PRIV="--allow-privileged=false"

  Kubernetes works in a server-client fashion: the Kubernetes Master provides centralized management of the Minions.

master:192.168.50.130

  3)Service   

4. Start the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler services, and enable them to start at boot.

Deploy 1 Kubernetes Master node and 3 Minion nodes:

monion01:192.168.50.131

    
A Service defines a logical set of Pods and the policy used to access them; it is an abstraction of a real service. A Service gives clients a single, stable entry point together with proxying and discovery, so users do not need to know how the Pods behind it actually run.
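A minimal Service that fronts the example Pods labeled app=nginx might look like the sketch below; the name and ports are illustrative only.

kubectl create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc           # example name
spec:
  selector:
    app: nginx              # selects the Pods carrying the label app=nginx
  ports:
  - port: 80
    targetPort: 80
EOF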

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
  systemctl restart $SERVICES
  systemctl enable $SERVICES
  systemctl status $SERVICES
done

192.168.137.142  cmmaster

monion02:192.168.50.132

      4)Label     


192.168.137.148  cmnode1

monion03:192.168.50.133

     
Every API object in Kubernetes is identified by Labels; a Label is essentially a set of key/value pairs. Labels are the foundation that Replication Controllers and Services build on: both of them use Labels to associate themselves with the Pods running on the Nodes.
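For example, the label selector syntax makes that association visible from the command line; app=nginx below matches the example objects used earlier.

kubectl get pods -l app=nginx        # list only the Pods carrying the label app=nginx
kubectl get pods --show-labels       # list Pods together with all of their labels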


192.168.137.199  cmnode2

Master deployment:

      5)Node   


192.168.137.212  cmnode3

1. Disable the firewall

   
  A Node is a worker node (or agent) in the Kubernetes cluster architecture that runs Pods. Nodes are the operational units of the cluster: they host the Pods assigned to them and are the machines Pods actually run on.


Install the EPEL repository on all nodes

#systemctl stop firewalld

2. Install the Kubernetes Master first


# yum -y install epel-release

#systemctl disable firewalld

  1) Install etcd and kubernetes-master with yum

5. Define the flannel network in etcd

Install and configure the Kubernetes Master (on the Master node)

2. Disable SELinux

# yum -y install etcd kubernetes-master  flannel

etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'
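To confirm that the key was written, it can be read back from etcd; this assumes etcdctl is talking to the local etcd on its default port.

etcdctl get /atomic.io/network/config
# {"Network":"172.17.0.0/16"}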

1. Install etcd and kubernetes-master with yum

setenforce 0

  2) Edit the /etc/etcd/etcd.conf file

II. Install and configure the Kubernetes Nodes

# yum -y install etcd kubernetes-master

3. Install NTP

ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS=""
ETCD_ADVERTISE_CLIENT_URLS=""

The following operations are performed on node1 and node2

2. Edit the /etc/etcd/etcd.conf file

yum -y install ntp

  3) Edit the /etc/kubernetes/apiserver file

1. Install flannel and kubernetes-node with yum

ETCD_NAME=default

systemctl start ntpd

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers="
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
# Note: if the default admission-control list contains ServiceAccount, it must be removed, otherwise Pods will not be created.
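A complete /etc/kubernetes/apiserver typically looks like the sketch below; the etcd URL here is an assumption based on etcd running locally on the master, as it does in this setup.

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"     # etcd runs on the master itself
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""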

yum -y install flannel kubernetes-node

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

systemctl enable ntpd

  4) Define the flannel network in etcd

2. Point flannel at the etcd service by editing the /etc/sysconfig/flanneld file

ETCD_LISTEN_CLIENT_URLS=""

4. Install etcd and kubernetes

# etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'

FLANNEL_ETCD=""                                           # the server etcd runs on
FLANNEL_ETCD_KEY="/atomic.io/network"
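A filled-in sketch of that file: the FLANNEL_ETCD URL assumes the etcd endpoint is the master, so replace <master-ip> with your master's address.

FLANNEL_ETCD="http://<master-ip>:2379"          # etcd endpoint (the master in this setup)
FLANNEL_ETCD_KEY="/atomic.io/network"           # must match the key created with etcdctl mk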

ETCD_ADVERTISE_CLIENT_URLS=""

yum -y install etcd kubernetes

   5) Edit /etc/sysconfig/flanneld

3. Edit the /etc/kubernetes/config file

3. Edit the /etc/kubernetes/apiserver file

5. Edit the etcd configuration file

FLANNEL_ETCD_ENDPOINTS=""
FLANNEL_ETCD_PREFIX="/atomic.io/network"

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master="                              # address and port of the master
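A filled-in sketch of the node's /etc/kubernetes/config, assuming the API server runs on the master on the insecure port 8080; replace <master-ip> with your master's address.

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://<master-ip>:8080"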

KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

vi /etc/etcd/etcd.conf

  
6) Start the etcd, kube-apiserver, kube-controller-manager, kube-scheduler, and flanneld services, and enable them at boot.

4. Modify the /etc/kubernetes/kubelet configuration file on the corresponding node as follows

KUBE_API_PORT="--port=8080"

ETCD_NAME=default

# for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
  done

First node

KUBELET_PORT="--kubelet-port=10250"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

3. Install and configure the Kubernetes Nodes

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.1.21"       # change to this Node's IP
KUBELET_API_SERVER="--api-servers="                       # points at the Master node's API Server
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

KUBE_ETCD_SERVERS="--etcd-servers="

ETCD_LISTEN_CLIENT_URLS=""

  1) Install flannel and kubernetes-node with yum

Second node

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

ETCD_ADVERTISE_CLIENT_URLS=""

# yum -y install flannel kubernetes-node
 

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.1.141"      # change to this Node's IP
KUBELET_API_SERVER="--api-servers="                       # points at the Master node's API Server
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

6. Edit the kubernetes apiserver configuration

  2) Point flannel at the etcd service by editing the /etc/sysconfig/flanneld file

5. On all Node machines, start the kube-proxy, kubelet, docker, and flanneld services, and enable them at boot.

KUBE_API_ARGS=""

vi /etc/kubernetes/apiserver

FLANNEL_ETCD=""
FLANNEL_ETCD_KEY="/atomic.io/network"

for SERVICES in kube-proxy kubelet docker flanneld; do
  systemctl restart $SERVICES
  systemctl enable $SERVICES
  systemctl status $SERVICES
done

4. Start the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler services, and enable them at boot.

KUBE_API_ADDRESS="--address=0.0.0.0"

  3) Edit the /etc/kubernetes/config file


Start the etcd, kube-apiserver, kube-controller-manager, and kube-scheduler services, and enable them at boot.

KUBE_API_PORT="--port=8080"

KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master="


for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
  systemctl restart $SERVICES
  systemctl enable $SERVICES
  systemctl status $SERVICES
done

KUBELET_PORT="--kubelet_port=10250"

  4) Modify the /etc/kubernetes/kubelet configuration file on the corresponding node as follows


# systemctl status etcd.service

KUBE_ETCD_SERVERS="--etcd_servers="

node1:


# systemctl status kube-apiserver.service

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.118.140"    # change to this Node's IP
KUBELET_API_SERVER="--api-servers="                       # points at the Master node's API Server
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
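As a sketch, assuming the API server listens on the insecure port 8080 configured earlier, the --api-servers line would read as below; replace <master-ip> with the master's address.

KUBELET_API_SERVER="--api-servers=http://<master-ip>:8080"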


# systemctl status kube-controller-manager.service

KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

node2:

III. Verify that the cluster was installed successfully

# systemctl status kube-scheduler.service

KUBE_API_ARGS=""

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=192.168.118.141"
KUBELET_API_SERVER="--api-servers="
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""

Run the following command on the master

5. Define the flannel network in etcd

7. Start kube-apiserver, kube-controller-manager, and kube-scheduler

  5) On all Node machines, start the kube-proxy, kubelet, docker, and flanneld services, and enable them at boot.

kubectl get node

[root@cmmaster ~]# etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'

for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
  systemctl restart $SERVICES
  systemctl enable $SERVICES
  systemctl status $SERVICES
done

# for SERVICES in kube-proxy kubelet docker flanneld; do systemctl restart $SERVICES; systemctl enable $SERVICES; systemctl status $SERVICES; done

Note: if the two nodes above are listed with a status of Ready, the cluster has been set up successfully.

Install and configure the Kubernetes Nodes


4. Verify success


The following operations are performed on cmnode1, cmnode2, and cmnode3


[root@master ~]# kubectl get node
NAME              STATUS    AGE
192.168.118.140  Ready    3d
192.168.118.141  Ready    3d
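Beyond checking the nodes, the control-plane components can be inspected from the master as well; these are standard kubectl commands, shown here as an extra, optional check.

kubectl get componentstatuses      # scheduler, controller-manager and etcd should all report Healthy
kubectl cluster-info               # prints the address of the Kubernetes master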

 

1. Install flannel and kubernetes-node


5. Common troubleshooting commands

 

yum -y install flannel kubernetes-node


# kubectl describe pod/rc ….. -n=kube-system
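A few more commands that are commonly useful when a service or Pod misbehaves; these are generic systemd/kubectl calls rather than anything specific to this article.

journalctl -u kubelet -f                              # follow the kubelet log on a node
journalctl -u kube-apiserver --since "10 min ago"     # recent apiserver log on the master
systemctl status kube-proxy kubelet docker flanneld   # quick health check of the node services
kubectl get events                                    # cluster events often explain why a Pod is stuck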

 

2. Point flannel at the etcd service by editing the /etc/sysconfig/flanneld file

8. Create the network


FLANNEL_ETCD=""

etcdctl mk /atomic.io/network/config '{"Network":"172.17.0.0/16"}'


FLANNEL_ETCD_KEY="/atomic.io/network"

9. View the nodes

3. Edit the /etc/kubernetes/config file

kubectl get nodes

KUBE_LOGTOSTDERR="--logtostderr=true"


KUBE_LOG_LEVEL="--v=0"

The master-side configuration is now complete

KUBE_ALLOW_PRIV="--allow-privileged=false"

Client (node) configuration

KUBE_MASTER="--master="

1. Deploy on monion01, monion02, and monion03

4. Modify the /etc/kubernetes/kubelet configuration file on the corresponding node as follows

yum -y install flannel kubernetes

KUBELET_ADDRESS="--address=0.0.0.0"                       # change 127.0.0.1 to 0.0.0.0

2. Configure flanneld

KUBELET_PORT="--port=10250"


KUBELET_HOSTNAME="--hostname-override=192.168.137.148"    # change to this Node's IP

vi /etc/sysconfig/flanneld
FLANNEL_ETCD=""

KUBELET_API_SERVER="--api-servers="                       # points at the Master node's API Server

3. Configure kubernetes

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

vi /etc/kubernetes/config

KUBELET_ARGS=""

KUBE_MASTER="--master="

5. On all Node machines, start the kube-proxy, kubelet, docker, and flanneld services, and enable them at boot

4. Configure the kubelet

# for SERVICES in kube-proxy kubelet docker flanneld; do systemctl restart $SERVICES; systemctl enable $SERVICES; systemctl status $SERVICES; done

monion01

• Verify that the cluster was installed successfully

vi /etc/kubernetes/kubelet

Run the following command on the master

KUBELET_ADDRESS="--address=0.0.0.0"

[root@cmmaster ~]# kubectl get node

KUBELET_PORT="--port=10250"

NAME              STATUS    AGE

# change the hostname to this host's IP address

192.168.137.147   Ready     7m

KUBELET_HOSTNAME="--hostname_override=192.168.50.131"

192.168.137.148   Ready     1m

KUBELET_API_SERVER="--api_servers="

192.168.137.199   Ready     7m

KUBELET_ARGS=""

If the nodes above are listed with a status of Ready, the cluster has been set up successfully

monion02


vi /etc/kubernetes/kubelet

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
# change the hostname to this host's IP address
KUBELET_HOSTNAME="--hostname_override=192.168.50.132"
KUBELET_API_SERVER="--api_servers="
KUBELET_ARGS=""

monion03

KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
# change the hostname to this host's IP address
KUBELET_HOSTNAME="--hostname_override=192.168.50.133"
KUBELET_API_SERVER="--api_servers="
KUBELET_ARGS=""

5. Start the services

for SERVICES in kube-proxy kubelet docker flanneld; do
  systemctl restart $SERVICES
  systemctl enable $SERVICES
  systemctl status $SERVICES
done

6. Verify

monion01

ip a | grep flannel | grep inet
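If the flannel interface is up, the subnet it was assigned should also appear in flannel's environment file, and docker0 should sit inside that subnet; the subnet.env path below is the default used by the flannel package and may differ in other setups.

cat /run/flannel/subnet.env        # FLANNEL_SUBNET should be a /24 inside 172.17.0.0/16
ip a show docker0                  # docker0 should fall within the same flannel subnet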


Run the same check on monion02 and monion03

master

kubectl get nodes

NAME             LABELS                                  STATUS

192.168.50.131   kubernetes.io/hostname=192.168.50.131   Ready

192.168.50.132   kubernetes.io/hostname=192.168.50.132   Ready

192.168.50.133   kubernetes.io/hostname=192.168.50.133   Ready

Testing complete!
