The Road of Test Development -- DevOps (7): Setting Up Docker and k8s with Salt

Sun Gaofei · January 28, 2018 · Last reply by Sun Gaofei on October 25, 2019 · 2956 views

Preface

Today we will do something more practical: automating the deployment of Docker and k8s. I previously wrote a post about the pitfalls of installing k8s with kubeadm; I recommend reading it first: https://testerhome.com/topics/8668

Installing Docker

The explanation of each step is in the comments below.

# init docker
# We want the latest docker-ce. One of Docker's pitfalls is that it changes very quickly: the community has shipped both the docker and docker-engine package names, plus dependencies such as docker-common, and these packages all conflict with each other. If an older Docker was ever installed on the machine you are in trouble, so the first step is to wipe every trace from the environment by removing the packages below.
remove docker:
  pkg.removed:
    - pkgs:
      - docker
      - docker-ce
      - docker-common
      - docker-selinux
      - docker-engine
      - docker-ce-selinux
      - container-selinux

# Install docker-ce following the CentOS instructions recommended on the official site.
install docker-ce:
  cmd.run:
    - name: yum install -y yum-utils device-mapper-persistent-data lvm2; yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo; yum -y install docker-ce
    - require:
      - pkg: remove docker

# Copy docker.service to the node. This is the systemd unit file that controls Docker; we customized it, e.g. the aufs storage driver, cgroupfs as the cgroup driver, and the Aliyun registry mirror. The file itself is shown further down.
docker_service:
  file.managed:
    - name: /lib/systemd/system/docker.service
    - source: salt://utils/k8s/docker.service
    - user: root
    - group: root
    - mode: 644
    - require:
      - cmd: install docker-ce

# Configure and start Docker
enable_docker:
  cmd.run:
    - name: rm -f /etc/docker/daemon.json; systemctl disable docker; systemctl enable docker; systemctl start docker; systemctl restart docker; systemctl status docker
    - require:
      - file: docker_service

# To connect to the company's private image registry, the node needs the credentials file produced by docker login, so it is copied over as well.
docker_login:
  cmd.run:
    - name: mkdir -p /root/.docker/
  file.managed:
    - name: /root/.docker/config.json
    - source: salt://utils/k8s/config.json
    - require:
      - cmd: docker_login
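
The states above live in a single state file; judging by the include used later in this post it is named init_docker.sls. Assuming that file name and a purely hypothetical minion target, applying it from the Salt master looks roughly like this:

# the target name is just an example; adjust it to your own minions
salt 'k8s-node-1' state.apply init_docker

# or run it locally on a minion with debug output while troubleshooting
salt-call state.apply init_docker -l debug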

Below is the docker.service configuration:

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target firewalld.service

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd --registry-mirror=https://5f6xdooq.mirror.aliyuncs.com --live-restore=true  --exec-opt native.cgroupdriver=cgroupfs --storage-driver=aufs
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
# without an [Install] section, the "systemctl enable docker" in the state above cannot create the symlink
WantedBy=multi-user.target
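
Because the kubelet configured later must use the same cgroup driver as Docker, it is worth verifying on a node that Docker really came up with cgroupfs. A small verification sketch using standard commands:

# should print "Cgroup Driver: cgroupfs" given the ExecStart options above
docker info 2>/dev/null | grep -i "cgroup driver"

# confirm the custom unit file was picked up and the daemon is running
systemctl status docker --no-pager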

Installing kubeadm

# Include the Docker installation state from above
include:
  - init_docker

# Copy the kubeadm yum repo, using the Aliyun mirror here (a sketch of this repo file is shown after this state file)
kubeadm.repo:
  file.managed:
    - name: /etc/yum.repos.d/kubernetes.repo
    - source: salt://utils/k8s/kubernetes.repo
    - user: root
    - group: root
    - mode: 644

# Install kubeadm
install kubeadm:
   pkg.installed:
    - name: kubeadm
    - require:
      - file: kubeadm.repo

# Clean up iptables before starting, so stray rules do not block cluster networking
init_iptables:
  cmd.run:
    - name: iptables -P INPUT ACCEPT; iptables -F; iptables -X; iptables -Z
    - require:
      - pkg: install kubeadm

# Switch the kubelet cgroup-driver from systemd to cgroupfs. The kubelet defaults to systemd, but in the Docker configuration above we specified cgroupfs, and the two must match. Why not use systemd instead? Because flannel, installed later for the cluster network, throws errors under systemd.
set_kubelet:
   cmd.run:
    - name: sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    - require:
      - pkg: install kubeadm
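
The kubernetes.repo file copied above is not included in the post. As a rough sketch, a repo file pointing at the Aliyun mirror typically looks like the following (the exact URLs and gpgcheck settings are assumptions to verify against the mirror's documentation):

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg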

Installing the master node

# Include the kubeadm installation state
include:
  - init_kubeadm
  - tools

# Pre-pull the images. Since Google is blocked, I pushed the required images to the company registry in advance; they have to be pulled on the master node and re-tagged with the gcr.io names before they can be used.
k8s_images_init:
  cmd.run:
    - name: docker pull registry.4paradigm.com/pause-amd64:3.0; docker tag registry.4paradigm.com/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0; docker pull registry.4paradigm.com/kube-proxy-amd64:v1.8.2; docker tag registry.4paradigm.com/kube-proxy-amd64:v1.8.2 gcr.io/google_containers/kube-proxy-amd64:v1.8.2; docker pull registry.4paradigm.com/nginx-ingress-controller:0.9.0; docker tag registry.4paradigm.com/nginx-ingress-controller:0.9.0 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0; docker pull registry.4paradigm.com/k8s-dns-sidecar-amd64:1.14.4; docker tag registry.4paradigm.com/k8s-dns-sidecar-amd64:1.14.4 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4; docker pull registry.4paradigm.com/kube-apiserver-amd64:v1.8.2; docker tag registry.4paradigm.com/kube-apiserver-amd64:v1.8.2 gcr.io/google_containers/kube-apiserver-amd64:v1.8.2; docker pull registry.4paradigm.com/kube-controller-manager-amd64:v1.8.2; docker tag registry.4paradigm.com/kube-controller-manager-amd64:v1.8.2 gcr.io/google_containers/kube-controller-manager-amd64:v1.8.2; docker pull registry.4paradigm.com/kube-scheduler-amd64:v1.8.2; docker tag registry.4paradigm.com/kube-scheduler-amd64:v1.8.2 gcr.io/google_containers/kube-scheduler-amd64:v1.8.2; docker pull registry.4paradigm.com/defaultbackend:1.4; docker tag registry.4paradigm.com/defaultbackend:1.4 gcr.io/google_containers/defaultbackend:1.4; docker pull registry.4paradigm.com/k8s-dns-kube-dns-amd64:1.14.4; docker tag registry.4paradigm.com/k8s-dns-kube-dns-amd64:1.14.4 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4; docker pull registry.4paradigm.com/k8s-dns-dnsmasq-nanny-amd64:1.14.4; docker tag registry.4paradigm.com/k8s-dns-dnsmasq-nanny-amd64:1.14.4 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4; docker pull registry.4paradigm.com/etcd-amd64:3.0.17; docker tag registry.4paradigm.com/etcd-amd64:3.0.17 gcr.io/google_containers/etcd-amd64:3.0.17
    - require:
      - file: docker_login
      - cmd: enable_docker

# Initialize the cluster
setup_master:
  cmd.run:
    - name: kubeadm reset; kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.8.2 --token-ttl 0
    - require:
      - cmd: k8s_images_init
      - pkg: install kubeadm

# Configure kubectl: put the required kube config file under the root user so that root can use the command
setup_kubectl:
   cmd.run:
     - name: mkdir -p /root/.kube; rm -f /root/.kube/config; cp /etc/kubernetes/admin.conf /root/.kube/config; chown $(id -u):$(id -g) /root/.kube/config;
     - require:
       - cmd: setup_master

# Download portmap, a component required by the flannel overlay network. Many systems do not ship it, so it has to be put in place beforehand.
port_map:
  file.managed:
    - name: /opt/cni/bin/portmap
    - source: salt://utils/k8s/portmap
    - user: root
    - group: root
    - mode: 777

# Initialize the cluster network, using flannel here
install_flannel:
   file.managed:
     - name: /root/kube-flannel.yml
     - source: salt://utils/k8s/kube-flannel.yml
     - user: root
     - group: root
     - mode: 644
     - require:
        - cmd: setup_master
   cmd.run:
     - name: kubectl create -f /root/kube-flannel.yml
     - require:
       - file: install_flannel
       - cmd: setup_kubectl
       - file: port_map
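
Once this master state has run through, the cluster can be checked by hand, and the token that the worker-node state below expects via pillar can be read back. A quick sketch using standard kubectl/kubeadm commands:

# all system pods (flannel, kube-dns, ...) should eventually reach Running
kubectl get nodes
kubectl get pods -n kube-system

# list the bootstrap token to pass to the worker nodes below
kubeadm token list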

Initializing some specific services after the master node is installed

# First create the namespace dedicated to the test services
test-service_namespace:
  cmd.run:
    - name: kubectl create namespace test-service

# Create the image pull secret for the default namespace; k8s can only pull images from the registry when it holds this secret (a sketch of such a secret file is shown after this state file)
default_image_secret:
  file.managed:
    - name: /root/image-pull-secret_default.yaml
    - source: salt://utils/k8s/image-pull-secret_default.yaml
    - user: root
    - group: root
    - mode: 644
  cmd.run:
    - name: kubectl create -f /root/image-pull-secret_default.yaml

# Same as above, create the secret for the test-service namespace
test-service_image_secret:
  file.managed:
    - name: /root/image-pull-secret_test-service.yaml
    - source: salt://utils/k8s/image-pull-secret_test-service.yaml
    - user: root
    - group: root
    - mode: 644
  cmd.run:
    - name: kubectl create -f /root/image-pull-secret_test-service.yaml

# Install the ingress service, a layer-7 router.
ingress:
  file.managed:
    - name: /root/ingress.yaml
    - source: salt://utils/k8s/ingress.yaml
    - user: root
    - group: root
    - mode: 644
  cmd.run:
    - name: kubectl create -f /root/ingress.yaml

# Untaint the master node. By default the master node cannot schedule regular pods; the taint must be removed before it can be used.
untaint_master:
  cmd.run:
    - name: export node=`kubectl get nodes|grep Ready|awk '{print $1}'`;  kubectl taint nodes $node  node-role.kubernetes.io/master-
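
The two image-pull-secret_*.yaml files referenced above are not shown in the post. A minimal sketch of what such a secret typically looks like (the secret name is a placeholder; the data field is the base64-encoded content of the /root/.docker/config.json copied earlier):

apiVersion: v1
kind: Secret
metadata:
  name: image-pull-secret
  namespace: default
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64 of /root/.docker/config.json>

Note that the secret only takes effect once it is referenced through imagePullSecrets in a pod spec or attached to the namespace's default service account.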

Configuring worker nodes

# The master node address and token have to be passed in via pillar at run time (see the example invocation after this state file)
{% set k8s_token = pillar.get("token", "token") %}
{% set master_node = pillar.get("master_node", "token") %}

# Include the kubeadm installation state
include:
  - init_kubeadm

# Pre-pull the images needed by worker nodes
k8s_images_init:
  cmd.run:
    - name: docker pull registry.4paradigm.com/pause-amd64:3.0; docker tag registry.4paradigm.com/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0; docker pull registry.4paradigm.com/kube-proxy-amd64:v1.8.2; docker tag registry.4paradigm.com/kube-proxy-amd64:v1.8.2 gcr.io/google_containers/kube-proxy-amd64:v1.8.2
    - require:
      - file: docker_login
      - cmd: enable_docker
# Join the master node
join_master:
  cmd.run:
    - name: kubeadm reset; kubeadm join --token {{k8s_token}} {{master_node}}:6443
    - require:
      - cmd: k8s_images_init
      - pkg: install kubeadm
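
Because the token and master address are rendered from pillar, they can be supplied on the command line when the state is applied. Assuming the worker-node state file above is saved as join_node.sls (the name is hypothetical), the call looks roughly like this:

salt 'k8s-worker-*' state.apply join_node pillar='{"token": "<kubeadm token>", "master_node": "<master ip>"}'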

Conclusion

That is one application of Salt to initializing k8s nodes.

2 replies received

An off-topic question: how do you summarize your experience, i.e. what is your learning method?
You are really productive, so I would like to ask how you learn.
Or how you take notes conveniently -- how do you manage to review your notes effectively?
Even if the note-taking process is redundant it does not matter; some things really are completely forgotten after not being used for a long time.

Thanks in advance!

Reply to zhang

Just learn it, then use it at work -- that's it. I think as long as you can apply it at work, you won't forget it.
