This post continues from the kubernetes | vagrant and docker post.
The installation follows https://docs.docker.com/engine/install/ubuntu/#set-up-the-repository.

Install Docker

$ sudo apt-get update

$ sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

Register the Docker official GPG key

$ sudo mkdir -p /etc/apt/keyrings

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

Add the Docker repository

$ echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install Docker Engine

$ sudo apt-get update

$ sudo apt-get -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin
$ sudo systemctl enable docker
$ sudo systemctl start docker
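
To verify that the engine came up, an optional quick smoke test is:

$ sudo systemctl status docker
$ sudo docker run hello-world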

Install Kubernetes

Preparation steps

  1. Environment configuration
  2. Install kubeadm, kubectl, kubelet
  3. Set up the control plane
  4. Set up the worker nodes
  5. Verify the installation

1. Disable swap

Swap must be turned off for the kubelet to work properly.

$ sudo swapoff -a && sudo sed -i '/swap/s/^/#/' /etc/fstab
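
You can confirm swap is now off; free should report 0 for swap and swapon --show should print nothing:

$ free -h
$ swapon --show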

2. Letting iptables see bridged traffic

$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

$ sudo sysctl --system
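
The modules-load.d file is only read at boot, so to load the module right away and confirm the sysctl values are applied, something like the following helps:

$ sudo modprobe br_netfilter
$ lsmod | grep br_netfilter
$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables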

3. Firewall configuration

In most environments a firewall sits in front of the nodes; here we turn it off. (These commands are for firewalld; on plain Ubuntu the equivalent would be ufw, e.g. sudo ufw disable.)

$ sudo systemctl stop firewalld
$ sudo systemctl disable firewalld

4. Set SELinux to permissive mode (effectively disabling it)

This applies to RHEL/CentOS-family hosts; stock Ubuntu does not ship with SELinux, so this step can be skipped there.

# setenforce 0
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

2. Install kubeadm, kubelet, kubectl


$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

$ sudo apt-get update
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
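
A quick sanity check that all three binaries are installed and on matching versions:

$ kubeadm version
$ kubelet --version
$ kubectl version --client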

3. Initializing your control-plane node

$ kubeadm init [options]
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR CRI]: container runtime is not running: output: E0531 14:39:51.155826   27989 remote_runtime.go:925] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
time="2022-05-31T14:39:51Z" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

An error occurred. Deleting the containerd config file as shown below and re-running kubeadm init resolved it.

If an error like unknown service runtime.v1alpha2.RuntimeService appears, the container runtime has its CRI plugin disabled, so carry out the following additional steps.
On all machines (master, node1, node2):

$ sudo rm /etc/containerd/config.toml
# or comment out the cri entry inside it instead (typically listed under disabled_plugins).
$ sudo systemctl restart containerd
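
Alternatively, instead of deleting the file, regenerating a full default config (which leaves the CRI plugin enabled) also works:

$ containerd config default | sudo tee /etc/containerd/config.toml
$ sudo systemctl restart containerd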

In the Vagrant VM environment you must pass the --apiserver-advertise-address option so the cluster communicates over 192.168.56.xxx (the host-only network).

Calico-based

$ sudo kubeadm init --apiserver-advertise-address=10.0.0.10 --pod-network-cidr=192.168.0.0/16

Flannel-based

(Master)$ sudo kubeadm init --apiserver-advertise-address=192.168.56.100 --pod-network-cidr=10.244.0.0/16

Initialize kubeadm. For flannel, --pod-network-cidr must be set to 10.244.0.0/16. If an error occurs because of the Docker version, append '--ignore-preflight-errors=SystemVerification' to the end of the kubeadm init command.

Logs like the following are printed.

[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 10.0.2.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.507423 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 39tu9f.zvo7sn8r11613q4u
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.56.10:6443 --token 39tu9f.zvo7sn8r11613q4u \
        --discovery-token-ca-cert-hash sha256:a5ff918cb85ae91f474f8b49b7b3c2047c9eaa9be158c6cc470a382b7e3f9aee

The controller manager, scheduler, etcd, and so on are created.
To connect the worker nodes to the master, you need to know the token information.
Below is an example; the values differ on every test machine, so write yours down.

$ kubeadm join 192.168.56.10:6443 --token 39tu9f.zvo7sn8r11613q4u \
        --discovery-token-ca-cert-hash sha256:a5ff918cb85ae91f474f8b49b7b3c2047c9eaa9be158c6cc470a382b7e3f9aee
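
If the token is lost or has expired (the default token lifetime is 24 hours), a fresh join command can be printed on the master at any time:

$ sudo kubeadm token create --print-join-command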

pod network

Run on the master only

Weave-based

$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
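
Once the CNI add-on is running, the nodes should switch to Ready and the CoreDNS pods should start; this can be checked with:

$ kubectl get nodes
$ kubectl get pods -n kube-system -o wide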

Install flannel

This requires that kubeadm init was run with --pod-network-cidr=10.244.0.0/16 (as above).

$ kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

Vagrant network error

The fix was found in https://github.com/k3s-io/k3s/issues/72

$ wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
$ vi kube-flannel.yml
Edit it as follows (the --iface=enp0s8 line is the addition):

  containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=enp0s8
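
The added --iface=enp0s8 line pins flannel to the Vagrant host-only interface; without it flannel binds to the NAT interface, which has the same 10.0.2.15 address on every VM and breaks pod-to-pod traffic. After editing, apply the modified manifest:

$ kubectl apply -f kube-flannel.yml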

Autocomplete setup

# setup autocomplete in bash into the current shell, bash-completion package should be installed first.
$ source <(kubectl completion bash) 
# add autocomplete permanently to your bash shell.
$ echo "source <(kubectl completion bash)" >> ~/.bashrc
$ source ~/.bashrc
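
Optionally, the kubectl docs also suggest an alias that keeps completion working:

$ echo 'alias k=kubectl' >> ~/.bashrc
$ echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc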

If an error occurs when doing this as root, put the following into /etc/bash.bashrc. After logging out and reconnecting, you can confirm that it takes effect.

echo 'if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
    . /etc/bash_completion
fi' >> /etc/bash.bashrc

Allow pods to be scheduled on the master node as well

# kubectl taint nodes --all node-role.kubernetes.io/control-plane- node-role.kubernetes.io/master-
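
To confirm the taints were removed, the Taints line for the master should now show <none>:

$ kubectl describe node master | grep -i taint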

Install the dashboard

Check the latest version at https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/ and use that.

(Master)# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
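
The dashboard Service is ClusterIP by default, so the simplest way to reach it is kubectl proxy on the master and then the proxy URL given in the dashboard docs:

$ kubectl proxy
# then open http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ in a browser on the master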

localhost:8080 error

This error appears on the worker nodes; all the configuration is being done on node1, so it is not a problem.

Useful reference blog

https://ssup2.github.io/record/Kubernetes_%EC%84%A4%EC%B9%98_kubeadm_Ubuntu_18.04/

  • A great help in resolving the CNI network configuration