
[Kubernetes] Understanding Kubernetes Networking (Weave Net, Flannel, Calico / CoreDNS, kube-dns / kube-proxy)

by newstellar 2022. 12. 10.

1. Installing a CNI Plugin for Pod Communication

 

Kubernetes uses CNI plugins to set up the Pod network. The kubelet is responsible for executing the plugins, based on the following parameters in its configuration:
- cni-bin-dir: the directory the kubelet probes for plugins on startup
- network-plugin: the network plugin to use from cni-bin-dir; it must match the name reported by a plugin probed from the plugin directory.

 

Below are instructions for installing three representative CNI plugins.

 

If you are preparing for the CKA or CKAD exam, note that installing the CNI plugins introduced here is outside the exam scope. If there are multiple CNI configuration files in the directory, the kubelet uses the configuration file that comes first in alphabetical order.
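This selection rule can be sketched with plain shell. The example uses a temporary directory instead of the real /etc/cni/net.d, and the file names are made up for illustration:

```shell
# Simulate how kubelet picks a CNI config when several files are present.
# In a real cluster the directory would be /etc/cni/net.d (--cni-conf-dir).
confdir=$(mktemp -d)
touch "$confdir/10-flannel.conflist" "$confdir/20-calico.conflist"

# kubelet uses the lexicographically first configuration file:
picked=$(ls "$confdir" | sort | head -n 1)
echo "$picked"   # 10-flannel.conflist

rm -rf "$confdir"
```

The numeric prefixes (10-, 20-) are the conventional way to control this ordering.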

 

  1) Weave Net

You can install Weave Net as Pods simply by applying the .yaml manifest hosted on GitHub with the command below.

kubectl apply -f https://github.com/weaveworks/weave/releases/download/v2.8.1/weave-daemonset-k8s.yaml

 

References:

https://kubernetes.io/docs/concepts/cluster-administration/addons/#networking-and-network-policy

https://www.weave.works/docs/net/latest/kubernetes/kube-addon/

 

 

  2) Flannel 

You can install Flannel as Pods simply by applying the .yaml manifest with the command below.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml

 

References:

https://github.com/flannel-io/flannel#deploying-flannel-manually

   

Note: As of this writing, Flannel does not support Kubernetes network policies.

 

 

  3) Calico

You can install Calico simply by applying the .yaml manifest with the commands below.

curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml

 

References:
https://projectcalico.docs.tigera.io/getting-started/kubernetes/quickstart

 

 


 


2. DNS in Kubernetes

 

Kubernetes uses CoreDNS by default. CoreDNS is a flexible, extensible DNS server that can serve as the Kubernetes cluster DNS.

 

Memory and Pods

In large-scale Kubernetes clusters, CoreDNS's memory usage is predominantly affected by the number of Pods and Services in the cluster. Other factors include the size of the filled DNS answer cache and the rate of queries received (QPS) per CoreDNS instance.

 

The Kubernetes resources for CoreDNS are:

  1. a ServiceAccount named coredns,
  2. ClusterRoles named coredns and kube-dns,
  3. ClusterRoleBindings named coredns and kube-dns,
  4. a Deployment named coredns,
  5. a ConfigMap named coredns, and
  6. a Service named kube-dns.

 

If you inspect the coredns Deployment, you can see that the Corefile, which holds the important CoreDNS configuration, is defined in a ConfigMap.

 

Port 53 is used for DNS resolution.

 

kubernetes cluster.local in-addr.arpa ip6.arpa {
    pods insecure
    fallthrough in-addr.arpa ip6.arpa
    ttl 30
}

 

This is the backend to Kubernetes for cluster.local and the reverse domains.

 

forward . /etc/resolv.conf

 

Queries for out-of-cluster domains are forwarded directly to the upstream DNS servers listed in /etc/resolv.conf.
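Putting the two snippets above together, a default Corefile (as stored in the coredns ConfigMap) looks roughly like the sketch below — the exact set of plugins varies by Kubernetes and CoreDNS version:

```
.:53 {
    errors
    health {
        lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
        ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
```

You can view the real one in your cluster with kubectl -n kube-system get configmap coredns -o yaml.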

 

 

Troubleshooting issues related to CoreDNS

1. If you find CoreDNS Pods in a Pending state, first check that a network plugin is installed.

2. CoreDNS Pods are in a CrashLoopBackOff or Error state.

If you have nodes that are running SELinux with an older version of Docker, you might experience a scenario where the CoreDNS Pods do not start. To solve that, try one of the following options:

a) Upgrade to a newer version of Docker.

b) Disable SELinux.

c) Modify the coredns Deployment to set allowPrivilegeEscalation to true:

 

kubectl -n kube-system get deployment coredns -o yaml | \
  sed 's/allowPrivilegeEscalation: false/allowPrivilegeEscalation: true/g' | \
  kubectl apply -f -

d) Another cause of CrashLoopBackOff is when a CoreDNS Pod deployed in Kubernetes detects a forwarding loop.

 

There are several ways to work around this issue; some are listed here:

 

  • Add the following to your kubelet config YAML: resolvConf: <path-to-your-real-resolv-conf-file>. This tells the kubelet to pass an alternate resolv.conf to Pods. For systems using systemd-resolved, /run/systemd/resolve/resolv.conf is typically the location of the "real" resolv.conf, although this can differ by distribution.
  • Disable the local DNS cache on the host nodes and restore /etc/resolv.conf to the original.
  • A quick fix is to edit your Corefile, replacing forward . /etc/resolv.conf with the IP address of your upstream DNS, for example forward . 8.8.8.8. However, this only fixes the issue for CoreDNS; the kubelet will continue to forward the invalid resolv.conf to all Pods with the default dnsPolicy, leaving them unable to resolve DNS.
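For the first workaround, the setting lives in the kubelet configuration file — typically /var/lib/kubelet/config.yaml on kubeadm clusters, though the path may differ on your distribution:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Point kubelet at the "real" resolv.conf so Pods do not inherit the
# local stub resolver (e.g. systemd-resolved's 127.0.0.53) and CoreDNS
# does not detect a forwarding loop.
resolvConf: /run/systemd/resolve/resolv.conf
```

After changing this, restart the kubelet and recreate the CoreDNS Pods.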

3. If the CoreDNS Pods and the kube-dns Service are working fine, check that the kube-dns Service has valid endpoints:

              kubectl -n kube-system get ep kube-dns

If there are no endpoints for the service, inspect the service and make sure it uses the correct selectors and ports.
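As a sketch of that check, the script below parses the ENDPOINTS column of the command's output. The sample output (and its addresses) is made up for illustration; in a real cluster you would capture the actual kubectl output instead:

```shell
# Simulated output of: kubectl -n kube-system get ep kube-dns
out="NAME       ENDPOINTS                                           AGE
kube-dns   10.244.0.2:53,10.244.0.3:53,10.244.0.4:53 + 3 more...   5d"

# Extract the ENDPOINTS field for kube-dns; "<none>" means no backing Pods.
eps=$(echo "$out" | awk '$1=="kube-dns" {print $2}')

if [ "$eps" = "<none>" ] || [ -z "$eps" ]; then
  echo "kube-dns has no endpoints - check the Service selector and ports"
else
  echo "kube-dns endpoints: $eps"
fi
```

An empty or `<none>` ENDPOINTS column is exactly the symptom described above: the Service selector does not match any running CoreDNS Pods.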

 

 


3. Kube-Proxy

 

kube-proxy is a network proxy that runs on each node in the cluster. kube-proxy maintains network rules on nodes. These network rules allow network communication to the Pods from network sessions inside or outside of the cluster.

 

In a cluster set up with kubeadm, you can find kube-proxy running as a DaemonSet.

 

kube-proxy is responsible for watching Services and the Endpoints associated with each Service. When a client connects to a Service through its virtual IP, kube-proxy routes the traffic to the actual Pods.

 

If you run kubectl describe ds kube-proxy -n kube-system, you can see that the kube-proxy binary runs with the following command inside the kube-proxy container:

 

Command:
  /usr/local/bin/kube-proxy
  --config=/var/lib/kube-proxy/config.conf
  --hostname-override=$(NODE_NAME)

  

So it fetches its configuration from a file, i.e., /var/lib/kube-proxy/config.conf, and the hostname is overridden with the name of the node on which the Pod is running.

  

In the config file we define the clusterCIDR, the kube-proxy mode, the ipvs and iptables options, the bindAddress, the kubeconfig path, etc.
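As a minimal sketch, a kubeadm-generated /var/lib/kube-proxy/config.conf contains roughly the following — the CIDR and paths below are examples and must match your cluster:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
clientConnection:
  # kubeconfig used by kube-proxy to talk to the API server
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
# Pod network CIDR; example value, must match your CNI configuration
clusterCIDR: 10.244.0.0/16
# Proxy mode: "iptables" (default on Linux) or "ipvs"
mode: "iptables"
```

Changing the mode (for example to ipvs) requires editing this ConfigMap and restarting the kube-proxy Pods.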

  

Troubleshooting issues related to kube-proxy

1. Check that the kube-proxy Pod in the kube-system namespace is running.

2. Check the kube-proxy logs.

3. Check that the ConfigMap is correctly defined and that the config file used by the kube-proxy binary is correct.

4. Check that the kubeconfig is defined in the ConfigMap.

5. Check that kube-proxy is running inside the container:

# netstat -plan | grep kube-proxy
tcp        0      0 0.0.0.0:30081      0.0.0.0:*          LISTEN       1/kube-proxy
tcp        0      0 127.0.0.1:10249    0.0.0.0:*          LISTEN       1/kube-proxy
tcp        0      0 172.17.0.12:33706  172.17.0.12:6443   ESTABLISHED  1/kube-proxy
tcp6       0      0 :::10256           :::*               LISTEN       1/kube-proxy

 

 

References:

https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/

https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/

 

 



 
