There are many ways to approach understanding Kubernetes. I did AWS's EKS hands-on, but I only came away with a surface-level picture, since I was just copy-pasting the steps. Trial and error on AWS would also cost money. So, to run my own Kubernetes in a self-hosted environment, I'll try kind on Ubuntu.
https://kind.sigs.k8s.io/
kind can be installed like this:
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.19.0/kind-linux-amd64
chmod +x ./kind   # the downloaded binary is not executable by default
sudo mv kind /usr/local/bin
$ kind version
kind v0.19.0 go1.20.4 linux/amd64
To run kind, Docker has to be installed. I'm not keen on Docker Desktop's GUI, so I installed Docker Engine instead. The steps:
# Set up the repository
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Install Docker Engine on Ubuntu | Docker Documentation
Docker is now running:
# docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
# ps aux | grep docker
root       11330  0.0  1.9 1540556 75436 ?      Ssl  17:18   0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root       11677  0.0  0.0   18912  2752 pts/3  S+   17:43   0:00 grep --color=auto docker
As it stands, only root can use docker, so add the setting that lets a non-root user run it as well.
Linux post-installation steps for Docker Engine | Docker Documentation
In my environment the docker group already exists, so I just add my own user to it:
$ grep docker /etc/group
docker:x:999:
$ sudo usermod -aG docker $USER
Log out once, log back in, and try docker ps → OK.
$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
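As an aside (not something I actually did here), newgrp should also pick up the new group membership in the current shell, without logging out:

$ newgrp docker   # start a subshell with the docker group active
$ docker ps       # should now work without sudo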
The prerequisites should now be in place, so build a cluster:
kind create cluster
Details unclear, but a cluster seems to have been created. Without an architecture diagram it's hard to tell what is what...
$ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.27.1) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
Trying kubectl, the shell tells me it doesn't know that command:
$ kubectl
Command 'kubectl' not found, but can be installed with:
sudo snap install kubectl
Rather than snap, the official docs say to install it like this:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
Install and Set Up kubectl on Linux | Kubernetes
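To make sure the binary on the PATH is the one just installed, a quick sanity check (my addition, not in the original log):

$ which kubectl              # expect /usr/local/bin/kubectl
$ kubectl version --client   # prints the client version only; no cluster needed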
Use kubectl to pull some attribute information about the cluster:
$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:37457
CoreDNS is running at https://127.0.0.1:37457/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Is a cluster something different from a node? I wasn't sure, so I looked it up and organized it below. It's probably not quite this simple, but for now this is the hierarchy as I understand it.
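My rough mental model of the layering (with kind, each "node" is actually a Docker container on the host):

Cluster
└── Node        (in kind: one Docker container per node)
    └── Pod     (one or more containers sharing network and storage)
        └── Container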
Check whether any nodes exist — there is a control-plane node:
$ kubectl get nodes
NAME                 STATUS   ROLES           AGE   VERSION
kind-control-plane   Ready    control-plane   69m   v1.27.1
Showing it in more detail: its IP is 172.18.0.2, apparently...
$ kubectl get node -o wide -A
NAME                 STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   81m   v1.27.1   172.18.0.2    <none>        Debian GNU/Linux 11 (bullseye)   5.19.0-43-generic   containerd://1.6.21
Check the containers: a single container named kind-control-plane is running.
$ docker container ls
CONTAINER ID   IMAGE                  COMMAND                  CREATED             STATUS             PORTS                       NAMES
124ec07c4a6b   kindest/node:v1.27.1   "/usr/local/bin/entr…"   About an hour ago   Up About an hour   127.0.0.1:37457->6443/tcp   kind-control-plane
Entering the container for a look, the following processes are running:
$ docker exec -it 124ec07c4a6b /bin/bash
root@kind-control-plane:/# ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.2  19976 10488 ?        Ss   08:56   0:00 /sbin/init
root          96  0.0  0.1  23748  7656 ?        Ss   08:56   0:00 /lib/systemd/systemd-journald
root         107  1.3  1.2 2317508 48220 ?       Ssl  08:56   0:59 /usr/local/bin/containerd
root         287  0.0  0.2 719564 10400 ?        Sl   08:57   0:02 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 9c5e481444175bcb86ec2
root         288  0.0  0.2 719564  8188 ?        Sl   08:57   0:02 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id f65069a2e609f0cc2c15b
root         289  0.0  0.2 719820 10588 ?        Sl   08:57   0:02 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 4fa90ea821647150a5498
root         290  0.0  0.2 719820 10088 ?        Sl   08:57   0:02 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id a8c49e596037da5b9a70c
65535        366  0.0  0.0    996     4 ?        Ss   08:57   0:00 /pause
65535        370  0.0  0.0    996     4 ?        Ss   08:57   0:00 /pause
65535        378  0.0  0.0    996     4 ?        Ss   08:57   0:00 /pause
65535        382  0.0  0.0    996     4 ?        Ss   08:57   0:00 /pause
root         456  0.7  1.1 766088 46176 ?        Ssl  08:57   0:31 kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --author
root         504  3.2  2.0 781424 79576 ?        Ssl  08:57   2:23 kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/et
root         513  7.9  7.0 996692 278228 ?       Ssl  08:57   5:56 kube-apiserver --advertise-address=172.18.0.2 --allow-privileged=true --authorizat
root         596  4.2  1.3 11214208 54792 ?      Ssl  08:57   3:09 etcd --advertise-client-urls=https://172.18.0.2:2379 --cert-file=/etc/kubernetes/p
root         652  4.3  2.0 1666968 80336 ?       Ssl  08:57   3:13 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --k
root         753  0.0  0.2 719820  9352 ?        Sl   08:57   0:02 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 2bab911f6fd5a44a6ab6c
root         780  0.0  0.2 719820  9396 ?        Sl   08:57   0:02 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 6804269058692ae1933cb
65535        799  0.0  0.0    996     4 ?        Ss   08:57   0:00 /pause
65535        808  0.0  0.0    996     4 ?        Ss   08:57   0:00 /pause
root         852  0.0  1.2 764852 48032 ?        Ssl  08:57   0:01 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-over
root         972  0.0  0.6 741612 25752 ?        Ssl  08:57   0:01 /bin/kindnetd
root        1164  0.0  0.2 719820  9608 ?        Sl   08:57   0:02 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id e5c0c76c32d9d2803f908
root        1176  0.0  0.2 719820  9392 ?        Sl   08:57   0:02 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id a1e4e4cc76848d7b05719
root        1181  0.0  0.2 719564  8496 ?        Sl   08:57   0:02 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id dba7c5f68f685dff78bd4
65535       1231  0.0  0.0    996     4 ?        Ss   08:57   0:00 /pause
65535       1238  0.0  0.0    996     4 ?        Ss   08:57   0:00 /pause
65535       1240  0.0  0.0    996     4 ?        Ss   08:57   0:00 /pause
root        1361  0.4  1.2 764912 49152 ?        Ssl  08:57   0:18 /coredns -conf /etc/coredns/Corefile
root        1374  0.4  1.2 764912 48896 ?        Ssl  08:57   0:18 /coredns -conf /etc/coredns/Corefile
root        1464  0.0  0.7 740380 28504 ?        Ssl  08:57   0:04 local-path-provisioner --debug start --helper-image docker.io/kindest/local-path-h
root        2260  0.0  0.0   4028  3332 pts/1    Ss   10:11   0:00 /bin/bash
root        2266  0.0  0.0   6756  2884 pts/1    R+   10:11   0:00 ps aux
$ kubectl get all,node -A -o wide
NAMESPACE            NAME                                              READY   STATUS    RESTARTS   AGE    IP           NODE                 NOMINATED NODE   READINESS GATES
kube-system          pod/coredns-5d78c9869d-466kw                      1/1     Running   0          124m   10.244.0.3   kind-control-plane   <none>           <none>
kube-system          pod/coredns-5d78c9869d-47mmp                      1/1     Running   0          124m   10.244.0.4   kind-control-plane   <none>           <none>
kube-system          pod/etcd-kind-control-plane                       1/1     Running   0          124m   172.18.0.2   kind-control-plane   <none>           <none>
kube-system          pod/kindnet-rhtzq                                 1/1     Running   0          124m   172.18.0.2   kind-control-plane   <none>           <none>
kube-system          pod/kube-apiserver-kind-control-plane             1/1     Running   0          124m   172.18.0.2   kind-control-plane   <none>           <none>
kube-system          pod/kube-controller-manager-kind-control-plane    1/1     Running   0          124m   172.18.0.2   kind-control-plane   <none>           <none>
kube-system          pod/kube-proxy-pfrww                              1/1     Running   0          124m   172.18.0.2   kind-control-plane   <none>           <none>
kube-system          pod/kube-scheduler-kind-control-plane             1/1     Running   0          124m   172.18.0.2   kind-control-plane   <none>           <none>
local-path-storage   pod/local-path-provisioner-6bc4bddd6b-6mc2v       1/1     Running   0          124m   10.244.0.2   kind-control-plane   <none>           <none>

NAMESPACE     NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE    SELECTOR
default       service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP                  124m   <none>
kube-system   service/kube-dns     ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   124m   k8s-app=kube-dns

NAMESPACE     NAME                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE    CONTAINERS    IMAGES                                          SELECTOR
kube-system   daemonset.apps/kindnet      1         1         1       1            1           kubernetes.io/os=linux   124m   kindnet-cni   docker.io/kindest/kindnetd:v20230511-dc714da8   app=kindnet
kube-system   daemonset.apps/kube-proxy   1         1         1       1            1           kubernetes.io/os=linux   124m   kube-proxy    registry.k8s.io/kube-proxy:v1.27.1              k8s-app=kube-proxy

NAMESPACE            NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS               IMAGES                                                         SELECTOR
kube-system          deployment.apps/coredns                  2/2     2            2           124m   coredns                  registry.k8s.io/coredns/coredns:v1.10.1                        k8s-app=kube-dns
local-path-storage   deployment.apps/local-path-provisioner   1/1     1            1           124m   local-path-provisioner   docker.io/kindest/local-path-provisioner:v20230511-dc714da8    app=local-path-provisioner

NAMESPACE            NAME                                                DESIRED   CURRENT   READY   AGE    CONTAINERS               IMAGES                                                         SELECTOR
kube-system          replicaset.apps/coredns-5d78c9869d                  2         2         2       124m   coredns                  registry.k8s.io/coredns/coredns:v1.10.1                        k8s-app=kube-dns,pod-template-hash=5d78c9869d
local-path-storage   replicaset.apps/local-path-provisioner-6bc4bddd6b   1         1         1       124m   local-path-provisioner   docker.io/kindest/local-path-provisioner:v20230511-dc714da8    app=local-path-provisioner,pod-template-hash=6bc4bddd6b

NAMESPACE   NAME                      STATUS   ROLES           AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
            node/kind-control-plane   Ready    control-plane   124m   v1.27.1   172.18.0.2    <none>        Debian GNU/Linux 11 (bullseye)   5.19.0-43-generic   containerd://1.6.21
Which container runs in which pod, and on which node? That isn't immediately obvious from these commands alone. There must be a way to see it (a sketch follows after the next output).
~/tech/k8s$ kubectl get pods -n kube-system
NAME                                         READY   STATUS    RESTARTS   AGE
coredns-5d78c9869d-466kw                     1/1     Running   0          130m
coredns-5d78c9869d-47mmp                     1/1     Running   0          130m
etcd-kind-control-plane                      1/1     Running   0          130m
kindnet-rhtzq                                1/1     Running   0          130m
kube-apiserver-kind-control-plane            1/1     Running   0          130m
kube-controller-manager-kind-control-plane   1/1     Running   0          130m
kube-proxy-pfrww                             1/1     Running   0          130m
kube-scheduler-kind-control-plane            1/1     Running   0          130m
sumi@MBP01:~/tech/k8s$ kubectl get nodes
NAME                 STATUS   ROLES           AGE    VERSION
kind-control-plane   Ready    control-plane   130m   v1.27.1
sumi@MBP01:~/tech/k8s$ kubectl get ns
NAME                 STATUS   AGE
default              Active   130m
kube-node-lease      Active   130m
kube-public          Active   130m
kube-system          Active   130m
local-path-storage   Active   130m
sumi@MBP01:~/tech/k8s$ kubectl get nodes
NAME                 STATUS   ROLES           AGE    VERSION
kind-control-plane   Ready    control-plane   131m   v1.27.1
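For the pod-to-node mapping, -o wide on pods seems to be the quickest way, and the containers inside a pod can be pulled out with jsonpath (a sketch; the pod name is taken from the output above):

# which node is each pod on?
$ kubectl get pods -A -o wide

# which containers are inside a given pod?
$ kubectl get pod etcd-kind-control-plane -n kube-system \
    -o jsonpath='{.spec.containers[*].name}'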
Without worker nodes there's nowhere for real work to run, so let's add some nodes. (Already straying from the guidebook...)
There seems to be a way to fetch the join credentials and bolt nodes on by hand, but I'm not sure that would stay consistent with kind.
The kind documentation says to bring the cluster up as multi-node from the start, so I'll follow that.
kind – Quick Start
$ cat kind_multi_config.yaml
# three node (two workers) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
Recreate the cluster with the multi-node configuration:
# Delete the existing cluster
$ kind get clusters
kind
$ kind delete cluster --name kind
Deleting cluster "kind" ...
Deleted nodes: ["kind-control-plane"]
$ kind get clusters
No kind clusters found.

# Create the multi-node cluster
$ kind create cluster --config ./kind_multi_config.yaml
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.27.1) 🖼
 ✓ Preparing nodes 📦 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Not sure what to do next? Check out https://kind.sigs.k8s.io/docs/user/quick-start/

# Verify
$ kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:34039
CoreDNS is running at https://127.0.0.1:34039/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
$ kubectl get nodes
NAME                 STATUS   ROLES           AGE   VERSION
kind-control-plane   Ready    control-plane   42s   v1.27.1
kind-worker          Ready    <none>          23s   v1.27.1
kind-worker2         Ready    <none>          22s   v1.27.1
Without worrying about the details, run a Hello World:
$ kubectl run hello-world --image=hello-world -it --restart=Never

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/
It's not clear which node/pod it ran on, but run it did. Since the hello-world image exits immediately, the container no longer shows up in docker ps.*1
$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                       NAMES
2ff9115d2766   kindest/node:v1.27.1   "/usr/local/bin/entr…"   11 minutes ago   Up 11 minutes                               kind-worker2
3c35738e01dc   kindest/node:v1.27.1   "/usr/local/bin/entr…"   11 minutes ago   Up 11 minutes   127.0.0.1:34039->6443/tcp   kind-control-plane
097e18ca5f53   kindest/node:v1.27.1   "/usr/local/bin/entr…"   11 minutes ago   Up 11 minutes                               kind-worker
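Even after the container has exited, the pod object is still around, so where it ran can be checked from kubectl rather than docker (a quick check I'd add here):

$ kubectl get pod hello-world -o wide   # the NODE column shows where it ran
$ kubectl logs hello-world              # the "Hello from Docker!" output again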
Following the guidebook, run nginx:
$ cat nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest

$ kubectl apply -f nginx-pod.yaml
pod/nginx created
$ kubectl get pods
NAME          READY   STATUS      RESTARTS   AGE
hello-world   0/1     Completed   0          26m
nginx         1/1     Running     0          85s

$ kubectl get pods nginx -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP           NODE           NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          3m39s   10.244.2.2   kind-worker2   <none>           <none>

$ docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED          STATUS          PORTS                       NAMES
2ff9115d2766   kindest/node:v1.27.1   "/usr/local/bin/entr…"   32 minutes ago   Up 32 minutes                               kind-worker2
3c35738e01dc   kindest/node:v1.27.1   "/usr/local/bin/entr…"   32 minutes ago   Up 32 minutes   127.0.0.1:34039->6443/tcp   kind-control-plane
097e18ca5f53   kindest/node:v1.27.1   "/usr/local/bin/entr…"   32 minutes ago   Up 32 minutes                               kind-worker
So nginx is running inside one of the nodes. Details still unclear.
Enter the container (pod?). Even the ps command isn't there. With nothing else to do, look at the logs and so on.
$ kubectl exec -it nginx /bin/bash
root@nginx:/# ps
bash: ps: command not found
root@nginx:/var/log/nginx# pwd
/var/log/nginx
root@nginx:/var/log/nginx# ls -la
total 8
drwxr-xr-x 2 root root 4096 May 24 22:43 .
drwxr-xr-x 1 root root 4096 May 24 22:43 ..
lrwxrwxrwx 1 root root   11 May 24 22:43 access.log -> /dev/stdout
lrwxrwxrwx 1 root root   11 May 24 22:43 error.log -> /dev/stderr
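Since the nginx image has no ps, /proc can stand in for it (a workaround sketch, not from the original session):

root@nginx:/# ls -d /proc/[0-9]*                         # one directory per PID
root@nginx:/# cat /proc/1/cmdline | tr '\0' ' '; echo    # command line of PID 1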
curl does seem to be installed, so send a GET to port 80 on the local host. There's a response:
root@nginx:/# curl localhost:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
$ kubectl describe pod nginx
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             kind-worker2/172.18.0.4
Start Time:       Sat, 10 Jun 2023 21:18:46 +0900
Labels:           <none>
Annotations:      <none>
Status:           Running
IP:               10.244.2.2
IPs:
  IP:  10.244.2.2
Containers:
  nginx:
    Container ID:   containerd://8c03bf2617226ac7103af9d471a40dd6f879b85c1f72aa0d23237f3e092889e6
    Image:          nginx:latest
    Image ID:       docker.io/library/nginx@sha256:af296b188c7b7df99ba960ca614439c99cb7cf252ed7bbc23e90cfda59092305
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 10 Jun 2023 21:18:58 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zq4db (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-zq4db:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
From the describe output above, nginx is running on Node kind-worker2. The processes inside kind-worker2 are listed below, and nginx is indeed there. How pods and containers are partitioned inside the node isn't obvious from this, though (see the crictl sketch after the listing).
$ docker exec -it kind-worker2 /bin/bash
root@kind-worker2:/# ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.1  18472  7344 ?        Ss   11:47   0:00 /sbin/init
root          96  0.0  0.1  25684  4624 ?        Ss   11:47   0:00 /lib/systemd/systemd-journald
root         107  0.9  1.0 1947824 41888 ?       Ssl  11:47   0:27 /usr/local/bin/containerd
root         215  2.1  1.9 1592724 75320 ?       Ssl  11:48   0:59 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf
root         261  0.0  0.2 719564  9264 ?        Sl   11:48   0:01 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 86328315fa7606e84
root         269  0.0  0.2 719820  9724 ?        Sl   11:48   0:01 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 0530f982b749ce497
65535        309  0.0  0.0    996     4 ?        Ss   11:48   0:00 /pause
65535        316  0.0  0.0    996     4 ?        Ss   11:48   0:00 /pause
root         364  0.0  1.2 764852 47688 ?        Ssl  11:48   0:00 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-
root         441  0.0  0.6 742124 26824 ?        Ssl  11:48   0:01 /bin/kindnetd
root         917  0.0  0.2 719820  9348 ?        Sl   12:18   0:00 /usr/local/bin/containerd-shim-runc-v2 -namespace k8s.io -id 1fe93ea0ce9bd4f15
65535        939  0.0  0.0    996     4 ?        Ss   12:18   0:00 /pause
root         992  0.0  0.1   9736  6516 ?        Ss   12:18   0:00 nginx: master process nginx -g daemon off;
_rpc        1032  0.0  0.0  10132  2616 ?        S    12:18   0:00 nginx: worker process
_rpc        1033  0.0  0.0  10132  2616 ?        S    12:18   0:00 nginx: worker process
_rpc        1034  0.0  0.0  10132  2616 ?        S    12:18   0:00 nginx: worker process
_rpc        1035  0.0  0.0  10132  2616 ?        S    12:18   0:00 nginx: worker process
root        1280  0.0  0.0   4028  3200 pts/1    Ss   12:35   0:00 /bin/bash
root        1286  0.0  0.0   6756  2884 pts/1    R+   12:35   0:00 ps aux
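To see how the node carves itself up into pods and containers, the kind node image appears to ship with crictl, which talks to containerd directly (treat this as a sketch under that assumption):

root@kind-worker2:/# crictl pods   # pods managed by the kubelet on this node
root@kind-worker2:/# crictl ps     # containers, with the pod each one belongs to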
I wanted to check which ports are open, but there's no netstat either...
root@kind-worker2:/# netstat -na | grep -i listen
bash: netstat: command not found
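netstat isn't there, but /proc/net/tcp always is; listening sockets have state 0A, so something like this works as a crude substitute (my workaround; the addresses come out as hex ip:port):

root@kind-worker2:/# awk '$4 == "0A" {print $2}' /proc/net/tcp /proc/net/tcp6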
Without a LoadBalancer, nginx (probably) can't be reached from outside the node. How an LB is provided depends on how the Kubernetes environment itself is provided.
For kind, you (apparently) need to set it up following kind's own LoadBalancer guide:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
The kind documentation says the above is the way, so let's just run it and see:
$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml
namespace/metallb-system created
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
secret/webhook-server-cert created
service/webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
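If I remember the kind LoadBalancer guide correctly, it also has you wait for the MetalLB pods to become ready before applying the address pool; roughly:

$ kubectl wait --namespace metallb-system \
    --for=condition=ready pod \
    --selector=app=metallb \
    --timeout=90s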
Next, the docs say to check the address range of the Docker network:
$ docker network inspect -f '{{.IPAM.Config}}' kind
[{172.18.0.0/16 172.18.0.1 map[]} {fc00:f853:ccd:e793::/64  map[]}]
Apparently you carve the LoadBalancer IP range out of that /16 segment. An example YAML:
$ cat metallb-config.yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.200-172.18.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system
Applied it:
$ kubectl apply -f metallb-config.yaml
ipaddresspool.metallb.io/example created
l2advertisement.metallb.io/empty created
Now build the LB (a Service of type LoadBalancer):
$ cat > lb-config.yaml
apiVersion: v1
kind: Service
metadata:
  name: ngx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - port: 80

$ kubectl apply -f lb-config.yaml
service/ngx-service created

$ kubectl get svc
NAME          TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
kubernetes    ClusterIP      10.96.0.1     <none>           443/TCP        78m
ngx-service   LoadBalancer   10.96.87.71   172.18.255.200   80:31472/TCP   18s

$ kubectl describe svc ngx-service
Name:                     ngx-service
Namespace:                default
Labels:                   <none>
Annotations:              <none>
Selector:                 app=nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.87.71
IPs:                      10.96.87.71
LoadBalancer Ingress:     172.18.255.200
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31472/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason       Age   From                Message
  ----    ------       ----  ----                -------
  Normal  IPAllocated  43s   metallb-controller  Assigned IP ["172.18.255.200"]
It looks like it's saying the LB's IP is 172.18.255.200. I try curl, but there's no response:
sumi@MBP01:~/tech/k8s$ curl -v http://172.18.255.200:80
*   Trying 172.18.255.200:80...
* connect to 172.18.255.200 port 80 failed: No route to host
* Failed to connect to 172.18.255.200 port 80 after 3069 ms: No route to host
* Closing connection 0
curl: (7) Failed to connect to 172.18.255.200 port 80 after 3069 ms: No route to host
sumi@MBP01:~/tech/k8s$ ping 172.18.255.200
PING 172.18.255.200 (172.18.255.200) 56(84) bytes of data.
From 172.18.0.1 icmp_seq=1 Destination Host Unreachable
From 172.18.0.1 icmp_seq=2 Destination Host Unreachable
From 172.18.0.1 icmp_seq=3 Destination Host Unreachable
$ kubectl get nodes -o wide
NAME                 STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION      CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane   84m   v1.27.1   172.18.0.2    <none>        Debian GNU/Linux 11 (bullseye)   5.19.0-43-generic   containerd://1.6.21
kind-worker          Ready    <none>          84m   v1.27.1   172.18.0.3    <none>        Debian GNU/Linux 11 (bullseye)   5.19.0-43-generic   containerd://1.6.21
kind-worker2         Ready    <none>          83m   v1.27.1   172.18.0.4    <none>        Debian GNU/Linux 11 (bullseye)   5.19.0-43-generic   containerd://1.6.21

$ kubectl get pod -o wide
NAME          READY   STATUS      RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
hello-world   0/1     Completed   0          79m   10.244.1.2   kind-worker    <none>           <none>
nginx         1/1     Running     0          53m   10.244.2.2   kind-worker2   <none>           <none>
The LB is configured, but Endpoints is <none>, so it probably hasn't identified anything to forward traffic to.
Endpoints: <none>
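My suspicion: the describe output for the pod earlier showed Labels: <none>, while the Service selects app=nginx, so the selector matches nothing and the endpoints stay empty. If that's right, labeling the pod should populate them (untested here):

$ kubectl label pod nginx app=nginx
$ kubectl get endpoints ngx-service   # should now list 10.244.2.2:80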
The network situation on the Ubuntu host:
$ ifconfig -a
br-d2e7a2e96ce4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.18.0.1  netmask 255.255.0.0  broadcast 172.18.255.255
        inet6 fe80::42:70ff:fe90:33fc  prefixlen 64  scopeid 0x20<link>
        inet6 fc00:f853:ccd:e793::1  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::1  prefixlen 64  scopeid 0x20<link>
        ether 02:42:70:90:33:fc  txqueuelen 0  (Ethernet)
        RX packets 75030  bytes 9887122 (9.8 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 161774  bytes 232410124 (232.4 MB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        inet6 fe80::42:faff:fe95:6b65  prefixlen 64  scopeid 0x20<link>
        ether 02:42:fa:95:6b:65  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 306 (306.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

(snip)
A virtual NIC (if that's the right term), br-d2e7a2e96ce4, has been created with 172.18.0.1/16. Since the LB was assigned 172.18.255.200, my guess is that something intends to listen on 172.18.255.200:80.
*1: This may be slightly off... hello-world runs as a pod inside a node that is itself a Docker container, so it never shows up in docker ps (-a)?