Differences

The differences between the selected revision and the current version of the page are shown below.

kubernetes:gyakorlatok [2025/09/17 05:44] riba.zoltan → kubernetes:gyakorlatok [2025/10/07 15:56] (current) riba.zoltan
Line 1: Line 1:
 +===== Installing a Kubernetes cluster =====
 +
 +The installed environment consists of three virtual machines:
 +
 +*  *kube01* (control plane): AlmaLinux 10 (x86_64), minimal install (VCPU: 2, RAM: 3 GB, DISK: 20 GB)
 +*  *kube02* (worker): AlmaLinux 10 (x86_64), minimal install (VCPU: 2, RAM: 4 GB, DISK: 20 GB)
 +*  *kube03* (worker): AlmaLinux 10 (x86_64), minimal install (VCPU: 2, RAM: 4 GB, DISK: 20 GB)
 +
 +Do not allocate swap space during installation.
 +
 +==== Post-installation steps ====
 +
 +The following commands must be run on every machine in the cluster.
 +
 +Switching SELinux to permissive mode
 +
 +<code>
 +# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
 +
 +# setenforce 0
 +</code>
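 +
 +As a quick optional check (not part of the original steps), getenforce should now report the permissive mode:
 +
 +<code>
 +# getenforce
 +Permissive
 +</code>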
 +
 +Disabling and stopping the firewall service
 +
 +<code>
 +# systemctl disable firewalld
 +Removed '/etc/systemd/system/multi-user.target.wants/firewalld.service'.
 +Removed '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'.
 +
 +# systemctl stop firewalld
 +</code>
 +
 +Modifying the hosts file
 +
 +<code>
 +# cat > /etc/hosts <<'EOF'
 +127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
 +::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
 +
 +192.168.110.161 kube01
 +192.168.110.162 kube02
 +192.168.110.163 kube03
 +EOF
 +</code>
 +
 +Loading kernel modules
 +
 +<code>
 +# cat > /etc/modules-load.d/01-kubernetes.conf <<'EOF'
 +br_netfilter
 +overlay
 +EOF
 +
 +# modprobe br_netfilter
 +
 +# modprobe overlay
 +</code>
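 +
 +Optionally, the loaded modules can be listed; both overlay and br_netfilter should appear in the output:
 +
 +<code>
 +# lsmod | grep -E 'overlay|br_netfilter'
 +</code>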
 +
 +Adjusting kernel network parameters
 +
 +<code>
 +# cat > /etc/sysctl.d/01-kubernetes.conf <<'EOF' 
 +net.ipv4.ip_forward = 1
 +net.bridge.bridge-nf-call-ip6tables = 1
 +net.bridge.bridge-nf-call-iptables = 1
 +EOF
 +
 +# sysctl --system
 +</code>
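 +
 +Optionally, the applied values can be read back; each of the three parameters should report 1:
 +
 +<code>
 +# sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
 +</code>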
 +
 +Disabling swap
 +
 +<code>
 +# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
 +
 +# systemctl daemon-reload
 +
 +# swapoff -a
 +</code>
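 +
 +A quick optional way to confirm that no swap is active (swapon prints nothing when no swap device is in use):
 +
 +<code>
 +# swapon --show
 +
 +# free -h | grep -i swap
 +</code>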
 +
 +Installing the containerd repository
 +
 +<code>
 +# curl -L -o /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/rhel/docker-ce.repo
 +</code>
 +
 +Creating the Kubernetes repository
 +
 +<code>
 +# cat > /etc/yum.repos.d/kubernetes.repo <<'EOF'
 +[kubernetes]
 +name=Kubernetes
 +baseurl=https://pkgs.k8s.io/core:/stable:/v1.34/rpm/
 +enabled=1
 +gpgcheck=1
 +gpgkey=https://pkgs.k8s.io/core:/stable:/v1.34/rpm/repodata/repomd.xml.key
 +exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
 +EOF
 +</code>
 +
 +Installing containerd
 +
 +<code>
 +# dnf install containerd
 +</code>
 +
 +Backing up the containerd configuration
 +
 +<code>
 +# cp -a /etc/containerd/config.toml /etc/containerd/config.toml.orig
 +</code>
 +
 +Generating the containerd configuration
 +
 +<code>
 +# containerd config default > /etc/containerd/config.toml
 +</code>
 +
 +Modifying the containerd configuration
 +
 +<code>
 +# grep pause:3 /etc/containerd/config.toml 
 +    sandbox_image = "registry.k8s.io/pause:3.8"
 +
 +# sed -i 's/pause:3.8/pause:3.10.1/' /etc/containerd/config.toml
 +
 +# grep pause:3 /etc/containerd/config.toml 
 +    sandbox_image = "registry.k8s.io/pause:3.10.1"
 +
 +# grep SystemdCgroup /etc/containerd/config.toml 
 +            SystemdCgroup = false
 +
 +# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
 +
 +# grep SystemdCgroup /etc/containerd/config.toml 
 +            SystemdCgroup = true
 +</code>
 +
 +Enabling and starting containerd
 +
 +<code>
 +# systemctl --now enable containerd
 +Created symlink '/etc/systemd/system/multi-user.target.wants/containerd.service' → '/usr/lib/systemd/system/containerd.service'.
 +</code>
 +
 +Installing the packages required for the Kubernetes cluster
 +
 +<code>
 +# dnf --disableexcludes=kubernetes install kubeadm kubectl kubelet
 +</code>
 +
 +Enabling the kubelet service
 +
 +<code>
 +# systemctl enable kubelet
 +</code>
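 +
 +Optionally, verify that the installed tool versions match the repository version configured above (v1.34.x is expected here):
 +
 +<code>
 +# kubeadm version -o short
 +
 +# kubelet --version
 +
 +# kubectl version --client
 +</code>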
 +
 +==== Configuring the control plane ====
 +
 +The following commands must be run on the control plane machine.
 +
 +Initializing the cluster
 +
 +<code>
 +# kubeadm init --pod-network-cidr=10.244.0.0/16
 +[init] Using Kubernetes version: v1.34.1
 +[preflight] Running pre-flight checks
 +[preflight] Pulling images required for setting up a Kubernetes cluster
 +[preflight] This might take a minute or two, depending on the speed of your internet connection
 +[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
 +[certs] Using certificateDir folder "/etc/kubernetes/pki"
 +[certs] Generating "ca" certificate and key
 +[certs] Generating "apiserver" certificate and key
 +[certs] apiserver serving cert is signed for DNS names [kube01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.110.161]
 +[certs] Generating "apiserver-kubelet-client" certificate and key
 +[certs] Generating "front-proxy-ca" certificate and key
 +[certs] Generating "front-proxy-client" certificate and key
 +[certs] Generating "etcd/ca" certificate and key
 +[certs] Generating "etcd/server" certificate and key
 +[certs] etcd/server serving cert is signed for DNS names [kube01 localhost] and IPs [192.168.110.161 127.0.0.1 ::1]
 +[certs] Generating "etcd/peer" certificate and key
 +[certs] etcd/peer serving cert is signed for DNS names [kube01 localhost] and IPs [192.168.110.161 127.0.0.1 ::1]
 +[certs] Generating "etcd/healthcheck-client" certificate and key
 +[certs] Generating "apiserver-etcd-client" certificate and key
 +[certs] Generating "sa" key and public key
 +[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
 +[kubeconfig] Writing "admin.conf" kubeconfig file
 +[kubeconfig] Writing "super-admin.conf" kubeconfig file
 +[kubeconfig] Writing "kubelet.conf" kubeconfig file
 +[kubeconfig] Writing "controller-manager.conf" kubeconfig file
 +[kubeconfig] Writing "scheduler.conf" kubeconfig file
 +[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
 +[control-plane] Using manifest folder "/etc/kubernetes/manifests"
 +[control-plane] Creating static Pod manifest for "kube-apiserver"
 +[control-plane] Creating static Pod manifest for "kube-controller-manager"
 +[control-plane] Creating static Pod manifest for "kube-scheduler"
 +[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
 +[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
 +[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
 +[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
 +[kubelet-start] Starting the kubelet
 +[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
 +[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
 +[kubelet-check] The kubelet is healthy after 1.50097886s
 +[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
 +[control-plane-check] Checking kube-apiserver at https://192.168.110.161:6443/livez
 +[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
 +[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
 +[control-plane-check] kube-controller-manager is healthy after 3.507200493s
 +[control-plane-check] kube-scheduler is healthy after 4.632817046s
 +[control-plane-check] kube-apiserver is healthy after 11.004003859s
 +[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
 +[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
 +[upload-certs] Skipping phase. Please see --upload-certs
 +[mark-control-plane] Marking the node kube01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
 +[mark-control-plane] Marking the node kube01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
 +[bootstrap-token] Using token: is490j.gmk4mrbp5aum3q8y
 +[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
 +[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
 +[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
 +[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
 +[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
 +[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
 +[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
 +[addons] Applied essential addon: CoreDNS
 +[addons] Applied essential addon: kube-proxy
 +
 +Your Kubernetes control-plane has initialized successfully!
 +
 +To start using your cluster, you need to run the following as a regular user:
 +
 +  mkdir -p $HOME/.kube
 +  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 +  sudo chown $(id -u):$(id -g) $HOME/.kube/config
 +
 +Alternatively, if you are the root user, you can run:
 +
 +  export KUBECONFIG=/etc/kubernetes/admin.conf
 +
 +You should now deploy a pod network to the cluster.
 +Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
 +  https://kubernetes.io/docs/concepts/cluster-administration/addons/
 +
 +Then you can join any number of worker nodes by running the following on each as root:
 +kubeadm join 192.168.110.161:6443 --token is490j.gmk4mrbp5aum3q8y --discovery-token-ca-cert-hash sha256:2454cd136d590b724210551fcb95ac360a2761f18a43729fe043eaf8dc139027
 +</code>
 +
 +Setting up the configuration required to connect to the cluster
 +
 +<code>
 +# mkdir -p $HOME/.kube
 +
 +# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 +
 +# sudo chown $(id -u):$(id -g) $HOME/.kube/config
 +</code>
 +
 +Verifying operation
 +
 +<code>
 +# kubectl get nodes
 +NAME     STATUS     ROLES           AGE     VERSION
 +kube01   NotReady   control-plane   11m     v1.34.1
 +</code>
 +
 +Creating the pod network (Flannel)
 +
 +<code>
 +# kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
 +namespace/kube-flannel created
 +serviceaccount/flannel created
 +clusterrole.rbac.authorization.k8s.io/flannel created
 +clusterrolebinding.rbac.authorization.k8s.io/flannel created
 +configmap/kube-flannel-cfg created
 +daemonset.apps/kube-flannel-ds created
 +</code>
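 +
 +The node only becomes Ready once the Flannel DaemonSet pod is running on it; as an optional check, its state can be followed with:
 +
 +<code>
 +# kubectl get pods -n kube-flannel -o wide
 +</code>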
 +
 +Checking again after a short while
 +
 +<code>
 +# kubectl get nodes
 +NAME     STATUS   ROLES           AGE    VERSION
 +kube01   Ready    control-plane   2m     v1.34.1
 +</code>
 +
 +==== Joining the worker machines ====
 +
 +The following command must be run on the worker machines.
 +
 +<code>
 +# kubeadm join 192.168.110.161:6443 --token is490j.gmk4mrbp5aum3q8y --discovery-token-ca-cert-hash sha256:2454cd136d590b724210551fcb95ac360a2761f18a43729fe043eaf8dc139027
 +[preflight] Running pre-flight checks
 +[preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
 +[preflight] Use 'kubeadm init phase upload-config kubeadm --config your-config-file' to re-upload it.
 +[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
 +[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
 +[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
 +[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
 +[kubelet-start] Starting the kubelet
 +[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
 +[kubelet-check] The kubelet is healthy after 1.004029985s
 +[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
 +
 +This node has joined the cluster:
 +* Certificate signing request was sent to apiserver and a response was received.
 +* The Kubelet was informed of the new secure connection details.
 +
 +Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
 +</code>
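 +
 +The bootstrap token used above expires after 24 hours by default. If a worker needs to be added later, a fresh join command can be generated on the control plane with:
 +
 +<code>
 +# kubeadm token create --print-join-command
 +</code>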
 +
 +==== Verifying the cluster ====
 +
 +The cluster check is performed on the control plane machine.
 +
 +<code>
 +# kubectl get nodes
 +NAME     STATUS   ROLES           AGE    VERSION
 +kube01   Ready    control-plane   12m    v1.34.1
 +kube02   Ready    <none>          4m5s   v1.34.1
 +kube03   Ready    <none>          112s   v1.34.1
 +</code>
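 +
 +Besides the node list, it is also worth confirming that all system pods (CoreDNS, kube-proxy, Flannel, etc.) are Running:
 +
 +<code>
 +# kubectl get pods -A -o wide
 +</code>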
 +
 ====== Pod ======
 +
 Displaying the Pod resource documentation
 +
 <code>
 # kubectl explain pod
Line 8: Line 332:
 
 <code>
-# kubectl run nginx-pod --image=registry.r-l.hu/library/nginx:latest --restart=Never
 +# kubectl run nginx-pod --image=nginx:latest --restart=Never
 </code>
 
Line 22: Line 346:
   containers:
     - name: nginx
-      image: registry.r-l.hu/library/nginx:latest
 +      image: nginx:latest
 EOF
 </code>
Line 59: Line 383:
   nginx-pod:
     Container ID:   containerd://406b1f5856e2bfaa9e91d391078458c56e64c2f9d068f9b65dbab4d3c0b44e8b
-    Image:          registry.r-l.hu/library/nginx:latest
 +    Image:          nginx:latest
-    Image ID:       registry.r-l.hu/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e
 +    Image ID:       nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e
     Port:           <none>
     Host Port:      <none>
Line 92: Line 416:
   ----    ------     ----  ----               -------
   Normal  Scheduled  53s   default-scheduler  Successfully assigned default/nginx-pod to worker01.r-logic.eu
-  Normal  Pulling    52s   kubelet            Pulling image "registry.r-l.hu/library/nginx:latest"
 +  Normal  Pulling    52s   kubelet            Pulling image "nginx:latest"
-  Normal  Pulled     46s   kubelet            Successfully pulled image "registry.r-l.hu/library/nginx:latest" in 5.622s (5.622s including waiting). Image size: 72319182 bytes.
 +  Normal  Pulled     46s   kubelet            Successfully pulled image "nginx:latest" in 5.622s (5.622s including waiting). Image size: 72319182 bytes.
   Normal  Created    46s   kubelet            Created container: nginx-pod
   Normal  Started    46s   kubelet            Started container nginx-pod
Line 137: Line 461:
   containers:
     - name: nginx
-      image: registry.r-l.hu/library/nginx:1.25
 +      image: nginx:1.25
       ports:
         - containerPort: 80
Line 177: Line 501:
 
 <code>
-# kubectl run debug-pod --rm -it --image=registry.r-l.hu/library/busybox:1.36 --restart=Never -- sh
 +# kubectl run debug-pod --rm -it --image=busybox:1.36 --restart=Never -- sh
 </code>
 
Line 188: Line 512:
 
 <code>
-# kubectl create deployment nginx-deployment --image=registry.r-l.hu/library/nginx:latest && kubectl wait --for=condition=Available deployment/nginx-deployment --timeout=90s
 +# kubectl create deployment nginx-deployment --image=nginx:latest && kubectl wait --for=condition=Available deployment/nginx-deployment --timeout=90s
 deployment.apps/nginx-deployment created
 deployment.apps/nginx-deployment condition met
Line 224: Line 548:
       containers:
         - name: nginx
-          image: registry.r-l.hu/library/nginx:1.25
 +          image: nginx:1.25
           ports:
             - containerPort: 80
Line 239: Line 563:
 deployment.apps/nginx-deployment annotated
 
-# kubectl set image deployment/nginx-deployment nginx=registry.r-l.hu/library/nginx:1.26
 +# kubectl set image deployment/nginx-deployment nginx=nginx:1.26
 deployment.apps/nginx-deployment image updated
 
Line 262: Line 586:
 deployment.apps/nginx-deployment annotated
 
-# kubectl set image deployment/nginx-deployment nginx=registry.r-l.hu/library/nginx:1.27
 +# kubectl set image deployment/nginx-deployment nginx=nginx:1.27
 deployment.apps/nginx-deployment image updated
 
Line 287: Line 611:
 kubectl get replicasets -o wide
 NAME                          DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                               SELECTOR
-nginx-deployment-6585597c84                         6m35s   nginx        registry.r-l.hu/library/nginx:1.26   app=nginx,pod-template-hash=6585597c84
 +nginx-deployment-6585597c84                         6m35s   nginx        nginx:1.26   app=nginx,pod-template-hash=6585597c84
-nginx-deployment-6ccb84987c                         2m58s   nginx        registry.r-l.hu/library/nginx:1.27   app=nginx,pod-template-hash=6ccb84987c
 +nginx-deployment-6ccb84987c                         2m58s   nginx        nginx:1.27   app=nginx,pod-template-hash=6ccb84987c
-nginx-deployment-7bdc5996d7                         7m27s   nginx        registry.r-l.hu/library/nginx:1.25   app=nginx,pod-template-hash=7bdc5996d7
 +nginx-deployment-7bdc5996d7                         7m27s   nginx        nginx:1.25   app=nginx,pod-template-hash=7bdc5996d7
 </code>
  
Line 300: Line 624:
 # kubectl get replicasets -o wide
 NAME                          DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES                               SELECTOR
-nginx-deployment-6585597c84                         12m     nginx        registry.r-l.hu/library/nginx:1.26   app=nginx,pod-template-hash=6585597c84
 +nginx-deployment-6585597c84                         12m     nginx        nginx:1.26   app=nginx,pod-template-hash=6585597c84
-nginx-deployment-6ccb84987c                         8m33s   nginx        registry.r-l.hu/library/nginx:1.27   app=nginx,pod-template-hash=6ccb84987c
 +nginx-deployment-6ccb84987c                         8m33s   nginx        nginx:1.27   app=nginx,pod-template-hash=6ccb84987c
-nginx-deployment-7bdc5996d7                         13m     nginx        registry.r-l.hu/library/nginx:1.25   app=nginx,pod-template-hash=7bdc5996d7
 +nginx-deployment-7bdc5996d7                         13m     nginx        nginx:1.25   app=nginx,pod-template-hash=7bdc5996d7
 </code>
  
Line 377: Line 701:
 <h1 th:text="${title}">Alkalmazás</h1>
 <p th:text="${message}">Hello Kubernetes!</p>
-<small>Forrás: <code>ConfigMap → env → @Value → Thymeleaf</code></small>
 +<small>Forrás: ConfigMap → env → @Value → Thymeleaf</small>
 </main>
 </body>
Line 570: Line 894:
 
 <code>
-# podman image tag localhost/minimal-spring-k8s:0.0.1 registry.r-l.hu/minimal-spring-k8s:0.0.1
 +# podman image tag localhost/minimal-spring-k8s:0.0.1 REGISTRY_URL/minimal-spring-k8s:0.0.1
 
-# podman push registry.r-l.hu/minimal-spring-k8s:0.0.1
 +# podman push REGISTRY_URL/minimal-spring-k8s:0.0.1
 Getting image source signatures
 Copying blob cba3fb5670d7 done
Line 626: Line 950:
       containers:
         - name: app
-          image: registry.r-l.hu/minimal-spring-k8s:0.0.1
 +          image: REGISTRY_URL/minimal-spring-k8s:0.0.1
           imagePullPolicy: IfNotPresent
           ports:
Line 699: Line 1023:
 service/minimal-spring-k8s created
 
-# kubectl get all
 +# kubectl get all,cm
 NAME                                      READY   STATUS    RESTARTS   AGE
-pod/minimal-spring-k8s-6d956c4c9f-n6rb2   1/    Running            103s
 +pod/minimal-spring-k8s-6d956c4c9f-n6rb2   1/    Running            4m33s
-pod/minimal-spring-k8s-6d956c4c9f-vj8tc   1/    Running            103s
 +pod/minimal-spring-k8s-6d956c4c9f-vj8tc   1/    Running            4m33s
 
 NAME                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
 service/kubernetes           ClusterIP   10.96.0.1      <none>        443/TCP        43h
-service/minimal-spring-k8s   NodePort    10.106.11.23   <none>        80:30001/TCP   95s
 +service/minimal-spring-k8s   NodePort    10.106.11.23   <none>        80:30001/TCP   4m25s
 
 NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
-deployment.apps/minimal-spring-k8s   2/               2           104s
 +deployment.apps/minimal-spring-k8s   2/               2           4m33s
 
 NAME                                            DESIRED   CURRENT   READY   AGE
-replicaset.apps/minimal-spring-k8s-6d956c4c9f                         104s
 +replicaset.apps/minimal-spring-k8s-6d956c4c9f                         4m33s
 + 
 +NAME                              DATA   AGE 
 +configmap/kube-root-ca.crt        1      43h 
 +configmap/minimal-spring-config        4m40s 
 +</code> 
 + 
 +Changing the ConfigMap contents
 + 
 +<code> 
 +# cat > k8s/configmap.yaml <<'EOF' 
 +apiVersion: v1 
 +kind: ConfigMap 
 +metadata: 
 +  name: minimal-spring-config 
 +  labels: 
 +    app: minimal-spring-k8s 
 +data: 
 +  SITE_TITLE: "Kubernetesből jövő új cím" 
 +  SITE_MESSAGE: "Ez az üzenet az űrből érkezett." 
 +EOF 
 + 
 +# kubectl apply -f k8s/configmap.yaml 
 +configmap/minimal-spring-config configured 
 + 
 +# kubectl rollout restart deployment/minimal-spring-k8s 
 +deployment.apps/minimal-spring-k8s restarted 
 + 
 +# kubectl get all 
 +NAME                                      READY   STATUS    RESTARTS   AGE 
 +pod/minimal-spring-k8s-5d757fcb88-km5bn   1/    Running            5m26s 
 +pod/minimal-spring-k8s-5d757fcb88-rtfrv   1/    Running            5m41s 
 + 
 +NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE 
 +service/kubernetes           ClusterIP   10.96.0.1       <none>        443/TCP        47h 
 +service/minimal-spring-k8s   NodePort    10.98.248.233   <none>        80:30001/TCP   33m 
 + 
 +NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE 
 +deployment.apps/minimal-spring-k8s   2/               2           33m 
 + 
 +NAME                                            DESIRED   CURRENT   READY   AGE 
 +replicaset.apps/minimal-spring-k8s-57d696db7c                         33m 
 +replicaset.apps/minimal-spring-k8s-5d757fcb88                         5m41s 
 +</code> 
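 +
 +To confirm that the restarted pods picked up the new values, the container environment can be inspected in one of them. This assumes the Deployment maps the ConfigMap keys into the environment (as the ConfigMap → env → @Value note in the template suggests); the SITE_ prefix follows the keys used above:
 +
 +<code>
 +# kubectl exec deploy/minimal-spring-k8s -- env | grep SITE_
 +</code>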
 + 
 +====== Kubernetes-native LB solution ======
 +
 +The configuration is performed on the control plane machine.
 +
 +Installing MetalLB
 + 
 +<code> 
 +# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml 
 +namespace/metallb-system created 
 +customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created 
 +customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created 
 +customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created 
 +customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created 
 +customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created 
 +customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created 
 +customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created 
 +serviceaccount/controller created 
 +serviceaccount/speaker created 
 +role.rbac.authorization.k8s.io/controller created 
 +role.rbac.authorization.k8s.io/pod-lister created 
 +clusterrole.rbac.authorization.k8s.io/metallb-system:controller created 
 +clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created 
 +rolebinding.rbac.authorization.k8s.io/controller created 
 +rolebinding.rbac.authorization.k8s.io/pod-lister created 
 +clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created 
 +clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created 
 +configmap/metallb-excludel2 created 
 +secret/webhook-server-cert created 
 +service/webhook-service created 
 +deployment.apps/controller created 
 +daemonset.apps/speaker created 
 +validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created 
 +</code> 
 + 
 +Defining the public IP address range
 + 
 +<code> 
 +# cat > ~/metallb-l2.yaml <<'EOF'  
 +apiVersion: metallb.io/v1beta1 
 +kind: IPAddressPool 
 +metadata: 
 +  name: pool-l2 
 +  namespace: metallb-system 
 +spec: 
 +  addresses: 
 +    - 192.168.110.170-192.168.110.179 
 +--- 
 +apiVersion: metallb.io/v1beta1 
 +kind: L2Advertisement 
 +metadata: 
 +  name: l2adv 
 +  namespace: metallb-system 
 +spec: 
 +  ipAddressPools: 
 +    - pool-l2 
 +EOF 
 +</code> 
 + 
 +Checking the MetalLB pods
 + 
 +<code> 
 +# kubectl -n metallb-system get pods 
 +NAME                          READY   STATUS    RESTARTS   AGE 
 +controller-7dbf649dcc-w4frr   1/    Running            2m17s 
 +speaker-4nkqt                 1/    Running            2m17s 
 +speaker-q4h2p                 1/    Running            2m17s 
 +speaker-vxp69                 1/    Running            2m17s 
 +</code> 
 + 
 +Applying the configuration (once the pods are in Ready/Running state)
 + 
 +<code> 
 +# kubectl apply -f metallb-l2.yaml  
 +ipaddresspool.metallb.io/pool-l2 created 
 +l2advertisement.metallb.io/l2adv created 
 +</code> 
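 +
 +Optionally, the created address pool and advertisement can also be listed directly:
 +
 +<code>
 +# kubectl -n metallb-system get ipaddresspools,l2advertisements
 +</code>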
 + 
 +Checking the metallb-system namespace
 + 
 +<code> 
 +# kubectl get all -n metallb-system 
 +NAME                              READY   STATUS    RESTARTS   AGE 
 +pod/controller-7dbf649dcc-w4frr   1/    Running            49m 
 +pod/speaker-4nkqt                 1/    Running            49m 
 +pod/speaker-q4h2p                 1/    Running            49m 
 +pod/speaker-vxp69                 1/    Running            49m 
 + 
 +NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE 
 +service/webhook-service   ClusterIP   10.104.247.76   <none>        443/TCP   49m 
 + 
 +NAME                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE 
 +daemonset.apps/speaker                                    3           kubernetes.io/os=linux   49m 
 + 
 +NAME                         READY   UP-TO-DATE   AVAILABLE   AGE 
 +deployment.apps/controller   1/               1           49m 
 + 
 +NAME                                    DESIRED   CURRENT   READY   AGE 
 +replicaset.apps/controller-7dbf649dcc                         49m 
 +</code> 
 + 
 +Creating and checking a test deployment
 + 
 +<code> 
 +# kubectl create deploy nginx --image=nginx:stable --port=80 
 +deployment.apps/nginx created 
 + 
 +# kubectl expose deploy nginx --type=LoadBalancer --port=80 --target-port=80 
 +service/nginx exposed 
 + 
 +# kubectl get svc nginx 
 +NAME    TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)        AGE 
 +nginx   LoadBalancer   10.109.25.42   192.168.110.170   80:31422/TCP   13s 
 +</code> 
 + 
 +If one of the IP addresses from the defined range appears in the EXTERNAL-IP column, the service can be tested
 + 
 +<code>
 +# curl -I http://192.168.110.170
 +HTTP/1.1 200 OK 
 +Server: nginx/1.28.0 
 +Date: Thu, 25 Sep 2025 17:17:34 GMT 
 +Content-Type: text/html 
 +Content-Length: 615 
 +Last-Modified: Wed, 23 Apr 2025 11:48:54 GMT 
 +Connection: keep-alive 
 +ETag: "6808d3a6-267" 
 +Accept-Ranges: bytes
 </code>
  