###Instruction: Description
Trying to deploy the Triton Docker image as a container on a Kubernetes cluster.

Triton Information
What version of Triton are you using? -> 22.10
Are you using the Triton container or did you build it yourself? I used the server repo with the following command:

python3 compose.py --backend onnxruntime --backend python --backend tensorflow2 --repoagent checksum --container-version 22.10

then created a new Triton image with the following Dockerfile:

FROM tritonserver:latest
RUN apt install python3-pip -y
RUN pip install tensorflow==2.7.0
RUN pip install transformers==2.11.0
RUN pip install tritonclient
RUN pip install tritonclient[all]

and the image is built with the following command:

docker build -t customtritonimage -f ./DockerFiles/DockerFile .

To Reproduce
Directory structure: parent directory -> tritonnludeployment
Files in it -> DockerFiles (folder containing docker files), k8_trial.yaml, model_repo_triton (all the models are here, in Triton-supported directory layout and with the required files)

I am using this 'k8_trial.yaml' file to start the kubectl deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: flower
  labels:
    app: flower
spec:
  replicas: 3
  selector:
    matchLabels:
      app: flower
  template:
    metadata:
      labels:
        app: flower
    spec:
      volumes:
      - name: models
        hostPath:
          # server: 216.48.183.17
          path: /root/Documents/tritonnludeployment
          # readOnly: false
          type: Directory
      containers:
        - name: flower
          ports:
          - containerPort: 8000
            name: http-triton
          - containerPort: 8001
            name: grpc-triton
          - containerPort: 8002
            name: metrics-triton
          image: "customtritonimage:latest"
          imagePullPolicy: Never
          volumeMounts:
          - mountPath: /root/Documents/tritonnludeployment
            name: models
          command: ["/bin/sh", "-c"]
          args: ["cd /models /opt/tritonserver/bin/tritonserver --model-repository=/models/model_repo_triton --allow-gpu-metrics=false --strict-model-config=false"]
          # resources:
          #   requests:
          #     memory: "500Mi"
          #     cpu: "500Mi"
          #   limits:
          #     memory: "900Mi"
          #     cpu: "900Mi"
          #     nvidia.com/gpu: 1

Describe the models (framework, inputs, outputs), ideally include the model configuration file (if using an ensemble include the model configuration file for that as well).

Expected behavior
The kubectl deployment should start, with the Triton container as one of the pods.

Which step am I doing wrong?
### Response: And what is the error message you are getting? Some of the issues I noticed:

use the file name expected by Docker, i.e. Dockerfile, not DockerFile
make sure the base image exists (tritonserver:latest does not, you probably want one of these)
first update the sources (RUN apt install ... -> RUN apt update && apt install ...)
reduce the number of layers by installing multiple Python packages at once
tritonclient[all] already includes tritonclient
don't run containers as root (tritonserver does not require it anyway)
make sure you pull the image the first time (imagePullPolicy: Never -> IfNotPresent)
remove multiple and unnecessary commands from args (such as cd /models)
tritonserver can import all subfolders, so --model-repository=/models is probably better
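A minimal sketch of a Dockerfile that folds those suggestions together. The base tag is an assumption: compose.py tags the image it produces, so substitute whatever name/tag your build actually created, and treat the non-root UID as illustrative.

FROM tritonserver:22.10
# Update the package index in the same layer as the install and clean up afterwards.
RUN apt update && apt install -y python3-pip && rm -rf /var/lib/apt/lists/*
# One layer for all Python packages; tritonclient[all] already includes tritonclient.
RUN pip install tensorflow==2.7.0 transformers==2.11.0 "tritonclient[all]"
# Don't run as root; tritonserver does not need it.
USER 1000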
###Instruction: I have read about the various ways to run tasks periodically in a K8s cluster, but none of them seem to work well for this specific case. I have a deployment "my-depl" that can run an arbitrary number of pods, and the task needs to execute periodically inside each pod (basically a shell command that "nudges" the main application once a week or so).

The Kubernetes CronJob functionality starts a task in its own container. This K8s task does not know how many pods are currently running for "my-depl" and cannot run anything in those pods. Conceivably, I could run kubectl within this K8s CronJob, but that seems incredibly hacky and dangerous.

The second alternative would be to have crond (or an alternative tool like Jobber or Cronenberg) run as part of the pod. But that would mean that two processes are running, and the container might not die if only the cron process dies.

The third option is to run a multi-process container via a special init process like s6-overlay. This can be made to die if one of the child processes dies, but it seems fairly involved and hardly a first-class feature.

The fourth option I could think of was "don't do this, it's stupid. Redesign your application so it doesn't need to be 'nudged' once a week". That's a sound suggestion, but a lot of work, and I need at least a temporary solution in the meantime.

So, does anyone have a better idea than those detailed here?
### Response: I think the simplest solution is to run crond (or an alternative of your choice) in a sidecar container (that is, another container in the same pod). Recall that all containers in a pod share the same network namespace, so localhost is the same thing for all containers. This means your cron container can happily run a curl or wget command (or whatever else is necessary) to ping your API over the local port.

For example, something like this, in which our cron task simply runs wget against the web server running in the api container:

apiVersion: v1
data:
  root: |
    * * * * * wget -O /tmp/testfile http://127.0.0.1:8080 2> /tmp/testfile.err
kind: ConfigMap
metadata:
  labels:
    app: cron-example
  name: crontabs-ghm86fgddg
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: cron-example
  name: cron-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cron-example
  template:
    metadata:
      labels:
        app: cron-example
    spec:
      containers:
      - image: docker.io/alpinelinux/darkhttpd:latest
        name: api
      - command:
        - /bin/sh
        - -c
        - |
          crontab /data/crontabs/root
          exec crond -f -d0
        image: docker.io/alpine:latest
        name: cron
        volumeMounts:
        - mountPath: /data/crontabs
          name: crontabs
      volumes:
      - configMap:
          name: crontabs-ghm86fgddg
        name: crontabs
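If it helps, a quick way to check that the sidecar is actually firing, using the names from the example above (plain kubectl, nothing specific to this manifest):

kubectl logs deploy/cron-example -c cron
kubectl exec deploy/cron-example -c cron -- crontab -l
# after a minute or two, /tmp/testfile should contain the page fetched from the api container
kubectl exec deploy/cron-example -c cron -- cat /tmp/testfile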
###Instruction: I've installed Zalando PostgreSQL Operator v1.8.1 on bare-metal K8s using Helm:

helm repo add postgres-operator-charts https://opensource.zalando.com/postgres-operator/charts/postgres-operator
helm install postgres-operator postgres-operator-charts/postgres-operator --version 1.8.1

but I'm not sure how to properly upgrade it to v1.8.2. There is documentation, but it's too high level for me: https://github.com/zalando/postgres-operator/blob/master/docs/administrator.md#upgrading-the-operator

Could somebody provide a step-by-step command line for upgrading the operator, with Helm or without it?
### Response: Hope it's not too late; I found myself doing this a couple of months ago as well. All you have to do is change the Docker image, as explained in the documentation above, more specifically the image used by the operator's Deployment.

I assume you have some basic understanding of Helm. It is pretty common to have a custom values.yaml file in which you edit the configuration to fit your needs; you just need to specify that file when running the upgrade, with the -f flag:

image:
  registry: registry.opensource.zalan.do
  repository: acid/postgres-operator
  tag: v1.8.2
  pullPolicy: "IfNotPresent"

and then:

helm upgrade <your-release> <path-to-your-chart> -f values.yaml -n <your-ns>

That should be it. (Be sure to double-check the breaking changes for changes in the CRDs.)
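For completeness, a hedged end-to-end sequence that uses the same chart repository as the original install; the release name is the one from the question, the Deployment name assumes the chart's default naming, and the chart version should be checked against the operator's release notes (especially for CRD changes):

helm repo update
helm upgrade postgres-operator postgres-operator-charts/postgres-operator --version 1.8.2 -f values.yaml -n <your-ns>
# confirm the operator pod restarted with the new image tag
kubectl -n <your-ns> get deploy postgres-operator -o jsonpath='{.spec.template.spec.containers[0].image}'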
###Instruction: I am experiencing a pending-update Helm issue: in CI pipelines we sometimes deploy a particular application twice in quick succession. The first Helm deployment is still in the pending-update state, so the second deployment of the same application (run e.g. 1 minute later) cannot proceed and fails with the well-known error: Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress.

I know it is a well-known issue, and one possible solution could be deleting the Helm-related k8s secret (automatically in the CI pipeline) just before the helm upgrade command, but is that a good way of solving this? How do you deal with these scenarios in your CI pipelines?
### Response: Yes, you can solve it with a secret deletion, but in one of our cases that did not work either. So I would recommend checking kubectl get events; it will point out an error even when you cannot get a useful debug log out of Helm. I faced this because the Helm deployment created a service of Type: LoadBalancer and we were out of quota for LoadBalancers in the tenancy/account. We mostly check the pod status, but Helm might be stuck on a service, secret, configmap, etc., so make sure you debug properly instead of guessing.

Another workaround is to roll back to the previous release version, which will change the status from pending-update to deployed, even if you are deploying for the first time. If anything is still pending it gets installed and the release is marked as deployed, so the next upgrade or install succeeds instead of erroring out.

helm -n [NAMESPACE] rollback <RELEASE> [REVISION]

Example:

helm -n default rollback service 1

This will mark your release as deployed, as a workaround if it is stuck in that state.
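Before deleting anything, it is usually worth checking what Helm itself thinks the release state is; a small hedged example (release and namespace names are placeholders):

helm -n my-namespace status my-release
helm -n my-namespace history my-release --max 5    # shows which revision is stuck in pending-update
kubectl -n my-namespace get events --sort-by=.lastTimestamp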
###Instruction: We have been experimenting with the number of Ignite server pods to see the impact on performance. One thing that we have noticed is that if the number of Ignite server pods is increased after client nodes have established communication the new pod will just fail loop with the error below. If however the grid is destroyed (bring down all client and server nodes) and then the desired number of server nodes is launch there are no issues. Also the above procedure is not fully dependable for anything other than launching a single Ignite server. From reading it looks like [this stack over flow][1] post and [this documentation][2] that the issue may be that we are not launching the "Kubernetes service". Ignite's KubernetesIPFinder requires users to configure and deploy a special Kubernetes service that maintains a list of the IP addresses of all the alive Ignite pods (nodes). However this is the only documentation I have found and it says that it is no longer current. Is this information still relevant for Ignite 2.11.1? If not is there some more recent documentation? If this service is indeed needed, are there some more concreate examples and information on setting them up? Error on new Server pod: [21:37:55,793][SEVERE][main][IgniteKernal] Failed to start manager: GridManagerAdapter [enabled=true, name=o.a.i.i.managers.discovery.GridDiscoveryManager] class org.apache.ignite.IgniteCheckedException: Failed to start SPI: TcpDiscoverySpi [addrRslvr=null, addressFilter=null, sockTimeout=5000, ackTimeout=5000, marsh=JdkMarshaller [clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1@78422efb], reconCnt=10, reconDelay=2000, maxAckTimeout=600000, soLinger=0, forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null, skipAddrsRandomization=false] at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:281) at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:980) at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1985) at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1331) at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2141) at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1787) at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1172) at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1066) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:952) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:851) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:721) at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:690) at org.apache.ignite.Ignition.start(Ignition.java:353) at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:367) Caused by: class org.apache.ignite.spi.IgniteSpiException: Node with the same ID was found in node IDs history or existing node in topology has the same ID (fix configuration and restart local node) [localNode=TcpDiscoveryNode [id=000e84bb-f587-43a2-a662-c7c6147d2dde, consistentId=8751ef49-db25-4cf9-a38c-26e23a96a3e4, addrs=ArrayList [0:0:0:0:0:0:0:1%lo, 127.0.0.1, fd00:85:4001:5:f831:8cc:cd3:f863%eth0], sockAddrs=HashSet [nkw-mnomni-ignite-1-1-1.nkw-mnomni-ignite-1-1.680e5bbc-21b1-5d61-8dfa-6b27be10ede7.svc.cluster.local/fd00:85:4001:5:f831:8cc:cd3:f863:47500, /0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500], discPort=47500, order=0, intOrder=0, 
lastExchangeTime=1676497065109, loc=true, ver=2.11.1#20211220-sha1:eae1147d, isClient=false], existingNode=000e84bb-f587-43a2-a662-c7c6147d2dde] at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.duplicateIdError(TcpDiscoverySpi.java:2083) at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1201) at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:473) at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2207) at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:278) ... 13 more Server DiscoverySpi Config: <property name="discoverySpi"> <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi"> <property name="ipFinder"> <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder"> <property name="namespace" value="myNameSpace"/> <property name="serviceName" value="myServiceName"/> </bean> </property> </bean> </property> Client DiscoverySpi Configs: <bean id="discoverySpi" class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi"> <property name="ipFinder" ref="ipFinder" /> </bean> <bean id="ipFinder" class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder"> <property name="shared" value="false" /> <property name="addresses"> <list> <value>myServiceName.myNameSpace:47500</value> </list> </property> </bean> Edit: I have experimented more with this issue. As long as I do not deploy any clients (using the static TcpDiscoveryVmIpFinder above) I am able to scale up and down server pods without any issue. However as soon as a single client joins I am no longer able to scale the server pods up. I can see that the server pods have ports 47500 and 47100 open so I am not sure what the issue is. Dows the TcpDiscoveryKubernetesIpFinder still need the port to be specified on the client config? I have tried to change my client config to use the TcpDiscoveryKubernetesIpFinder below but I am getting a discovery timeout falure (see below). 
<property name="discoverySpi"> <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi"> <property name="ipFinder"> <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder"> <property name="namespace" value="680e5bbc-21b1-5d61-8dfa-6b27be10ede7"/> <property name="serviceName" value="nkw-mnomni-ignite-1-1"/> </bean> </property> </bean> </property> 24-Feb-2023 14:15:02.450 WARNING [grid-timeout-worker-#22%igniteClientInstance%] org.apache.ignite.logger.java.JavaLogger.warning Thread dump at 2023/02/24 14:15:02 UTC Thread [name="main", id=1, state=WAITING, blockCnt=78, waitCnt=3] Lock [object=java.util.concurrent.CountDownLatch$Sync@45296dbd, ownerName=null, ownerId=-1] at java.base@17.0.1/jdk.internal.misc.Unsafe.park(Native Method) at java.base@17.0.1/java.util.concurrent.locks.LockSupport.park(LockSupport.java:211) at java.base@17.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:715) at java.base@17.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1047) at java.base@17.0.1/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:230) at o.a.i.spi.discovery.tcp.ClientImpl.spiStart(ClientImpl.java:324) at o.a.i.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2207) at o.a.i.i.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:278) at o.a.i.i.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:980) at o.a.i.i.IgniteKernal.startManager(IgniteKernal.java:1985) at o.a.i.i.IgniteKernal.start(IgniteKernal.java:1331) at o.a.i.i.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2141) at o.a.i.i.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1787) - locked o.a.i.i.IgnitionEx$IgniteNamedInstance@57ac9100 at o.a.i.i.IgnitionEx.start0(IgnitionEx.java:1172) at o.a.i.i.IgnitionEx.startConfigurations(IgnitionEx.java:1066) at o.a.i.i.IgnitionEx.start(IgnitionEx.java:952) at o.a.i.i.IgnitionEx.start(IgnitionEx.java:851) at o.a.i.i.IgnitionEx.start(IgnitionEx.java:721) at o.a.i.i.IgnitionEx.start(IgnitionEx.java:690) at o.a.i.Ignition.start(Ignition.java:353) Edit 2: I also spoke with an admin about opening client side ports in case that was the issue. He indicated that should not be needed as clients should be able to open ephemeral ports to communicate with the server nodes. [1]: Ignite not discoverable in kubernetes cluster with TcpDiscoveryKubernetesIpFinder [2]: https://apacheignite.readme.io/docs/kubernetes-ip-finder ### Response: It's hard to say precisely what the root cause is, but in general it's something related to the network or domain names resolution. A public address is assigned to a node on a startup and is exposed to other nodes for communication. Other nodes store that address and nodeId in their history. Here is what is happening: a new node is trying to enter the cluster, it connects to a random node, then this request is transferred to the coordinator. The coordinator issues TcpDiscoveryNodeAddedMessage that must circle across the topology ring and be ACKed by all other nodes. That process didn't finish during a join timeout, so the new node is trying to re-enter the topology by starting the same joining process but with a new ID. But, other nodes see that this address is already registered by another nodeId, causing the original duplicate nodeId error. 
Some recommendations:

If the issue is reproducible on a regular basis, I'd recommend collecting more information by enabling DEBUG logging for the following package: org.apache.ignite.spi.discovery (discovery-related event tracing).
Take thread dumps from affected nodes (can be done with kill -3) and check them for discovery-related issues. Search for "lookupAllHostAddr".
Check that it's not a DNS issue and that all public addresses for your nodes resolve instantly, e.g. nkw-mnomni-ignite-1-1-1.nkw-mnomni-ignite-1-1.680e5bbc-21b1-5d61-8dfa-6b27be10ede7.svc.cluster.local. I was asking about the provider because in OpenShift there seems to be a hard limit on DNS resolution time.
Check GC and safepoints.

To hide the underlying issue you can play around with the Ignite configuration: increase the network timeout and join timeout, or reduce the failure detection timeout. But I recommend finding the real root cause instead of treating the symptoms.
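For reference, a hedged sketch of where those knobs live, in the same Spring XML style used elsewhere in this question; the millisecond values are purely illustrative, not recommendations:

<bean class="org.apache.ignite.configuration.IgniteConfiguration">
    <!-- overall failure detection budget -->
    <property name="failureDetectionTimeout" value="30000"/>
    <property name="discoverySpi">
        <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
            <!-- how long a starting node keeps trying to join the topology -->
            <property name="joinTimeout" value="60000"/>
            <!-- timeout for discovery network operations -->
            <property name="networkTimeout" value="10000"/>
            <!-- keep the TcpDiscoveryKubernetesIpFinder ipFinder block from above here -->
        </bean>
    </property>
</bean>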
###Instruction: I want to set up Cassandra, Elasticsearch and JanusGraph on Ubuntu 20.04 using Kubernetes. Can anyone suggest links or provide a YAML file for the setup? I have a 3-node Kubernetes cluster (1 master and 2 worker nodes) running Kubernetes 1.26.1.
### Response: You can leverage operators to run these databases on Kubernetes. The operator will manage and create the database for you.

Elasticsearch - https://www.elastic.co/downloads/elastic-cloud-kubernetes
Cassandra - https://github.com/instaclustr/cassandra-operator
JanusGraph - https://github.com/IBM/janusgraph-operator

You can also leverage the available Helm charts to set up the databases; most operators ship a Helm chart for easy setup. If you don't want to go with an operator, you can install a plain Helm chart directly, which will create the Deployment, Pods, Services, PVCs and other required objects for you. The best practice, though, is to go with the operator.
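As a concrete illustration for the Elasticsearch part, ECK is installed as an operator first, and then an Elasticsearch resource is applied. A hedged sketch, with chart and API names as published by Elastic and an illustrative version:

helm repo add elastic https://helm.elastic.co
helm repo update
helm install elastic-operator elastic/eck-operator -n elastic-system --create-namespace

apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.6.0        # illustrative; pick your own
  nodeSets:
  - name: default
    count: 1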
###Instruction: We have on-premises Kubernetes clusters. Our data is in S3-compatible cloud storage (e.g. Amazon S3). We want to mount an S3 bucket into Kubernetes pods in ReadWriteMany (RWX) mode via a CSI driver. What non-dead Kubernetes CSI RWX S3 drivers exist in 2022?
### Response: There is no official AWS support for S3 as a CSI driver. There is a FUSE-based driver from Yandex Cloud here: https://github.com/yandex-cloud/k8s-csi-s3
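Once such a driver is installed, consuming it looks like any other RWX claim. A hedged sketch; the storage class name and its parameters depend entirely on how the driver was deployed, so csi-s3 below is only a placeholder:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: s3-data
spec:
  accessModes:
    - ReadWriteMany          # the point of using a CSI driver here
  storageClassName: csi-s3   # placeholder; use the class created by the driver's install
  resources:
    requests:
      storage: 10Gi          # largely nominal for object-storage backends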
###Instruction: This is my first time using GCP; I have always been an AWS person before this. As a small team without dedicated DevOps, should we separate our prod and sandbox environments into 2 clusters / VPCs under one GCP project, or should we go through the hassle of entirely separate GCP projects as well? What is the industry standard here? If we have IAM set up correctly, what would be the con to having them both in the same project?
### Response: A common approach is to use separate projects for staging and production. That gives you isolation and a clear separation between the staging and production environments, which prevents accidental changes or testing against the wrong environment and helps avoid downtime.

Resource management per project is also easier: you get a clearer picture of what is deployed, plus a clean per-project list of resources and billing details.

Access management: separate projects also give better access control, since only specific users get access to the prod project, so not everyone can poke around in it or view/update its resources.

However, separate projects come with extra admin work. Since you mentioned you are a small team without DevOps, it may be better to go with a single project and multiple clusters, managed with proper labels, networks and IAM roles.
###Instruction: Issue type: Kubernetes on Docker Desktop stopped working due to expired kube-apiserver certificates

OS Version/build: Windows 10, version 1909, OS Build 18363
App version: Docker Desktop 3.03

Steps to reproduce:
Install Docker Desktop
Enable Kubernetes
Change the Windows PC time to be ahead by 1 year
The Kubernetes cluster will stop working, saying the kube-apiserver certificates are expired
The applications/workloads deployed on the Kubernetes cluster will also stop working

As Kubernetes certificates are issued for 1 year, after that period the certificates expire and this breaks Kubernetes. certificate snapshot

Need help: requesting information on how to renew the kube-apiserver certificates without affecting Kubernetes and the installed applications.
### Response: I also had the issue, showing in the etcd logs:

{"level":"warn","ts":"2023-02-14T11:47:26.260Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.65.4:34996","server-name":"","error":"tls: failed to verify client certificate: x509: certificate has expired or is not yet valid: current time 2023-02-14T11:47:26Z is after 2023-02-03T12:24:57Z"}

I found the answer for Windows here: https://forums.docker.com/t/kubernetes-on-docker-desktop-fails-to-launch-after-kube-apiserver-certificate-expiry/106570/2

On macOS, the path is ~/Library/Containers/com.docker.docker. Delete or back up the pki folder and restart Docker; Kubernetes should come back.
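A hedged sketch of that macOS procedure as shell commands; the path is the one quoted in the answer, and backing up rather than deleting keeps a way back if something else breaks:

cd ~/Library/Containers/com.docker.docker
mv pki pki.backup
# quit and reopen Docker Desktop, then wait for Kubernetes to come up
kubectl get nodes    # the node should eventually report Ready with fresh certificates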
###Instruction: I was under the impression that the main point of a cluster-issuer is that it's namespaced and doesn't have to be recreated across different resources; in general there could be one main cluster-issuer that manages all ingresses across the cluster. From what I am seeing, the cluster-issuer can only create one secret, and if it's in use by one ingress the second won't be created properly because it's already taken. Is there any way to create one cluster-issuer to manage all ingresses across the cluster? Code included below.

Cluster-issuer.yaml

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-grafana
  namespace: cert-manager
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: foo@gmail.com
    privateKeySecretRef:
      name: letsencrypt-grafana
    solvers:
    - selector:
        dnsZones:
          - "foo.com"
      dns01:
        route53:
          region: eu-central-1
          hostedZoneID: foo
          accessKeyID: foo
          secretAccessKeySecretRef:
            name: aws-route53-creds
            key: password.txt

Ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: loki
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-grafana
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "125m"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - grafana.foo.com
    secretName: letsencrypt-grafana # < cert-manager will store the created certificate in this secret.
  rules:
  - host: grafana.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: loki-grafana
            port:
              number: 80
### Response: @Harsh Manvar, while I do appreciate your answer, I found something that is a better fit for my needs. The cert-manager documentation lists multiple options for syncing secrets across namespaces. The one I chose was reflector. The steps to install it are included in the documentation, but for the sake of completeness I'll post them here as well.

Requirements: Helm

Installation:

helm repo add emberstack https://emberstack.github.io/helm-charts
helm repo update
helm upgrade --install reflector emberstack/reflector

Setup: add the following annotation to your secret: reflector.v1.k8s.emberstack.com/reflection-allowed: "true". It should look like the following:

apiVersion: v1
kind: Secret
metadata:
  name: source-secret
  annotations:
    reflector.v1.k8s.emberstack.com/reflection-allowed: "true"

Done! Your secret should be replicated within all namespaces. For multiple ingress configurations within the same namespace you could edit your ingress.yaml like this:

Ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jenkins-ingress
  namespace: jenkins
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-global
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "125m"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - jenkins.foo.com
    - nginx.foo.com
    secretName: letsencrypt-global # < cert-manager will store the created certificate in this secret.
  rules:
  - host: jenkins.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jenkins
            port:
              number: 80
  - host: nginx.foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
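A quick, hedged way to check that the reflection actually happened, using the secret name from the example above (if copies do not appear, reflector's documentation also describes a reflector.v1.k8s.emberstack.com/reflection-auto-enabled annotation for pushing the secret out automatically):

kubectl get secrets -A --field-selector metadata.name=source-secret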
###Instruction: This is my Dockerfile: FROM gcr.io/distroless/static:nonroot WORKDIR / COPY ls . COPY tail . COPY test . COPY manager . ENTRYPOINT ["/manager"] after [root@master go-docker-test]# docker build -t strangething:v1.13 . [root@master go-docker-test]# docker run -d strangething:v1.13 [root@master go-docker-test]# docker logs b2 it shows: exec /manager: no such file or directory I'm pretty sure it is there. I use dive to see it: [Layers]───────────────────────────────────────────────────────────────────── [● Current Layer Contents]────────────────────────────────────────────────── Cmp Image ID Size Command Permission UID:GID Size Filetree sha256:cb60fb9b862c6a89f9 2.3 MB FROM sha256:cb60fb9b862c6a89f9 drwxr-xr-x 0:0 2.3 MB ├── . sha256:3e884d7c2d4ba9bac6 118 kB COPY ls . # buildkit drwxr-xr-x 0:0 0 B │ ├── bin sha256:e75e9da8f1605f7944 67 kB COPY tail . # buildkit drwxr-xr-x 0:0 0 B │ ├── boot sha256:7a0f1970f36a364672 1.8 MB COPY test . # buildkit drwxr-xr-x 0:0 0 B │ ├── dev sha256:c9ab59cb1ce11477ca 47 MB COPY manager . # buildkit drwxr-xr-x 0:0 220 kB │ ├─⊕ etc drwxr-xr-x 65532:65532 0 B │ ├─⊕ home [Layer Details]────────────────────────────────────────────────────────────── drwxr-xr-x 0:0 0 B │ ├── lib drwxr-xr-x 0:0 0 B │ ├── proc Digest: sha256:c9ab59cb1ce11477cac4d634bb81cf7316c344b50f01a62a8e5ddcf355d5fe drwx------ 0:0 0 B │ ├── root cf drwxr-xr-x 0:0 0 B │ ├── run Tar ID: 998c57d00785ccffaf3b308a529c7f816633897097d1ef6519269a8e3c5af59b drwxr-xr-x 0:0 0 B │ ├── sbin Command: drwxr-xr-x 0:0 0 B │ ├── sys COPY manager . # buildkit drwxrwxrwx 0:0 0 B │ ├── tmp drwxr-xr-x 0:0 2.1 MB │ ├─⊕ usr [Image Details]────────────────────────────────────────────────────────────── drwxr-xr-x 0:0 1.8 kB │ └─⊕ var -rwxr-xr-x 0:0 118 kB ├── ls Total Image size: 51 MB -rwxr-xr-x 0:0 47 MB ├── manager Potential wasted space: 0 B -rwxr-xr-x 0:0 67 kB ├── tail Image efficiency score: 100 % -rwxr-xr-x 0:0 1.8 MB └── test Count Total Space Path This file tree confuses me more or less. Anyway, strange thing is, ls tail manager can't exec, but testcan exec. after I change ENTRYPOINT ["/manager"] to ENTRYPOINT ["/test"]: [root@master go-docker-test]# docker logs c11 empty test is a program built by test.go, I write it myself: package main import ( "fmt" ) type Per struct { Name string Age int } type Person struct { Name string Age int Lov Per } func main() { var one Person one.Name="abc" one.Age=11 var two Per one.Lov=two if one.Lov != (Per{}){ fmt.Println("not empty!") }else { fmt.Println("empty") } } ### Response: After reading silh's comment and the kubebuilder Dockerfile again, I somehow solved this problem by adding CGO_ENABLED=0 to go build. It changes dynamic linker to static linker. CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o manager main.go At least the no such file or directory error is now gone.
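To make that reproducible, the static build can be folded into the image build itself. A hedged sketch of a multi-stage Dockerfile; the Go version and module layout are assumptions:

# build stage: produce a fully static binary so it can run on distroless/static
FROM golang:1.20 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -o /out/manager main.go

# runtime stage: distroless/static has no shell and no libc, hence the static build above
FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=build /out/manager /manager
ENTRYPOINT ["/manager"]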
This is my Dockerfile: FROM gcr.io/distroless/static:nonroot WORKDIR / COPY ls . COPY tail . COPY test . COPY manager . ENTRYPOINT ["/manager"] after [root@master go-docker-test]# docker build -t strangething:v1.13 . [root@master go-docker-test]# docker run -d strangething:v1.13 [root@master go-docker-test]# docker logs b2 it shows: exec /manager: no such file or directory I'm pretty sure it is there. I use dive to see it: [Layers]───────────────────────────────────────────────────────────────────── [● Current Layer Contents]────────────────────────────────────────────────── Cmp Image ID Size Command Permission UID:GID Size Filetree sha256:cb60fb9b862c6a89f9 2.3 MB FROM sha256:cb60fb9b862c6a89f9 drwxr-xr-x 0:0 2.3 MB ├── . sha256:3e884d7c2d4ba9bac6 118 kB COPY ls . # buildkit drwxr-xr-x 0:0 0 B │ ├── bin sha256:e75e9da8f1605f7944 67 kB COPY tail . # buildkit drwxr-xr-x 0:0 0 B │ ├── boot sha256:7a0f1970f36a364672 1.8 MB COPY test . # buildkit drwxr-xr-x 0:0 0 B │ ├── dev sha256:c9ab59cb1ce11477ca 47 MB COPY manager . # buildkit drwxr-xr-x 0:0 220 kB │ ├─⊕ etc drwxr-xr-x 65532:65532 0 B │ ├─⊕ home [Layer Details]────────────────────────────────────────────────────────────── drwxr-xr-x 0:0 0 B │ ├── lib drwxr-xr-x 0:0 0 B │ ├── proc Digest: sha256:c9ab59cb1ce11477cac4d634bb81cf7316c344b50f01a62a8e5ddcf355d5fe drwx------ 0:0 0 B │ ├── root cf drwxr-xr-x 0:0 0 B │ ├── run Tar ID: 998c57d00785ccffaf3b308a529c7f816633897097d1ef6519269a8e3c5af59b drwxr-xr-x 0:0 0 B │ ├── sbin Command: drwxr-xr-x 0:0 0 B │ ├── sys COPY manager . # buildkit drwxrwxrwx 0:0 0 B │ ├── tmp drwxr-xr-x 0:0 2.1 MB │ ├─⊕ usr [Image Details]────────────────────────────────────────────────────────────── drwxr-xr-x 0:0 1.8 kB │ └─⊕ var -rwxr-xr-x 0:0 118 kB ├── ls Total Image size: 51 MB -rwxr-xr-x 0:0 47 MB ├── manager Potential wasted space: 0 B -rwxr-xr-x 0:0 67 kB ├── tail Image efficiency score: 100 % -rwxr-xr-x 0:0 1.8 MB └── test Count Total Space Path This file tree confuses me more or less. Anyway, strange thing is, ls tail manager can't exec, but testcan exec. after I change ENTRYPOINT ["/manager"] to ENTRYPOINT ["/test"]: [root@master go-docker-test]# docker logs c11 empty test is a program built by test.go, I write it myself: package main import ( "fmt" ) type Per struct { Name string Age int } type Person struct { Name string Age int Lov Per } func main() { var one Person one.Name="abc" one.Age=11 var two Per one.Lov=two if one.Lov != (Per{}){ fmt.Println("not empty!") }else { fmt.Println("empty") } }
In the end the problem wasn't the values.yaml at all; it was the way the values were used within the template. When including a definition that comes from a .tpl file (here the one autogenerated by Helm), you have to be careful when you are inside a range. I was ranging over a list of assets, so the include ran in the context of the range: the conditional logic is evaluated inside the range loop, which means the . you use to access Values is no longer the root context you expect, because it is overridden on every iteration. ref: ingress.yaml template returns error in rendering --> nil pointer evaluating interface {}.service The fix is to use $ instead of the . notation, because $ always refers to the global (root) scope. Example: {{- include "mychart-stack.fullname" $ }}
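Applied to the persistentvolume template from the question, the inside of the range would look roughly like this (only the lines that change are shown; .name keeps the dot because it comes from the range item itself, everything chart- or values-related switches to $):
{{- range .Values.persistentVolume.localStorage }}
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-{{ include "mychart-stack.fullname" $ }}-{{ .name }}
spec:
  storageClassName: local-storage-{{ include "mychart-stack.fullname" $ }}--{{ .name }}
  local:
    path: {{ $.Values.persistentVolume.basePath }}/{{ .name }}
{{- end }}
The same applies to .Values.hostName in the nodeAffinity block, which becomes $.Values.hostName.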
###Instruction: I am building a new Helm chart (mychart) that I'm trying to install. A values.yaml exists and its contents specify the fullnameOverride: fullnameOverride: "myapp" I run the following command helm install --dry-run -f "mychart-stack/values.yaml" mychart-stack1 ./mychart-stack And it's giving me the error: template: mychart-stack/templates/persistentvolume.local-storage.range.yml:5:14: executing "mychart-stack/templates/persistentvolume.local-storage.range.yml" at <include "mychart-stack.fullname" .>: error calling include: template: mychart-stack/templates/_helpers.tpl:14:14: executing "mychart-stack.fullname" at <.Values.fullnameOverride>: nil pointer evaluating interface {}.fullnameOverride The mychart-stack/templates/_helpers.tpl:14:14 is the pregenerated one when you're asking Helm to produce a Chart example. The error (14:14) is associated at the first line of the following auto generated code: {{- if .Values.fullnameOverride }} {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} {{- else }} A little more context, as it's throwing an error while checking the persistentvolume.local-storage.range.yml, here are the contents of the file: {{- range .Values.persistentVolume.localStorage }} --- apiVersion: v1 kind: PersistentVolume metadata: name: pv-{{ include "mychart-stack.fullname" }}-{{ .name }} spec: capacity: storage: 20Gi # le champ volumeMode requiert l'activation de la "feature gate" Alpha BlockVolume volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: local-storage-{{ include "mychart-stack.fullname" }}--{{ .name }} local: path: {{ .Values.persistentVolume.basePath }}/{{ .name }} nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - {{ .Values.hostName }} {{- end }} I don't know what's wrong, the code seems to indicate that it's not defined properly. I tried to run it in --debug mode but it doesn't help (same error). ### Response: Finally the problem wasn't the values.yaml that was not set correctly but more the way it was used within the template. When using an include of a definition coming from a .tpl file (this one was the autogenerated by Helm), we must be careful to not be in a range. I was creating a range of assets so it seems that it will run the code in the context of the range. Your conditional logic is being evaluated inside a range loop. This means . you're using to access Values is not the one you expect it to be, as it's overridden for each range iteration evaluation. ref: ingress.yaml template returns error in renderring --> nil pointer evaluating interface {}.service That means that we should use $ instead of . notation because it references the global scope. Example: {{- include "mychart-stack.fullname" $ }}
I am building a new Helm chart (mychart) that I'm trying to install. A values.yaml exists and its contents specify the fullnameOverride: fullnameOverride: "myapp" I run the following command helm install --dry-run -f "mychart-stack/values.yaml" mychart-stack1 ./mychart-stack And it's giving me the error: template: mychart-stack/templates/persistentvolume.local-storage.range.yml:5:14: executing "mychart-stack/templates/persistentvolume.local-storage.range.yml" at <include "mychart-stack.fullname" .>: error calling include: template: mychart-stack/templates/_helpers.tpl:14:14: executing "mychart-stack.fullname" at <.Values.fullnameOverride>: nil pointer evaluating interface {}.fullnameOverride The mychart-stack/templates/_helpers.tpl:14:14 is the pregenerated one when you're asking Helm to produce a Chart example. The error (14:14) is associated at the first line of the following auto generated code: {{- if .Values.fullnameOverride }} {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} {{- else }} A little more context, as it's throwing an error while checking the persistentvolume.local-storage.range.yml, here are the contents of the file: {{- range .Values.persistentVolume.localStorage }} --- apiVersion: v1 kind: PersistentVolume metadata: name: pv-{{ include "mychart-stack.fullname" }}-{{ .name }} spec: capacity: storage: 20Gi # le champ volumeMode requiert l'activation de la "feature gate" Alpha BlockVolume volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: local-storage-{{ include "mychart-stack.fullname" }}--{{ .name }} local: path: {{ .Values.persistentVolume.basePath }}/{{ .name }} nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - {{ .Values.hostName }} {{- end }} I don't know what's wrong, the code seems to indicate that it's not defined properly. I tried to run it in --debug mode but it doesn't help (same error).
To annotate all nodes belonging to a particular machinedeployment, you can use the kubectl annotate command together with a label selector, so that only the nodes of that machinedeployment get the key-value pair. For example, if the nodes of the nginx-deployment machinedeployment carry a matching label, you can run the following command: kubectl annotate nodes -l deployment=nginx-deployment key=value This will annotate every node selected by that label with the specified key-value pair (using --all instead of -l would annotate every node in the cluster). For more information, follow this blog by Seyi Ewegbemi.
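A more concrete sketch, in case you only want to touch the nodes of one machinedeployment: the label key and value below (machinedeployment=abc) are purely assumptions, so check what labels your machine-controller actually puts on the nodes first.
# see which labels the nodes really carry
kubectl get nodes --show-labels
# annotate only the nodes matching that label (key/value are placeholders)
kubectl annotate nodes -l machinedeployment=abc galera=enabled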
###Instruction: We have created two machine deployments. kubectl get machinedeployment -A NAMESPACE NAME REPLICAS AVAILABLE-REPLICAS PROVIDER OS KUBELET AGE kube-system abc 3 3 hetzner ubuntu 1.24.9 116m kube-system vnr4jdxd6s-worker-tgl65w 1 1 hetzner ubuntu 1.24.9 13d kubectl get nodes NAME STATUS ROLES AGE VERSION abc-b6647d7cb-bcprj Ready <none> 62m v1.24.9 abc-b6647d7cb-llsq8 Ready <none> 65m v1.24.9 abc-b6647d7cb-mtlsl Ready <none> 58m v1.24.9 vnr4jdxd6s-worker-tgl65w-59ff7fc46c-d9tm6 Ready <none> 13d v1.24.9 We know that we can add a label to a specific node kubectl label nodes abc-b6647d7cb-bcprj key=value But our nodes are autoscaled. We would like to install, for example, MariaDB Galera on specific machinedeployment node. Is it somehow possible to annotate all nodes with a particular machinedeployments? Is it somehow possible to annotate all nodes with a particular machinedeployments? ### Response: To annotate all nodes with a particular machinedeployment. You can use the kubectl annotate command to annotate all nodes in a particular machinedeployment with a specific key-value pair. For example, to annotate all nodes with a machinedeployment of nginx-deployment, you can run the following command: kubectl annotate nodes --all deployment=nginx-deployment key=value This will annotate all nodes in the machinedeployment of nginx-deployment with the specified key-value pair. For more information follow this blog by Seyi Ewegbemi.
We have created two machine deployments. kubectl get machinedeployment -A NAMESPACE NAME REPLICAS AVAILABLE-REPLICAS PROVIDER OS KUBELET AGE kube-system abc 3 3 hetzner ubuntu 1.24.9 116m kube-system vnr4jdxd6s-worker-tgl65w 1 1 hetzner ubuntu 1.24.9 13d kubectl get nodes NAME STATUS ROLES AGE VERSION abc-b6647d7cb-bcprj Ready <none> 62m v1.24.9 abc-b6647d7cb-llsq8 Ready <none> 65m v1.24.9 abc-b6647d7cb-mtlsl Ready <none> 58m v1.24.9 vnr4jdxd6s-worker-tgl65w-59ff7fc46c-d9tm6 Ready <none> 13d v1.24.9 We know that we can add a label to a specific node kubectl label nodes abc-b6647d7cb-bcprj key=value But our nodes are autoscaled. We would like to install, for example, MariaDB Galera on specific machinedeployment node. Is it somehow possible to annotate all nodes with a particular machinedeployments? Is it somehow possible to annotate all nodes with a particular machinedeployments?
Yes, you will be able to access the ClusterIP from the Kubernetes worker nodes. When you create a Service of type ClusterIP, Kubernetes assigns it a virtual IP that is only reachable from within the cluster, and it uses iptables rules to forward traffic from that virtual IP to the pods behind the service. Since the nodes are part of the cluster, they can reach the virtual IP as well. So the ClusterIP is accessible from inside the cluster, from any worker node, but you won't be able to access it from outside the cluster. If you really want to dig into the bridge setup, IP assignment and forwarding, this article is a good reference: https://dustinspecker.com/posts/iptables-how-kubernetes-services-direct-traffic-to-pods/
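If you want to see that mechanism on the node itself, you can inspect the NAT rules kube-proxy programs (this assumes kube-proxy runs in the default iptables mode; the IP is the e2mgr service from the question, and the KUBE-SVC-... chain name is a placeholder you read from the first command's output):
sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.101.210.88
sudo iptables -t nat -L KUBE-SVC-XXXXXXXXXXXXXXXX -n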
###Instruction: when CLUSTER-IP is internal to cluster network, then why I am able to ping one of the cluster IP service from host/node where k8s installed. Have a look at ip 10.101.210.88 which is a cluster-ip and as per definition it must be only reachable from any other pod only but still I can reach it from my Ubuntu host/node machine /root#kgs NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16d kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 16d ricinfra service-tiller-ricxapp ClusterIP 10.98.94.194 <none> 44134/TCP 7d7h ricplt aux-entry ClusterIP 10.105.149.143 <none> 80/TCP,443/TCP 7d7h ricplt r4-influxdb-influxdb2 ClusterIP 10.110.14.243 <none> 80/TCP 7d7h ricplt r4-infrastructure-kong-proxy NodePort 10.107.12.178 <none> 32080:32080/TCP,32443:32443/TCP 7d7h ricplt r4-infrastructure-prometheus-alertmanager ClusterIP 10.104.86.76 <none> 80/TCP 7d7h ricplt r4-infrastructure-prometheus-server ClusterIP 10.102.224.176 <none> 80/TCP 7d7h ricplt service-ricplt-a1mediator-http ClusterIP 10.105.45.1 <none> 10000/TCP 7d7h ricplt service-ricplt-a1mediator-rmr ClusterIP 10.108.188.147 <none> 4561/TCP,4562/TCP 7d7h ricplt service-ricplt-alarmmanager-http ClusterIP 10.111.239.130 <none> 8080/TCP 7d7h ricplt service-ricplt-alarmmanager-rmr ClusterIP 10.106.30.195 <none> 4560/TCP,4561/TCP 7d7h ricplt service-ricplt-appmgr-http ClusterIP 10.110.110.91 <none> 8080/TCP 7d7h ricplt service-ricplt-appmgr-rmr ClusterIP 10.110.96.28 <none> 4561/TCP,4560/TCP 7d7h ricplt service-ricplt-dbaas-tcp ClusterIP None <none> 6379/TCP 7d7h ricplt service-ricplt-e2mgr-http ClusterIP 10.101.210.88 <none> 3800/TCP 7d7h ricplt service-ricplt-e2mgr-rmr ClusterIP 10.101.245.34 <none> 4561/TCP,3801/TCP 7d7h ricplt service-ricplt-e2term-prometheus-alpha ClusterIP 10.97.95.213 <none> 8088/TCP 7d7h ricplt service-ricplt-e2term-rmr-alpha ClusterIP 10.100.36.142 <none> 4561/TCP,38000/TCP 7d7h ricplt service-ricplt-e2term-sctp-alpha NodePort 10.108.215.136 <none> 36422:32222/SCTP 7d7h ricplt service-ricplt-o1mediator-http ClusterIP 10.96.196.67 <none> 9001/TCP,8080/TCP,3000/TCP 7d7h ricplt service-ricplt-o1mediator-tcp-netconf NodePort 10.104.237.252 <none> 830:30830/TCP 7d7h ricplt service-ricplt-rtmgr-http ClusterIP 10.105.27.42 <none> 3800/TCP 7d7h ricplt service-ricplt-rtmgr-rmr ClusterIP 10.110.0.158 <none> 4561/TCP,4560/TCP 7d7h ricplt service-ricplt-submgr-http ClusterIP None <none> 3800/TCP 7d7h ricplt service-ricplt-submgr-rmr ClusterIP None <none> 4560/TCP,4561/TCP 7d7h ricplt service-ricplt-vespamgr-http ClusterIP 10.98.139.191 <none> 8080/TCP,9095/TCP 7d7h ricxapp aux-entry ClusterIP 10.99.152.66 <none> 80/TCP,443/TCP 7d7h ricxapp service-ricxapp-bouncer-xapp-http ClusterIP 10.99.222.68 <none> 8080/TCP 25h ricxapp service-ricxapp-bouncer-xapp-rmr ClusterIP 10.97.149.3 <none> 4560/TCP,4561/TCP 25h ricxapp service-ricxapp-example-http ClusterIP 10.101.14.166 <none> 8080/TCP 2d11h ricxapp service-ricxapp-example-rmr ClusterIP 10.97.129.85 <none> 4560/TCP,4561/TCP 2d11h /root# /root# /root#ping 10.101.210.88 --------> this is a cluster ip and i am pinging it from my host/node machine PING 10.101.210.88 (10.101.210.88) 56(84) bytes of data. 
64 bytes from 10.101.210.88: icmp_seq=1 ttl=64 time=0.061 ms 64 bytes from 10.101.210.88: icmp_seq=2 ttl=64 time=0.032 ms ^C --- 10.101.210.88 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1019ms rtt min/avg/max/mdev = 0.032/0.046/0.061/0.014 ms /root# thanks ### Response: Yes, you will be able to access the ClusterIP from Kubernetes worker nodes. When you create the SVC in to Kubernetes with type ClusterIP it assigns the Virtual IP to SVC, Virtual IP address is accessible only within the K8s cluster. Kubernetes uses the IPTables rules to forward traffic from VirtualIP to the PODs that the service is routing traffic to. Kubernetes nodes are part of the Cluster they will be able to access the virtual IP. So clusterIp will be accessible from in cluster across from any of worker node but you won't be able to access it from outside of Cluster. If you really want to go deep understanding the bridge, IP assignment, forwarding you can ref this nice article : https://dustinspecker.com/posts/iptables-how-kubernetes-services-direct-traffic-to-pods/
when CLUSTER-IP is internal to cluster network, then why I am able to ping one of the cluster IP service from host/node where k8s installed. Have a look at ip 10.101.210.88 which is a cluster-ip and as per definition it must be only reachable from any other pod only but still I can reach it from my Ubuntu host/node machine /root#kgs NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 16d kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 16d ricinfra service-tiller-ricxapp ClusterIP 10.98.94.194 <none> 44134/TCP 7d7h ricplt aux-entry ClusterIP 10.105.149.143 <none> 80/TCP,443/TCP 7d7h ricplt r4-influxdb-influxdb2 ClusterIP 10.110.14.243 <none> 80/TCP 7d7h ricplt r4-infrastructure-kong-proxy NodePort 10.107.12.178 <none> 32080:32080/TCP,32443:32443/TCP 7d7h ricplt r4-infrastructure-prometheus-alertmanager ClusterIP 10.104.86.76 <none> 80/TCP 7d7h ricplt r4-infrastructure-prometheus-server ClusterIP 10.102.224.176 <none> 80/TCP 7d7h ricplt service-ricplt-a1mediator-http ClusterIP 10.105.45.1 <none> 10000/TCP 7d7h ricplt service-ricplt-a1mediator-rmr ClusterIP 10.108.188.147 <none> 4561/TCP,4562/TCP 7d7h ricplt service-ricplt-alarmmanager-http ClusterIP 10.111.239.130 <none> 8080/TCP 7d7h ricplt service-ricplt-alarmmanager-rmr ClusterIP 10.106.30.195 <none> 4560/TCP,4561/TCP 7d7h ricplt service-ricplt-appmgr-http ClusterIP 10.110.110.91 <none> 8080/TCP 7d7h ricplt service-ricplt-appmgr-rmr ClusterIP 10.110.96.28 <none> 4561/TCP,4560/TCP 7d7h ricplt service-ricplt-dbaas-tcp ClusterIP None <none> 6379/TCP 7d7h ricplt service-ricplt-e2mgr-http ClusterIP 10.101.210.88 <none> 3800/TCP 7d7h ricplt service-ricplt-e2mgr-rmr ClusterIP 10.101.245.34 <none> 4561/TCP,3801/TCP 7d7h ricplt service-ricplt-e2term-prometheus-alpha ClusterIP 10.97.95.213 <none> 8088/TCP 7d7h ricplt service-ricplt-e2term-rmr-alpha ClusterIP 10.100.36.142 <none> 4561/TCP,38000/TCP 7d7h ricplt service-ricplt-e2term-sctp-alpha NodePort 10.108.215.136 <none> 36422:32222/SCTP 7d7h ricplt service-ricplt-o1mediator-http ClusterIP 10.96.196.67 <none> 9001/TCP,8080/TCP,3000/TCP 7d7h ricplt service-ricplt-o1mediator-tcp-netconf NodePort 10.104.237.252 <none> 830:30830/TCP 7d7h ricplt service-ricplt-rtmgr-http ClusterIP 10.105.27.42 <none> 3800/TCP 7d7h ricplt service-ricplt-rtmgr-rmr ClusterIP 10.110.0.158 <none> 4561/TCP,4560/TCP 7d7h ricplt service-ricplt-submgr-http ClusterIP None <none> 3800/TCP 7d7h ricplt service-ricplt-submgr-rmr ClusterIP None <none> 4560/TCP,4561/TCP 7d7h ricplt service-ricplt-vespamgr-http ClusterIP 10.98.139.191 <none> 8080/TCP,9095/TCP 7d7h ricxapp aux-entry ClusterIP 10.99.152.66 <none> 80/TCP,443/TCP 7d7h ricxapp service-ricxapp-bouncer-xapp-http ClusterIP 10.99.222.68 <none> 8080/TCP 25h ricxapp service-ricxapp-bouncer-xapp-rmr ClusterIP 10.97.149.3 <none> 4560/TCP,4561/TCP 25h ricxapp service-ricxapp-example-http ClusterIP 10.101.14.166 <none> 8080/TCP 2d11h ricxapp service-ricxapp-example-rmr ClusterIP 10.97.129.85 <none> 4560/TCP,4561/TCP 2d11h /root# /root# /root#ping 10.101.210.88 --------> this is a cluster ip and i am pinging it from my host/node machine PING 10.101.210.88 (10.101.210.88) 56(84) bytes of data. 64 bytes from 10.101.210.88: icmp_seq=1 ttl=64 time=0.061 ms 64 bytes from 10.101.210.88: icmp_seq=2 ttl=64 time=0.032 ms ^C --- 10.101.210.88 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1019ms rtt min/avg/max/mdev = 0.032/0.046/0.061/0.014 ms /root# thanks
Is the client using a persistent/long-lived connection? Because the service endpoint will only distribute the new connections in a round-robin manner as per your setting. Kubernetes doesn't offer any built-in mechanism to load balance long-lived connections. For long-lived connections, you can handle the load balancing on the client side or use a reverse proxy (service mesh/traefik ingress) which can take care of the load balancing responsibility.
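For illustration, if the client happened to be written in Go, disabling keep-alives is one quick (if blunt) way to get per-request round-robin back, since every request then opens a fresh connection; the service name and port are taken from the question, the rest is just a sketch:
package main

import (
	"fmt"
	"net/http"
)

func main() {
	client := &http.Client{
		// each request opens a new TCP connection, so the Service's
		// round-robin applies per request instead of per long-lived connection
		Transport: &http.Transport{DisableKeepAlives: true},
	}
	resp, err := client.Get("http://web-svc:80")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}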
###Instruction: a have a problem with Kubernetes service. My service only sends requests to one pod ignoring other pods. I don't know why and how can I debug it. It should distribute request in a round-robin way. For me, it seems that something's wrong service but I don't know to debug it. Outputs of kubectl describe service and nodes along with endpoints apiVersion: v1 kind: Service metadata: name: web-svc spec: type: NodePort ports: - port: 80 targetPort: 80 nodePort: 30002 selector: app: web --- apiVersion: apps/v1 kind: Deployment metadata: name: web-app labels: app: web spec: selector: matchLabels: app: web replicas: 3 template: metadata: labels: app: web spec: containers: - name: web-app image: webimage ports: - containerPort: 80 imagePullPolicy: Never resources: limits: cpu: "0.5" requests: cpu: "0.5" Name: web-svc Namespace: default Labels: <none> Annotations: Selector: app=webpod Type: NodePort IP: 10.111.23.112 Port: <unset> 80/TCP TargetPort: 80/TCP NodePort: <unset> 30002/TCP Endpoints: 10.244.1.7:80,10.244.1.8:80,10.244.1.9:80 Session Affinity: None External Traffic Policy: Cluster Events: <none> Name: kubernetes Namespace: default Labels: component=apiserver provider=kubernetes Annotations: <none> Selector: <none> Type: ClusterIP IP: 10.96.0.1 Port: https 443/TCP TargetPort: 6443/TCP Endpoints: 172.18.0.3:6443 Session Affinity: None Events: <none> Name: web-depl-5c87b748f-kvtqr Namespace: default Priority: 0 Node: kind-worker/172.18.0.2 Start Time: Mon, 04 May 2020 04:20:34 +0000 Labels: app=webpod pod-template-hash=5c87b748f Annotations: <none> Status: Running IP: 10.244.1.8 IPs: IP: 10.244.1.8 Controlled By: ReplicaSet/web-depl-5c87b748f Containers: web: Container ID: containerd://8b431d80fd729c8b0d7e16fa898ad860d1a223b3e191367a68e3b65e330fe61a Image: web Image ID: sha256:16a4c5d1a652b1accbacc75807abc1d9a05e2be38115dc8a5f369a04a439fad2 Port: 80/TCP Host Port: 0/TCP State: Running Started: Mon, 04 May 2020 04:20:36 +0000 Ready: True Restart Count: 0 Limits: cpu: 500m Requests: cpu: 500m Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-c9tgf (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-c9tgf: Type: Secret (a volume populated by a Secret) SecretName: default-token-c9tgf Optional: false QoS Class: Burstable Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: <none> ========= Name: iweblens-svc Namespace: default Labels: <none> Annotations: endpoints.kubernetes.io/last-change-trigger-time: 2020-05-04T04:20:36Z Subsets: Addresses: 10.244.1.7,10.244.1.8,10.244.1.9 NotReadyAddresses: <none> Ports: Name Port Protocol ---- ---- -------- <unset> 80 TCP Events: <none> ### Response: Is the client using a persistent/long-lived connection? Because the service endpoint will only distribute the new connections in a round-robin manner as per your setting. Kubernetes doesn't offer any built-in mechanism to load balance long-lived connections. For long-lived connections, you can handle the load balancing on the client side or use a reverse proxy (service mesh/traefik ingress) which can take care of the load balancing responsibility.
a have a problem with Kubernetes service. My service only sends requests to one pod ignoring other pods. I don't know why and how can I debug it. It should distribute request in a round-robin way. For me, it seems that something's wrong service but I don't know to debug it. Outputs of kubectl describe service and nodes along with endpoints apiVersion: v1 kind: Service metadata: name: web-svc spec: type: NodePort ports: - port: 80 targetPort: 80 nodePort: 30002 selector: app: web --- apiVersion: apps/v1 kind: Deployment metadata: name: web-app labels: app: web spec: selector: matchLabels: app: web replicas: 3 template: metadata: labels: app: web spec: containers: - name: web-app image: webimage ports: - containerPort: 80 imagePullPolicy: Never resources: limits: cpu: "0.5" requests: cpu: "0.5" Name: web-svc Namespace: default Labels: <none> Annotations: Selector: app=webpod Type: NodePort IP: 10.111.23.112 Port: <unset> 80/TCP TargetPort: 80/TCP NodePort: <unset> 30002/TCP Endpoints: 10.244.1.7:80,10.244.1.8:80,10.244.1.9:80 Session Affinity: None External Traffic Policy: Cluster Events: <none> Name: kubernetes Namespace: default Labels: component=apiserver provider=kubernetes Annotations: <none> Selector: <none> Type: ClusterIP IP: 10.96.0.1 Port: https 443/TCP TargetPort: 6443/TCP Endpoints: 172.18.0.3:6443 Session Affinity: None Events: <none> Name: web-depl-5c87b748f-kvtqr Namespace: default Priority: 0 Node: kind-worker/172.18.0.2 Start Time: Mon, 04 May 2020 04:20:34 +0000 Labels: app=webpod pod-template-hash=5c87b748f Annotations: <none> Status: Running IP: 10.244.1.8 IPs: IP: 10.244.1.8 Controlled By: ReplicaSet/web-depl-5c87b748f Containers: web: Container ID: containerd://8b431d80fd729c8b0d7e16fa898ad860d1a223b3e191367a68e3b65e330fe61a Image: web Image ID: sha256:16a4c5d1a652b1accbacc75807abc1d9a05e2be38115dc8a5f369a04a439fad2 Port: 80/TCP Host Port: 0/TCP State: Running Started: Mon, 04 May 2020 04:20:36 +0000 Ready: True Restart Count: 0 Limits: cpu: 500m Requests: cpu: 500m Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-c9tgf (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: default-token-c9tgf: Type: Secret (a volume populated by a Secret) SecretName: default-token-c9tgf Optional: false QoS Class: Burstable Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: <none> ========= Name: iweblens-svc Namespace: default Labels: <none> Annotations: endpoints.kubernetes.io/last-change-trigger-time: 2020-05-04T04:20:36Z Subsets: Addresses: 10.244.1.7,10.244.1.8,10.244.1.9 NotReadyAddresses: <none> Ports: Name Port Protocol ---- ---- -------- <unset> 80 TCP Events: <none>
Just copy the templates folder from the builder, like: COPY --from=builder /app ./ ADD ./migrations /migrations COPY --from=builder ./templates /templates or add the folder like: COPY --from=builder /app ./ ADD ./migrations /migrations ADD ./templates /templates
###Instruction: I have the following directory structure: templates/ a.tmpl services/ service.go main.go Now inside the service.go file i am calling the below function: dir, err := filepath.Abs(filepath.Dir("./templates/")) if err != nil { return nil, err } baseFile := filepath.Join(dir, "a.tmpl") tmpl, err := template.New("base").ParseFiles(baseFile) now the above function is parsing my a.tmpl file as expected. but once this service is up on docker and kubernetes, i am no longer able to open the file since the file does not exists why is that? UPDATE: FROM golang:1.16-buster AS builder # Copy the code from the host and compile it WORKDIR $GOPATH/src/github.com/me/report COPY . ./ # pack templates to binary RUN CGO_ENABLED=0 GOOS=linux go build -mod vendor -ldflags "-X github.com/me/report/cmd.version=$(cat .VERSION)" -o /app . FROM xyz.amazonaws.com/common/platform/base:latest as prod COPY --from=builder /app ./ ADD ./migrations /migrations ENTRYPOINT ["/app"] ### Response: Just copy the templates folder from the builder, like: COPY --from=builder /app ./ ADD ./migrations /migrations COPY --from=builder ./templates /templates or add the folder like: COPY --from=builder /app ./ ADD ./migrations /migrations ADD ./templates /templates
I have the following directory structure: templates/ a.tmpl services/ service.go main.go Now inside the service.go file i am calling the below function: dir, err := filepath.Abs(filepath.Dir("./templates/")) if err != nil { return nil, err } baseFile := filepath.Join(dir, "a.tmpl") tmpl, err := template.New("base").ParseFiles(baseFile) now the above function is parsing my a.tmpl file as expected. but once this service is up on docker and kubernetes, i am no longer able to open the file since the file does not exists why is that? UPDATE: FROM golang:1.16-buster AS builder # Copy the code from the host and compile it WORKDIR $GOPATH/src/github.com/me/report COPY . ./ # pack templates to binary RUN CGO_ENABLED=0 GOOS=linux go build -mod vendor -ldflags "-X github.com/me/report/cmd.version=$(cat .VERSION)" -o /app . FROM xyz.amazonaws.com/common/platform/base:latest as prod COPY --from=builder /app ./ ADD ./migrations /migrations ENTRYPOINT ["/app"]
When you build your binary, Go only compiles in the Go files your program needs; it does not know that your templates directory is required at runtime, so it never ends up in the image unless you copy it in. There are several solutions to your problem: Create an environment variable pointing to where the templates live and read it at runtime (and make sure that directory is copied into the final image). Embed the templates directory into your binary using the embed package so that you can access the files at runtime.
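A minimal sketch of the embed option, assuming the templates directory sits next to the Go file that declares the directive (which matches the layout in the question, with templates/ at the repo root next to main.go):
package main

import (
	"embed"
	"log"
	"os"
	"text/template"
)

//go:embed templates/*.tmpl
var templateFS embed.FS

func main() {
	// the template files are compiled into the binary, so the container
	// no longer needs a templates/ directory on disk
	tmpl, err := template.ParseFS(templateFS, "templates/*.tmpl")
	if err != nil {
		log.Fatal(err)
	}
	if err := tmpl.ExecuteTemplate(os.Stdout, "a.tmpl", nil); err != nil {
		log.Fatal(err)
	}
}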
###Instruction: I have the following directory structure: templates/ a.tmpl services/ service.go main.go Now inside the service.go file i am calling the below function: dir, err := filepath.Abs(filepath.Dir("./templates/")) if err != nil { return nil, err } baseFile := filepath.Join(dir, "a.tmpl") tmpl, err := template.New("base").ParseFiles(baseFile) now the above function is parsing my a.tmpl file as expected. but once this service is up on docker and kubernetes, i am no longer able to open the file since the file does not exists why is that? UPDATE: FROM golang:1.16-buster AS builder # Copy the code from the host and compile it WORKDIR $GOPATH/src/github.com/me/report COPY . ./ # pack templates to binary RUN CGO_ENABLED=0 GOOS=linux go build -mod vendor -ldflags "-X github.com/me/report/cmd.version=$(cat .VERSION)" -o /app . FROM xyz.amazonaws.com/common/platform/base:latest as prod COPY --from=builder /app ./ ADD ./migrations /migrations ENTRYPOINT ["/app"] ### Response: When you build your binary, go only includes the necessary go files to have your program work. It does not know that your templates directory is necessary to the running of the program. There is several solutions to your problem : Create an environment variable pointing to were the templates are and use it on runtime. Embed the templates directory into your binary using the embed package so that you can access the files at runtime
I have the following directory structure: templates/ a.tmpl services/ service.go main.go Now inside the service.go file i am calling the below function: dir, err := filepath.Abs(filepath.Dir("./templates/")) if err != nil { return nil, err } baseFile := filepath.Join(dir, "a.tmpl") tmpl, err := template.New("base").ParseFiles(baseFile) now the above function is parsing my a.tmpl file as expected. but once this service is up on docker and kubernetes, i am no longer able to open the file since the file does not exists why is that? UPDATE: FROM golang:1.16-buster AS builder # Copy the code from the host and compile it WORKDIR $GOPATH/src/github.com/me/report COPY . ./ # pack templates to binary RUN CGO_ENABLED=0 GOOS=linux go build -mod vendor -ldflags "-X github.com/me/report/cmd.version=$(cat .VERSION)" -o /app . FROM xyz.amazonaws.com/common/platform/base:latest as prod COPY --from=builder /app ./ ADD ./migrations /migrations ENTRYPOINT ["/app"]
In our case we were using gcsfuse and mounted the bucket on the GCE instance at /root/bucketmount. Because the bucket was mounted under root's home, the containers needed escalated privileges to use it, which is not recommended, so we instead changed the ownership of the mount folder with chown. That solved it, and our pipeline now has three steps: 1) create the GCS bucket and mount it on the VM, 2) check the permissions on the mounted path and fix them if needed, 3) deploy the Docker containers. For now the mount path is hardcoded; I want to make it randomised and will report back once I've verified the options. Update: I have created a script, triggered by an n8n webhook, that generates a random name for the mount directory and writes that name to a CSV file, from which my CI/CD fetches the details and deploys the containers.
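A rough sketch of how the permission fix can look on the VM (bucket name, user and mount path are placeholders, and the flags assume a reasonably recent gcsfuse):
sudo mkdir -p /mnt/gcs-data
sudo chown appuser:appuser /mnt/gcs-data
# mount so the files appear owned by the non-root user the containers run as
sudo gcsfuse --uid "$(id -u appuser)" --gid "$(id -g appuser)" -o allow_other my-bucket /mnt/gcs-data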
###Instruction: We are planning to move from on-prem to gcp, but the process of designing is taking some time. We want to deploy our docker containers on gce vms. We don't want to manually increase the storage every time the containers fill up the space, we know that this can be done by auto scaling but we don't have enough expertise on this. So, we found out that GCS can be mounted on the VM and we can run our containers on the mounted path, but when we tried touching files or running containers we are getting permission denied error. can anyone help us to resolve it we tried many docs and tutorials but they are a little bit confusing. We used gcsfuse for mounting the bucket to the gce vm ### Response: In our case we were using gcsfuse and mounted the bucket to the gce instance at /root/bucketmount. since the bucket is mounted on root fs, I have to use escalate privilege tag but it's not suggested to use. so we now changed the permissions to the folder using chown command. This helped us so In our pipeline we have created 3 steps 1 for creating a gcs bucket and mount it to the vm 2 for checking permissions for the mounted path and updating the permissions 3 for deploying the docker containers. As of now the path is hardcoded I want to make it randomised verifing options will let you know if I succeed.. Update I have created a script which will get triggered by n8n webhook and creates a random name for the mount directory and by using the same script the name of the directory will be updated to a csv file from which my CICD will fetch the details and deploy the containers.
We are planning to move from on-prem to gcp, but the process of designing is taking some time. We want to deploy our docker containers on gce vms. We don't want to manually increase the storage every time the containers fill up the space, we know that this can be done by auto scaling but we don't have enough expertise on this. So, we found out that GCS can be mounted on the VM and we can run our containers on the mounted path, but when we tried touching files or running containers we are getting permission denied error. can anyone help us to resolve it we tried many docs and tutorials but they are a little bit confusing. We used gcsfuse for mounting the bucket to the gce vm
I don't think that is how it works. The meshservice1 sidecar still needs to know where to reach meshservice2, and it relies on the Kubernetes Service to discover the current endpoint IPs of meshservice2. Alternatively, you can list the addresses explicitly under spec.endpoints, but then you have to update them every time a pod IP changes.
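For completeness, the spec.endpoints variant would look roughly like this, which also shows why it is impractical: the address is a pod IP (a placeholder below) that you would have to keep updating by hand whenever the pod is rescheduled.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: meshservice2
spec:
  hosts:
  - meshservice2.test
  location: MESH_INTERNAL
  resolution: STATIC
  ports:
  - number: 80
    name: http
    protocol: HTTP
  endpoints:
  - address: 10.244.1.23   # current pod IP, placeholder
    ports:
      http: 80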
###Instruction: I have one question regarding ServiceEntry in Istio. As I can see in the guides it says that we can use workloadSelector to select one or more Kubernetes pods of the MESH_INTERNAL to direct traffic into them. That means that we can route traffic directly to a kuberneted pod without the usage of a kubernetes service? Let's say for example that meshservice1 wants to call meshservice2. The below are the labels of meshservice2 pods. kind: Pod labels: app.name: meshservice2 name: meshservice2 and below is its service-entry kind: ServiceEntry metadata: labels: app.name: meshservice2 spec: hosts: - meshservice2.test location: MESH_INTERNAL ports: - name: http number: 80 protocol: HTTP resolution: STATIC workloadSelector: labels: app.name: meshservice2 Is the above two enough to call meshservice2.test succesfully from meshservice1 pods without the definition of a kubernetes svc for memservice2? I tried it that way, it connects to the other service but I receive 503 response code. When I add a kubernetes svc for meshservice2 evrything works as expected. I am just wondering if I can safely discard kubernetes svc from my app for mesh internal calls. ### Response: I don't think that is how it works. meshservice1 sidecar needs to know where to contact meshservice2, it's used Kubernetes service to get the latest endpoint IP of meshservice2 itself. alternatively, you can use spec.endpoints, but you need to update everytime Pod IP is change.
I have one question regarding ServiceEntry in Istio. As I can see in the guides it says that we can use workloadSelector to select one or more Kubernetes pods of the MESH_INTERNAL to direct traffic into them. That means that we can route traffic directly to a kuberneted pod without the usage of a kubernetes service? Let's say for example that meshservice1 wants to call meshservice2. The below are the labels of meshservice2 pods. kind: Pod labels: app.name: meshservice2 name: meshservice2 and below is its service-entry kind: ServiceEntry metadata: labels: app.name: meshservice2 spec: hosts: - meshservice2.test location: MESH_INTERNAL ports: - name: http number: 80 protocol: HTTP resolution: STATIC workloadSelector: labels: app.name: meshservice2 Is the above two enough to call meshservice2.test succesfully from meshservice1 pods without the definition of a kubernetes svc for memservice2? I tried it that way, it connects to the other service but I receive 503 response code. When I add a kubernetes svc for meshservice2 evrything works as expected. I am just wondering if I can safely discard kubernetes svc from my app for mesh internal calls.
This: spec: containers: - name: private-reg-container image: xichen9718/portals_docker_repository:latest is pulling from docker.io and not a private repo. Also, this: Warning Failed 28s (x2 over 46s) kubelet Failed to pull image "xichen9718/portals_docker_repository:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/xichen9718/portals_docker_repository:latest": failed to resolve reference "docker.io/xichen9718/portals_docker_repository:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed Says authorization failed. Presumably because you're trying to authenticate against Docker Hub and not your private repo. If you are using a private repo, you need to include the hostname of your repo in the image, e.g. spec: containers: - name: private-reg-container image: myprivaterepo.com/xichen9718/portals_docker_repository:latest If you don't, it will assume Docker Hub.
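For reference, the regcred secret then has to be created against that same registry host (all values below are placeholders; for a private repository on Docker Hub itself the server would be https://index.docker.io/v1/):
kubectl create secret docker-registry regcred \
  --docker-server=myprivaterepo.com \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>
The pod keeps referencing it via imagePullSecrets: - name: regcred, as in the question.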
###Instruction: I'm trying to pull my Docker Image from my private Docker Registry in Kubernetes, but I got this error: ImagePullBackOff NAME READY STATUS RESTARTS AGE nginx-994fc8fb7-f24sv 2/2 Running 0 2d22h portals-app-669b654d87-lk258 0/1 ImagePullBackOff 0 66m portals-app-669b654d87-p87c6 0/1 ImagePullBackOff 0 67m portals-app-7775d445-c5762 0/1 ImagePullBackOff 0 66m So I used describe command to view the error detail, here is the error: Name: portals-app-669b654d87-lk258 Namespace: default Priority: 0 Service Account: default Node: client-portal-nodepool-qjfch/10.127.0.2 Start Time: Sat, 25 Feb 2023 20:30:56 +1100 Labels: app=app pod-template-hash=669b654d87 Annotations: <none> Status: Pending IP: 10.244.0.60 IPs: IP: 10.244.0.60 Controlled By: ReplicaSet/portals-app-669b654d87 Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-s2j6z: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal BackOff 2m34s (x284 over 67m) kubelet Back-off pulling image "xichen9718/portals_docker_repository:latest" I feel this error message is not that clear, and I guess it might have something to do with the Docker Private Registry authentication. So I create an individual Pod. This is my pod Yaml file.: apiVersion: v1 kind: Pod metadata: name: private-reg spec: containers: - name: private-reg-container image: xichen9718/portals_docker_repository:latest imagePullSecrets: - name: regcred and I ran the describe command again, I got this at this time: Name: private-reg Namespace: default Priority: 0 Service Account: default Node: client-portal-nodepool-qjfch/10.127.0.2 Start Time: Sat, 25 Feb 2023 21:15:52 +1100 Labels: <none> Annotations: <none> Status: Pending IP: 10.244.0.79 IPs: IP: 10.244.0.79 Containers: private-reg-container: Container ID: Image: xichen9718/portals_docker_repository:latest Image ID: Port: <none> Host Port: <none> State: Waiting Reason: ImagePullBackOff Ready: False Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7gvvj (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-7gvvj: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 50s default-scheduler Successfully assigned default/private-reg to client-portal-nodepool-qjfch Warning Failed 28s (x2 over 46s) kubelet Failed to pull image "xichen9718/portals_docker_repository:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/xichen9718/portals_docker_repository:latest": failed to resolve reference "docker.io/xichen9718/portals_docker_repository:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization 
failed Warning Failed 28s (x2 over 46s) kubelet Error: ErrImagePull Normal BackOff 15s (x2 over 45s) kubelet Back-off pulling image "xichen9718/portals_docker_repository:latest" Warning Failed 15s (x2 over 45s) kubelet Error: ImagePullBackOff Normal Pulling 2s (x3 over 49s) kubelet Pulling image "xichen9718/portals_docker_repository:latest" But I think I set the secret successfully, when I run kubectl get secret regcred --output=yaml, I can view my .dockerconfigjson and other data. and I tried to pull the image locally, I can pull it successfully, so that means my image name and tag is correct. I'm really confused now, can anybody help me with this please? Thanks a lot. Update: I deleted my secret, deployment, pod and re-created them again, then it works. I guess it might have something to do with the sequence of my developing process ### Response: This: spec: containers: - name: private-reg-container image: xichen9718/portals_docker_repository:latest Is pulling from docker.io and not a private repo. Also, this: Warning Failed 28s (x2 over 46s) kubelet Failed to pull image "xichen9718/portals_docker_repository:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/xichen9718/portals_docker_repository:latest": failed to resolve reference "docker.io/xichen9718/portals_docker_repository:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed Says authorization failed. Presumeably because you're trying to authenticate against docker hub and not your private repo. If you are using a private repo, you need to include the hostname of your repo in the image, e.g. spec: containers: - name: private-reg-container image: myprivaterepo.com/xichen9718/portals_docker_repository:latest If you don't it will assume docker hub
I'm trying to pull my Docker Image from my private Docker Registry in Kubernetes, but I got this error: ImagePullBackOff NAME READY STATUS RESTARTS AGE nginx-994fc8fb7-f24sv 2/2 Running 0 2d22h portals-app-669b654d87-lk258 0/1 ImagePullBackOff 0 66m portals-app-669b654d87-p87c6 0/1 ImagePullBackOff 0 67m portals-app-7775d445-c5762 0/1 ImagePullBackOff 0 66m So I used describe command to view the error detail, here is the error: Name: portals-app-669b654d87-lk258 Namespace: default Priority: 0 Service Account: default Node: client-portal-nodepool-qjfch/10.127.0.2 Start Time: Sat, 25 Feb 2023 20:30:56 +1100 Labels: app=app pod-template-hash=669b654d87 Annotations: <none> Status: Pending IP: 10.244.0.60 IPs: IP: 10.244.0.60 Controlled By: ReplicaSet/portals-app-669b654d87 Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-s2j6z: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal BackOff 2m34s (x284 over 67m) kubelet Back-off pulling image "xichen9718/portals_docker_repository:latest" I feel this error message is not that clear, and I guess it might have something to do with the Docker Private Registry authentication. So I create an individual Pod. This is my pod Yaml file.: apiVersion: v1 kind: Pod metadata: name: private-reg spec: containers: - name: private-reg-container image: xichen9718/portals_docker_repository:latest imagePullSecrets: - name: regcred and I ran the describe command again, I got this at this time: Name: private-reg Namespace: default Priority: 0 Service Account: default Node: client-portal-nodepool-qjfch/10.127.0.2 Start Time: Sat, 25 Feb 2023 21:15:52 +1100 Labels: <none> Annotations: <none> Status: Pending IP: 10.244.0.79 IPs: IP: 10.244.0.79 Containers: private-reg-container: Container ID: Image: xichen9718/portals_docker_repository:latest Image ID: Port: <none> Host Port: <none> State: Waiting Reason: ImagePullBackOff Ready: False Restart Count: 0 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7gvvj (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kube-api-access-7gvvj: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 50s default-scheduler Successfully assigned default/private-reg to client-portal-nodepool-qjfch Warning Failed 28s (x2 over 46s) kubelet Failed to pull image "xichen9718/portals_docker_repository:latest": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/xichen9718/portals_docker_repository:latest": failed to resolve reference "docker.io/xichen9718/portals_docker_repository:latest": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed Warning 
Failed 28s (x2 over 46s) kubelet Error: ErrImagePull Normal BackOff 15s (x2 over 45s) kubelet Back-off pulling image "xichen9718/portals_docker_repository:latest" Warning Failed 15s (x2 over 45s) kubelet Error: ImagePullBackOff Normal Pulling 2s (x3 over 49s) kubelet Pulling image "xichen9718/portals_docker_repository:latest" But I think I set the secret successfully, when I run kubectl get secret regcred --output=yaml, I can view my .dockerconfigjson and other data. and I tried to pull the image locally, I can pull it successfully, so that means my image name and tag is correct. I'm really confused now, can anybody help me with this please? Thanks a lot. Update: I deleted my secret, deployment, pod and re-created them again, then it works. I guess it might have something to do with the sequence of my developing process
With the node-exporter installed as daemonset, you have the metrics of the entire cluster. To have the overall cluster memory usage, in percentage: 100 * ( sum(node_memory_MemTotal_bytes{service="node-exporter"}) - sum(node_memory_MemAvailable_bytes{service="node-exporter"}) ) / sum(node_memory_MemTotal_bytes{service="node-exporter"}) Result, for example: {} 37.234674067149946 Memory usage by node: 100 * ( sum by (instance) (node_memory_MemTotal_bytes{service="node-exporter"}) - sum by (instance) (node_memory_MemAvailable_bytes{service="node-exporter"}) ) / sum by (instance) (node_memory_MemTotal_bytes{service="node-exporter"}) Result, for example: {instance="x.x.x.x:9100"} 42.51742364002058 {instance="y.y.y.y:9100"} 38.26956501095188 {instance="z.z.z.z:9100"} 36.57150031634585 Memory usage for a specific namespace: 100 * sum(container_memory_working_set_bytes{namespace="my-namespace"}) / sum(node_memory_MemTotal_bytes) Result, for example: {} 4.212481093513011
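If you want that namespace breakdown for every namespace at once instead of one at a time, a grouped variant of the last query works (this assumes cAdvisor metrics are scraped, which they are if container_memory_working_set_bytes resolves; the container!="" matcher drops the duplicate pod-level series):
100 * sum by (namespace) (container_memory_working_set_bytes{container!=""}) / scalar(sum(node_memory_MemTotal_bytes))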
###Instruction: I have this query: 100 * (1 - ((avg_over_time(node_memory_MemFree_bytes[10m]) + avg_over_time(node_memory_Cached_bytes[10m]) + avg_over_time(node_memory_Buffers_bytes[10m])) / avg_over_time(node_memory_MemTotal_bytes[10m]))) However it only returns the namespace where Prometheus is installed: {instance="10.240.0.11:9100", job="kubernetes-service-endpoints", kubernetes_name="node-exporter", kubernetes_namespace="monitoring"} 5.58905365516873 {instance="10.240.0.11:9100", job="node-exporter"} 5.588556605118522 {instance="10.240.0.42:9100", job="kubernetes-service-endpoints", kubernetes_name="node-exporter", kubernetes_namespace="monitoring"} 5.093870850709847 {instance="10.240.0.42:9100", job="node-exporter"} 5.09401539556571 {instance="10.240.0.90:9100", job="kubernetes-service-endpoints", kubernetes_name="node-exporter", kubernetes_namespace="monitoring"} 5.103046564234582 {instance="10.240.0.90:9100", job="node-exporter"} Is it possible to have a similar query that queries the entire cluster, all nodes and namespaces? If yes, how? ### Response: With the node-exporter installed as daemonset, you have the metrics of the entire cluster. To have the overall cluster memory usage, in percentage: 100 * ( sum(node_memory_MemTotal_bytes{service="node-exporter"}) - sum(node_memory_MemAvailable_bytes{service="node-exporter"}) ) / sum(node_memory_MemTotal_bytes{service="node-exporter"}) Result, for example: {} 37.234674067149946 Memory usage by node: 100 * ( sum by (instance) (node_memory_MemTotal_bytes{service="node-exporter"}) - sum by (instance) (node_memory_MemAvailable_bytes{service="node-exporter"}) ) / sum by (instance) (node_memory_MemTotal_bytes{service="node-exporter"}) Result, for example: {instance="x.x.x.x:9100"} 42.51742364002058 {instance="y.y.y.y:9100"} 38.26956501095188 {instance="z.z.z.z:9100"} 36.57150031634585 Memory usage for a specific namespace: 100 * sum(container_memory_working_set_bytes{namespace="my-namespace"}) / sum(node_memory_MemTotal_bytes) Result, for example: {} 4.212481093513011
I have this query: 100 * (1 - ((avg_over_time(node_memory_MemFree_bytes[10m]) + avg_over_time(node_memory_Cached_bytes[10m]) + avg_over_time(node_memory_Buffers_bytes[10m])) / avg_over_time(node_memory_MemTotal_bytes[10m]))) However it only returns the namespace where Prometheus is installed: {instance="10.240.0.11:9100", job="kubernetes-service-endpoints", kubernetes_name="node-exporter", kubernetes_namespace="monitoring"} 5.58905365516873 {instance="10.240.0.11:9100", job="node-exporter"} 5.588556605118522 {instance="10.240.0.42:9100", job="kubernetes-service-endpoints", kubernetes_name="node-exporter", kubernetes_namespace="monitoring"} 5.093870850709847 {instance="10.240.0.42:9100", job="node-exporter"} 5.09401539556571 {instance="10.240.0.90:9100", job="kubernetes-service-endpoints", kubernetes_name="node-exporter", kubernetes_namespace="monitoring"} 5.103046564234582 {instance="10.240.0.90:9100", job="node-exporter"} Is it possible to have a similar query that queries the entire cluster, all nodes and namespaces? If yes, how?
After a couple of days' research, I managed to find a way to do this, and it is annoyingly simple... The key to all of this lies in the documentation here. Basically, in order to update from mongo 4.4.0 to 6.0.4, you need to do it in steps: First, change the mongo version from "4.4.0" to e.g. "5.0.4", whilst setting the featureCompatibilityVersion to "5.0": apiVersion: mongodbcommunity.mongodb.com/v1 kind: MongoDBCommunity metadata: name: mdb0 spec: version: "5.0.4" featureCompatibilityVersion: "5.0" ... After having applied this, verify that the featureCompatibilityVersion is indeed 5.0 and that all MongoDB pods are "5.0.4". If the MongoDB pods aren't "5.0.4", you need to restart the service (See "Restarting everything" below). You can now run the second step: Update the mongo version to "6.0.4" and the featureCompatibilityVersion to "6.0": apiVersion: mongodbcommunity.mongodb.com/v1 kind: MongoDBCommunity metadata: name: mdb0 spec: version: "6.0.4" featureCompatibilityVersion: "6.0" ... Apply this change and verify that the featureCompatibilityVersion is indeed 6.0, and that all MongoDB pods are "6.0.4". Once again, if the pods aren't "6.0.4", Restart everything according to the procedure below. Checking feature compatibility version The easiest way to do this is to: Port-forward the mongodb connection to your host: kubectl port-forward service/mdb0-svc -n mongodb 27017:27017 (according to the guide). Install mongosh on your host (if you haven't done so already). Run the following query: mongosh -u mongoadmin -p secretpassword --eval 'db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})' (if you're using the same credentials as the guide). Restarting everything During my development process, I had multiple occasions where I had to restart everything. Here's my way of doing that: Delete the config's resources: kubectl delete -f config/samples/arbitrary_statefulset_configuration/mongodb.com_v1_hostpath.yaml -n mongodb . Whilst (1) is pending, execute the following lines of code to make (1) being able to finish: kubectl patch pv data-volume-0 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb kubectl patch pv data-volume-1 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb kubectl patch pv data-volume-2 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb kubectl patch pv logs-volume-0 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb kubectl patch pv logs-volume-1 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb kubectl patch pv logs-volume-2 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb Run the following lines of code: kubectl delete deployments.apps mongodb-kubernetes-operator -n mongodb kubectl delete crd mongodbcommunity.mongodbcommunity.mongodb.com kubectl apply -f config/crd/bases/mongodbcommunity.mongodb.com_mongodbcommunity.yaml kubectl apply -k config/rbac/ -n mongodb kubectl create -f config/manager/manager.yaml -n mongodb kubectl apply -f new-user.yaml -n mongodb kubectl apply -f config/samples/arbitrary_statefulset_configuration/mongodb.com_v1_hostpath.yaml -n mongodb
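If you ever need to bump the FCV by hand instead of waiting for the operator (for example to confirm that step 1 really took effect), the same port-forward plus mongosh session can do it (credentials as in the guide):
mongosh -u mongoadmin -p secretpassword --eval 'db.adminCommand({setFeatureCompatibilityVersion: "5.0"})'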
###Instruction: I have set up the Kubernetes MongoDB operator according to this guide: https://adamtheautomator.com/mongodb-kubernetes/ and it works well. However, when I try to update the MongoDB version to 6.0.4, I get the following error: { "error":"UPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document (ERROR: Location4926900: Invalid featureCompatibilityVersion document in admin.system.version: { _id: \"featureCompatibilityVersion\", version: \"4.4\" }. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility. :: caused by :: Invalid feature compatibility version value, expected '5.0' or '5.3' or '6.0. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility.). If the current featureCompatibilityVersion is below 5.0, see the documentation on upgrading at https://docs.mongodb.com/master/release-notes/5.0/#upgrade-procedures."} I have followed this guide: https://github.com/mongodb/mongodb-kubernetes-operator/blob/master/docs/deploy-configure.md#upgrade-your-mongodbcommunity-resource-version-and-feature-compatibility-version This means that my config/samples/arbitrary_statefulset_configuration/mongodb.com_v1_hostpath.yaml file looks like this: apiVersion: mongodbcommunity.mongodb.com/v1 kind: MongoDBCommunity metadata: name: mdb0 spec: members: 2 type: ReplicaSet version: "6.0.4" featureCompatibilityVersion: "6.0" security: ... The rest is set according to the linked guide (in the first link above). The error that is thrown suggests that, for whatever reason, the featureCompatibilityVersion field is ignored, even though I have explicitly set it to "6.0". However, since the documentation clearly states that this is a possible configuration, this shouldn't be the case. My question then is: am I doing something wrong, or is this a bug? ### Response: After a couple of days' research, I managed to find a way to do this, and it is annoyingly simple... The key to all of this lies in the documentation here. Basically, in order to update from mongo 4.4.0 to 6.0.4, you need to do it in steps: First, change the mongo version from "4.4.0" to e.g. "5.0.4", whilst setting the featureCompatibilityVersion to "5.0": apiVersion: mongodbcommunity.mongodb.com/v1 kind: MongoDBCommunity metadata: name: mdb0 spec: version: "5.0.4" featureCompatibilityVersion: "5.0" ... After having applied this, verify that the featureCompatibilityVersion is indeed 5.0 and that all MongoDB pods are "5.0.4". If the MongoDB pods aren't "5.0.4", you need to restart the service (See "Restarting everything" below). You can now run the second step: Update the mongo version to "6.0.4" and the featureCompatibilityVersion to "6.0": apiVersion: mongodbcommunity.mongodb.com/v1 kind: MongoDBCommunity metadata: name: mdb0 spec: version: "6.0.4" featureCompatibilityVersion: "6.0" ... Apply this change and verify that the featureCompatibilityVersion is indeed 6.0, and that all MongoDB pods are "6.0.4". Once again, if the pods aren't "6.0.4", Restart everything according to the procedure below. Checking feature compatibility version The easiest way to do this is to: Port-forward the mongodb connection to your host: kubectl port-forward service/mdb0-svc -n mongodb 27017:27017 (according to the guide). Install mongosh on your host (if you haven't done so already). Run the following query: mongosh -u mongoadmin -p secretpassword --eval 'db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})' (if you're using the same credentials as the guide). 
Restarting everything During my development process, I had multiple occasions where I had to restart everything. Here's my way of doing that: Delete the config's resources: kubectl delete -f config/samples/arbitrary_statefulset_configuration/mongodb.com_v1_hostpath.yaml -n mongodb . Whilst (1) is pending, execute the following lines of code to make (1) being able to finish: kubectl patch pv data-volume-0 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb kubectl patch pv data-volume-1 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb kubectl patch pv data-volume-2 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb kubectl patch pv logs-volume-0 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb kubectl patch pv logs-volume-1 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb kubectl patch pv logs-volume-2 -p "{\"metadata\":{\"finalizers\":null}}" -n mongodb Run the following lines of code: kubectl delete deployments.apps mongodb-kubernetes-operator -n mongodb kubectl delete crd mongodbcommunity.mongodbcommunity.mongodb.com kubectl apply -f config/crd/bases/mongodbcommunity.mongodb.com_mongodbcommunity.yaml kubectl apply -k config/rbac/ -n mongodb kubectl create -f config/manager/manager.yaml -n mongodb kubectl apply -f new-user.yaml -n mongodb kubectl apply -f config/samples/arbitrary_statefulset_configuration/mongodb.com_v1_hostpath.yaml -n mongodb
I have set up the Kubernetes MongoDB operator according to this guide: https://adamtheautomator.com/mongodb-kubernetes/ and it works well. However, when I try to update the MongoDB version to 6.0.4, I get the following error: { "error":"UPGRADE PROBLEM: Found an invalid featureCompatibilityVersion document (ERROR: Location4926900: Invalid featureCompatibilityVersion document in admin.system.version: { _id: \"featureCompatibilityVersion\", version: \"4.4\" }. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility. :: caused by :: Invalid feature compatibility version value, expected '5.0' or '5.3' or '6.0. See https://docs.mongodb.com/master/release-notes/5.0-compatibility/#feature-compatibility.). If the current featureCompatibilityVersion is below 5.0, see the documentation on upgrading at https://docs.mongodb.com/master/release-notes/5.0/#upgrade-procedures."} I have followed this guide: https://github.com/mongodb/mongodb-kubernetes-operator/blob/master/docs/deploy-configure.md#upgrade-your-mongodbcommunity-resource-version-and-feature-compatibility-version This means that my config/samples/arbitrary_statefulset_configuration/mongodb.com_v1_hostpath.yaml file looks like this: apiVersion: mongodbcommunity.mongodb.com/v1 kind: MongoDBCommunity metadata: name: mdb0 spec: members: 2 type: ReplicaSet version: "6.0.4" featureCompatibilityVersion: "6.0" security: ... The rest is set according to the linked guide (in the first link above). The error that is thrown suggests that, for whatever reason, the featureCompatibilityVersion field is ignored, even though I have explicitly set it to "6.0". However, since the documentation clearly states that this is a possible configuration, this shouldn't be the case. My question then is: am I doing something wrong, or is this a bug?
I want to dynamically change the amount of replicas of a deployment depending on the available resources, especially memory. You can use the Kubernetes HPA (Horizontal Pod Autoscaler), which dynamically changes the number of replicas based on CPU/memory utilization. You can also set a different default and maximum number of replicas per environment: for dev you have 1 replica scaling to 3, for staging a minimum of 3 scaling up to 5, and for prod a minimum of 4 scaling up to 10, etc. Read more about the HPA: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ Example: kubectl autoscale deployment <Deployment-name> --cpu-percent=50 --min=1 --max=10 Example ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/ When there is less traffic the HPA scales the pods back down automatically based on CPU usage, without user input; scaling all the way down to zero is not something the standard HPA does on its own and needs something like KEDA or the HPAScaleToZero feature gate. If you are not looking for autoscaling but only for per-environment templating, ignore the above and use Skaffold or Helm to manage the YAML templating dynamically. Deployment.yaml replicas: {{ .Values.replicas }} values-dev.yaml replicas: 3 values-staging.yaml replicas: 5 Based on the environment you pass the corresponding values-*.yaml to Helm and it will render the template for you.
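Since the question is specifically about memory, note that the imperative kubectl autoscale example only covers CPU; a declarative autoscaling/v2 manifest can target memory utilization instead. A minimal sketch, reusing the my-deployment/my-namespace names from the question — the 70% target and the min/max bounds are illustrative placeholders:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-deployment-hpa
  namespace: my-namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicas: 1          # lowest replica count the HPA may scale down to
  maxReplicas: 3          # upper bound the HPA may scale out to
  metrics:
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average memory usage exceeds 70% of requests
```

Utilization targets are computed against the container resource requests, so the deployment must declare memory requests for this to work.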
###Instruction: I am using Google Kubernetes Engine and some of my deployments are more important than others (For example, the staging environment can run on less replicas or even stop if needed). I want to dynamically change the amount of replicas of a deployment depending on the available resources, especially memory. Ideally, I would set a default number of replicas which is used normally and if the cluster is running low on resources, it should reduce the number of replicas of that deployment. This should happen for some deployments but not all of them. apiVersion: apps/v1 kind: Deployment metadata: labels: app: my-deployment name: my-deployment namespace: my-namespace spec: replicas: 3 # This should be lower if the cluster is running on low resources (memory or CPU) selector: matchLabels: app: my-deployment template: metadata: creationTimestamp: null labels: app: my-deployment spec: containers: - image: my/image:version name: my-deployment It should even be possible to reduce the number of replicas down to 0 for some deployments but not for others. Note that my deployments are distributed across multiple namespaces (if that matters). ### Response: I want to dynamically change the amount of replicas of a deployment depending on the available resources, especially memory. You can use the Kubernetes HPA (Horizontal Pod Autoscaler), which dynamically changes the number of replicas based on CPU/memory utilization. You can also set a different default and maximum number of replicas per environment: for dev you have 1 replica scaling to 3, for staging a minimum of 3 scaling up to 5, and for prod a minimum of 4 scaling up to 10, etc. Read more about the HPA: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ Example: kubectl autoscale deployment <Deployment-name> --cpu-percent=50 --min=1 --max=10 Example ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/ When there is less traffic the HPA scales the pods back down automatically based on CPU usage, without user input; scaling all the way down to zero is not something the standard HPA does on its own and needs something like KEDA or the HPAScaleToZero feature gate. If you are not looking for autoscaling but only for per-environment templating, ignore the above and use Skaffold or Helm to manage the YAML templating dynamically. Deployment.yaml replicas: {{ .Values.replicas }} values-dev.yaml replicas: 3 values-staging.yaml replicas: 5 Based on the environment you pass the corresponding values-*.yaml to Helm and it will render the template for you.
I am using Google Kubernetes Engine and some of my deployments are more important than others (For example, the staging environment can run on less replicas or even stop if needed). I want to dynamically change the amount of replicas of a deployment depending on the available resources, especially memory. Ideally, I would set a default number of replicas which is used normally and if the cluster is running low on resources, it should reduce the number of replicas of that deployment. This should happen for some deployments but not all of them. apiVersion: apps/v1 kind: Deployment metadata: labels: app: my-deployment name: my-deployment namespace: my-namespace spec: replicas: 3 # This should be lower if the cluster is running on low resources (memory or CPU) selector: matchLabels: app: my-deployment template: metadata: creationTimestamp: null labels: app: my-deployment spec: containers: - image: my/image:version name: my-deployment It should even be possible to reduce the number of replicas down to 0 for some deployments but not for others. Note that my deployments are distributed across multiple namespaces (if that matters).
Well... I feel dumb for not reading the docs more closely. Here I am to answer my own question. Create a RuntimeClass like this: kind: RuntimeClass apiVersion: node.k8s.io/v1 metadata: name: nvidia handler: nvidia Then add runtimeClassName: nvidia to the pod spec of any pods whose containers should use the nvidia containerd runtime. That's all. It just works.
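To make this concrete, here is a hedged sketch of a pod that opts into that handler; the image, command and GPU resource request are illustrative assumptions (and the GPU request assumes the NVIDIA device plugin is installed), not values from the original question:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
spec:
  runtimeClassName: nvidia      # selects the nvidia containerd handler defined by the RuntimeClass
  containers:
    - name: cuda
      image: nvidia/cuda:12.2.0-base-ubuntu22.04   # hypothetical image for illustration
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1     # assumes the NVIDIA device plugin is running on the node
```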
###Instruction: I want to run some gpu workloads on my bare metal k8s cluster. So I have installed the nvidia containerd runtime engine on my cluster. But the cilium cni pods crashes when I make nvidia the default runtime. (I'll post about that some other place) I'm thinking I should be able to work around this problem by scheduling only the gpu pods on the nvidia runtime and leave runc as the default. Is it possible to specify different runtime engines for different workloads? Is this a good workaround? If so, how do I configure it? This is how I've install the nvidia drivers and containerd runtime https://docs.nvidia.com/datacenter/cloud-native/kubernetes/install-k8s.html#option-2-installing-kubernetes-using-kubeadm I found this documentation, but it's a little dry https://kubernetes.io/docs/concepts/containers/runtime-class/ ### Response: well... I feel dumb for not reading the docs more closely. Here I am to answer my own question. create a RuntimeClass like this: kind: RuntimeClass apiVersion: node.k8s.io/v1 metadata: name: nvidia handler: nvidia add runtimeClassName: nvidia to the container spec of any containers that you want to use the nvidia containerd engine. Thats all. It just works.
I want to run some gpu workloads on my bare metal k8s cluster. So I have installed the nvidia containerd runtime engine on my cluster. But the cilium cni pods crashes when I make nvidia the default runtime. (I'll post about that some other place) I'm thinking I should be able to work around this problem by scheduling only the gpu pods on the nvidia runtime and leave runc as the default. Is it possible to specify different runtime engines for different workloads? Is this a good workaround? If so, how do I configure it? This is how I've install the nvidia drivers and containerd runtime https://docs.nvidia.com/datacenter/cloud-native/kubernetes/install-k8s.html#option-2-installing-kubernetes-using-kubeadm I found this documentation, but it's a little dry https://kubernetes.io/docs/concepts/containers/runtime-class/
Just use the options to the kubectl command to get the name column as the only output. kubectl get pods -o custom-columns=NAME:.metadata.name Then use a grep/sed to filter out what you do or do not want.
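Putting the two steps together, a small sketch assuming the pods are named my-pod-<number> as in the question:

```sh
# Print only the pod names, then keep the ones that end in a number.
kubectl get pods --no-headers -o custom-columns=NAME:.metadata.name \
  | grep -E '^my-pod-[0-9]+$'
```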
###Instruction: I tried the following but failed to get a list of all pods like these: my-pod-1 my-pod-19 my-pod-24 etc. kgp | grep -E my-pod-[0-9]+ kgp | grep -E my-pod-[0-9] kgp | grep -i -E my-pod-[0-9] kgp | grep -i -E "my-pod-[0-9]" kgp | grep -i -E "^my-pod-[0-9]" kgp | grep "^my-pod-[0-9]" kgp | grep -v 'my-pod-[0-9]' kgp == kubectl get pods Also following kind Timor's suggestion I tried the following: user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -E '^my- pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -F '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -G '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -e '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -P '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep my-pod my-pod-0 1/1 Running 0 4d9h my-pod-1 1/1 Running 0 22h my-pod-aggregator-cronjob-27946622-qnttj 1/1 Running 0 16m my-pod-manager-766d97676c-9hb5c 1/1 Running 0 4d8h Please advise. ### Response: Just use the options to the kubectl command to get the name column as the only output. kubectl get pods -o custom-columns=NAME:.metadata.name Then use a grep/sed to filter out what you do or do not want.
I tried the following but failed to get a list of all pods like these: my-pod-1 my-pod-19 my-pod-24 etc. kgp | grep -E my-pod-[0-9]+ kgp | grep -E my-pod-[0-9] kgp | grep -i -E my-pod-[0-9] kgp | grep -i -E "my-pod-[0-9]" kgp | grep -i -E "^my-pod-[0-9]" kgp | grep "^my-pod-[0-9]" kgp | grep -v 'my-pod-[0-9]' kgp == kubectl get pods Also following kind Timor's suggestion I tried the following: user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -E '^my- pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -F '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -G '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -e '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -P '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep my-pod my-pod-0 1/1 Running 0 4d9h my-pod-1 1/1 Running 0 22h my-pod-aggregator-cronjob-27946622-qnttj 1/1 Running 0 16m my-pod-manager-766d97676c-9hb5c 1/1 Running 0 4d8h Please advise.
One way to ensure that only one replica per partition processes each message is to use a queue group subscription in NATS. When multiple subscribers are part of the same queue group, only one of them receives each message, so you can guarantee that a given message is handled by exactly one replica of the partition. Example: have every replica subscribe to its partition's subject and join a queue group derived from the partition (the pod name or another unique identifier can be used to work out which partition a replica belongs to). If there are multiple subscribers in the queue group, NATS distributes messages among them in a round-robin fashion; if there is only one subscriber in the queue group, it receives all the messages for the partition. Either way each message reaches only one replica, and if a subscriber goes down, NATS automatically keeps delivering to the remaining members of the group. For more information please check this official page.
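A minimal sketch with the official Go client, assuming the partition number is passed in via an environment variable and using the subject mapping from the question (service.*.<partition>); the queue-group and subject names are illustrative choices, not something prescribed by NATS:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL) // e.g. the in-cluster NATS service URL
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Drain()

	partition := os.Getenv("PARTITION") // hypothetical: which partition this replica serves
	subject := fmt.Sprintf("service.*.%s", partition)
	queue := fmt.Sprintf("partition-%s-workers", partition)

	// All replicas of this partition join the same queue group, so each
	// message on the partition's subject is delivered to exactly one of them.
	if _, err := nc.QueueSubscribe(subject, queue, func(m *nats.Msg) {
		log.Printf("handled %s: %s", m.Subject, string(m.Data))
	}); err != nil {
		log.Fatal(err)
	}

	select {} // keep the subscriber running
}
```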
###Instruction: I'm developing a NATS based solution with deterministic subject partitioning, I use this type of mapping: service.* --> service.*.<number of partition> Now I need a way to subscribe only one of my replicas per partition, what's the right way to do that? I was thinking about K8s ordinal index, but all the replicas should be stateless. ### Response: One way to ensure that only one replica per partition processes each message is to use a queue group subscription in NATS. When multiple subscribers are part of the same queue group, only one of them receives each message, so you can guarantee that a given message is handled by exactly one replica of the partition. Example: have every replica subscribe to its partition's subject and join a queue group derived from the partition (the pod name or another unique identifier can be used to work out which partition a replica belongs to). If there are multiple subscribers in the queue group, NATS distributes messages among them in a round-robin fashion; if there is only one subscriber in the queue group, it receives all the messages for the partition. Either way each message reaches only one replica, and if a subscriber goes down, NATS automatically keeps delivering to the remaining members of the group. For more information please check this official page.
I'm developing a NATS based solution with deterministic subject partitioning, I use this type of mapping: service.* --> service.*.<number of partition> Now I need a way to subscribe only one of my replicas per partition, what's the right way to do that? I was thinking about K8s ordinal index, but all the replicas should be stateless.
Use grep like so: kubectl get pods | grep -P '^my-pod-\d+\s' Here, GNU grep uses the following option: -P : Use Perl regexes. ^my-pod-\d+\s : matches the start of the line, followed by my-pod-, then 1 or more digits, then a whitespace character. The whitespace boundary is needed because kubectl get pods prints extra columns (READY, STATUS, ...) after the name, so anchoring the pattern with $ would never match on the default output. SEE ALSO: perlre - Perl regular expressions
###Instruction: I tried the following but failed to get a list of all pods like these: my-pod-1 my-pod-19 my-pod-24 etc. kgp | grep -E my-pod-[0-9]+ kgp | grep -E my-pod-[0-9] kgp | grep -i -E my-pod-[0-9] kgp | grep -i -E "my-pod-[0-9]" kgp | grep -i -E "^my-pod-[0-9]" kgp | grep "^my-pod-[0-9]" kgp | grep -v 'my-pod-[0-9]' kgp == kubectl get pods Also following kind Timor's suggestion I tried the following: user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -E '^my- pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -F '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -G '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -e '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -P '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep my-pod my-pod-0 1/1 Running 0 4d9h my-pod-1 1/1 Running 0 22h my-pod-aggregator-cronjob-27946622-qnttj 1/1 Running 0 16m my-pod-manager-766d97676c-9hb5c 1/1 Running 0 4d8h Please advise. ### Response: Use grep like so: kubectl get pods | grep -P '^my-pod-\d+$' Here, GNU grep uses the following option: -P : Use Perl regexes. ^my-pod-\d+$ : matches the start of the line, followed by my_pod-, then 1 or more digits until the end of the line. SEE ALSO: perlre - Perl regular expressions
I tried the following but failed to get a list of all pods like these: my-pod-1 my-pod-19 my-pod-24 etc. kgp | grep -E my-pod-[0-9]+ kgp | grep -E my-pod-[0-9] kgp | grep -i -E my-pod-[0-9] kgp | grep -i -E "my-pod-[0-9]" kgp | grep -i -E "^my-pod-[0-9]" kgp | grep "^my-pod-[0-9]" kgp | grep -v 'my-pod-[0-9]' kgp == kubectl get pods Also following kind Timor's suggestion I tried the following: user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -E '^my- pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -F '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -G '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -e '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep -P '^my-pod-\d+$' user@complt130:/mnt/c/Users/usert$ kubectl get pods | grep my-pod my-pod-0 1/1 Running 0 4d9h my-pod-1 1/1 Running 0 22h my-pod-aggregator-cronjob-27946622-qnttj 1/1 Running 0 16m my-pod-manager-766d97676c-9hb5c 1/1 Running 0 4d8h Please advise.
I found that the problem was with port 8000 used for the websocket server. That port was not open on the firewall; after changing the websocket server port to 80, everything started working.
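The alternative to moving the websocket server to port 80 would have been to open port 8000 on the GCP firewall. A hedged sketch — the network name, target tags and source range are placeholders that would need to match the cluster's node pool:

```sh
# Hypothetical rule allowing ingress traffic to reach the port-8000 backend;
# adjust --network, --target-tags and --source-ranges to your environment.
gcloud compute firewall-rules create allow-websocket-8000 \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:8000 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=gke-cluster-node
```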
###Instruction: I want to expose 2 services: web application at example.com port 80 web sockets at example.com/app/ws port 8000 with given configuration: apiVersion: "v1" kind: "Service" metadata: name: "web-app-service" annotations: cloud.google.com/load-balancer-type: "Internal" spec: type: "LoadBalancer" selector: app.kubernetes.io/instance: "web-app" ports: - protocol: "TCP" port: 80 targetPort: 80 name: "http" --- apiVersion: "v1" kind: "Service" metadata: name: "web-sockets-service" spec: type: "NodePort" selector: app.kubernetes.io/instance: "web-sockets" ports: - protocol: "TCP" port: 8000 targetPort: 8000 name: "http" --- apiVersion: "networking.k8s.io/v1" kind: "Ingress" metadata: name: "web-app-ingress" annotations: kubernetes.io/ingress.class: "gce" spec: rules: - host: "example.com" http: paths: - path: "/app/ws" pathType: "Prefix" backend: service: name: "web-sockets-service" port: number: 8000 - path: "/" pathType: "Prefix" backend: service: name: "web-app-service" port: number: 80 --- I can reach web application at example.com, but trying to communicate with example.com/app/ws results with failed to connect - timeout. Curl call to internal service IP at port 8000 results with success, so im misconfigured something at ingress configuation of web sockets curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" example.com/app/ws:8000 results with timeout internal cluster call (different pod than websocket server): curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" <web-sockets-service ip>/app/ws:8000 result with success websocket server response Websocat: request to service: > websocat ws://<web-sockets-service ip>:8000/app/ws {"event":"connection_established","data":"{\"socket_id\":\"620572367.722478523\",\"activity_timeout\":30}"} request to ingress: > websocat ws://example.com:8000/app/ws websocat: WebSocketError: I/O Failure websocat: error running ### Response: I found the problem was with used port 8000 for websocket server. Used port was not open on firewall, after changing websocket server port to 80, everything started working.
I want to expose 2 services: web application at example.com port 80 web sockets at example.com/app/ws port 8000 with given configuration: apiVersion: "v1" kind: "Service" metadata: name: "web-app-service" annotations: cloud.google.com/load-balancer-type: "Internal" spec: type: "LoadBalancer" selector: app.kubernetes.io/instance: "web-app" ports: - protocol: "TCP" port: 80 targetPort: 80 name: "http" --- apiVersion: "v1" kind: "Service" metadata: name: "web-sockets-service" spec: type: "NodePort" selector: app.kubernetes.io/instance: "web-sockets" ports: - protocol: "TCP" port: 8000 targetPort: 8000 name: "http" --- apiVersion: "networking.k8s.io/v1" kind: "Ingress" metadata: name: "web-app-ingress" annotations: kubernetes.io/ingress.class: "gce" spec: rules: - host: "example.com" http: paths: - path: "/app/ws" pathType: "Prefix" backend: service: name: "web-sockets-service" port: number: 8000 - path: "/" pathType: "Prefix" backend: service: name: "web-app-service" port: number: 80 --- I can reach web application at example.com, but trying to communicate with example.com/app/ws results with failed to connect - timeout. Curl call to internal service IP at port 8000 results with success, so im misconfigured something at ingress configuation of web sockets curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" example.com/app/ws:8000 results with timeout internal cluster call (different pod than websocket server): curl -i -N -H "Connection: Upgrade" -H "Upgrade: websocket" <web-sockets-service ip>/app/ws:8000 result with success websocket server response Websocat: request to service: > websocat ws://<web-sockets-service ip>:8000/app/ws {"event":"connection_established","data":"{\"socket_id\":\"620572367.722478523\",\"activity_timeout\":30}"} request to ingress: > websocat ws://example.com:8000/app/ws websocat: WebSocketError: I/O Failure websocat: error running
This error might be due to the label ‘jenkins-eks-pod’ not being defined on the Jenkins server. To create a label on the Jenkins server: go to Manage Jenkins > Manage Nodes and Clouds > labels and then enter the label name. After creating this label, try to run the job again and check if it works. Refer to this Blog by Bibin Wilson. As per the error No httpclient implementations found on the context classloader, you need to upgrade the Jenkins Kubernetes plugin to 3802; this error is fixed in that release. Refer to jenkinsci/kubernetes-plugin/releases/tag/3802.
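Another frequent cause of "Jenkins doesn't have label X" is simply that the job requests a label that no node or pod template defines. A hedged sketch of a declarative pipeline requesting the label that the pod template in the question's logs actually carries (jenkins-non-prod-eks-global-slave); the stage contents are placeholders:

```groovy
pipeline {
    // Must match a label defined on a node or on the Kubernetes pod template.
    agent { label 'jenkins-non-prod-eks-global-slave' }

    stages {
        stage('Build') {
            steps {
                sh 'echo "running inside the EKS agent pod"'  // placeholder step
            }
        }
    }
}
```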
###Instruction: After running a pipeline job in Jenkins that runs in my k8s cluster I am getting this error - ‘Jenkins’ doesn’t have label ‘jenkins-eks-pod’. What am I missing in my configuration? Pod Logs in k8s- 2023-02-20 14:37:03.379+0000 [id=1646] WARNING o.c.j.p.k.KubernetesLauncher#launch: Error in provisioning; agent=KubernetesSlave name: jenkins-eks-agent-h4z6t, template=PodTemplate{id='05395ad55cc56972ee3e4c69c2731189bc03a75c0b51e637dc7f868fa85d07e8', name='jenkins-eks-agent', namespace='default', slaveConnectTimeout=100, label='jenkins-non-prod-eks-global-slave', serviceAccount='default', nodeUsageMode=NORMAL, podRetention='Never', containers=[ContainerTemplate{name='jnlp', image='805787217936.dkr.ecr.us-west-2.amazonaws.com/aura-jenkins-slave:ecs-global-node_master_57', alwaysPullImage=true, workingDir='/home/jenkins/agent', command='', args='', ttyEnabled=true, resourceRequestCpu='512m', resourceRequestMemory='512Mi', resourceRequestEphemeralStorage='', resourceLimitCpu='512m', resourceLimitMemory='512Mi', resourceLimitEphemeralStorage='', envVars=[KeyValueEnvVar [getValue()=http://jenkins-non-prod.default.svc.cluster.local:8080/, getKey()=JENKINS_URL]], livenessProbe=ContainerLivenessProbe{execArgs='', timeoutSeconds=0, initialDelaySeconds=0, failureThreshold=0, periodSeconds=0, successThreshold=0}}]} java.lang.IllegalStateException: Containers are terminated with exit codes: {jnlp=0} at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.checkTerminatedContainers(KubernetesLauncher.java:275) at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:225) at hudson.slaves.SlaveComputer.lambda$_connect$0(SlaveComputer.java:298) at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:48) at jenkins.security.ImpersonatingExecutorService$2.call(ImpersonatingExecutorService.java:82) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) 2023-02-20 14:37:03.380+0000 [id=1646] INFO o.c.j.p.k.KubernetesSlave#_terminate: Terminating Kubernetes instance for agent jenkins-eks-agent-h4z6t 2023-02-20 14:37:03.380+0000 [id=1646] SEVERE o.c.j.p.k.KubernetesSlave#_terminate: Computer for agent is null: jenkins-eks-agent-h4z6t ### Response: This error might be due to not creating the label ‘jenkins-eks-pod’ on the jenkins server. To create a label on the jenkins server : go to manage jenkins > Manage nodes and Clouds > labels and then enter the label name. Post creating this label try to run the job and check if it works. Refer to this Blog by Bibin Wilson. As per the error No httpclient implementations found on the context classloader You need to upgrade jenkins kubernetes plugin to 3802. This error No httpclient implementations found on the context classloader is fixed in this release. Refer to this jenkinsci/kubernetes-plugin/releases/tag/3802.
After running a pipeline job in Jenkins that runs in my k8s cluster I am getting this error - ‘Jenkins’ doesn’t have label ‘jenkins-eks-pod’. What am I missing in my configuration? Pod Logs in k8s- 2023-02-20 14:37:03.379+0000 [id=1646] WARNING o.c.j.p.k.KubernetesLauncher#launch: Error in provisioning; agent=KubernetesSlave name: jenkins-eks-agent-h4z6t, template=PodTemplate{id='05395ad55cc56972ee3e4c69c2731189bc03a75c0b51e637dc7f868fa85d07e8', name='jenkins-eks-agent', namespace='default', slaveConnectTimeout=100, label='jenkins-non-prod-eks-global-slave', serviceAccount='default', nodeUsageMode=NORMAL, podRetention='Never', containers=[ContainerTemplate{name='jnlp', image='805787217936.dkr.ecr.us-west-2.amazonaws.com/aura-jenkins-slave:ecs-global-node_master_57', alwaysPullImage=true, workingDir='/home/jenkins/agent', command='', args='', ttyEnabled=true, resourceRequestCpu='512m', resourceRequestMemory='512Mi', resourceRequestEphemeralStorage='', resourceLimitCpu='512m', resourceLimitMemory='512Mi', resourceLimitEphemeralStorage='', envVars=[KeyValueEnvVar [getValue()=http://jenkins-non-prod.default.svc.cluster.local:8080/, getKey()=JENKINS_URL]], livenessProbe=ContainerLivenessProbe{execArgs='', timeoutSeconds=0, initialDelaySeconds=0, failureThreshold=0, periodSeconds=0, successThreshold=0}}]} java.lang.IllegalStateException: Containers are terminated with exit codes: {jnlp=0} at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.checkTerminatedContainers(KubernetesLauncher.java:275) at org.csanchez.jenkins.plugins.kubernetes.KubernetesLauncher.launch(KubernetesLauncher.java:225) at hudson.slaves.SlaveComputer.lambda$_connect$0(SlaveComputer.java:298) at jenkins.util.ContextResettingExecutorService$2.call(ContextResettingExecutorService.java:48) at jenkins.security.ImpersonatingExecutorService$2.call(ImpersonatingExecutorService.java:82) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:829) 2023-02-20 14:37:03.380+0000 [id=1646] INFO o.c.j.p.k.KubernetesSlave#_terminate: Terminating Kubernetes instance for agent jenkins-eks-agent-h4z6t 2023-02-20 14:37:03.380+0000 [id=1646] SEVERE o.c.j.p.k.KubernetesSlave#_terminate: Computer for agent is null: jenkins-eks-agent-h4z6t
The securityContext is more related to the container itself and some access to the host machine. allowPrivilegeEscalation allows a process to gain more permissions than its parent process. This is more related to setuid/setgid flags in binaries, but inside a container there is not much to worry about. You can only control other containers on the host machine from inside a container if you have a hostPath volume, or something like that, allowing you to reach a socket file such as /run/crio/crio.sock or docker.sock. It is pretty obvious that, if you are concerned about this, allowing requests to the Docker API over the network should be disabled. Of course, all of this access is ruled by DAC and MAC restrictions. This is why podman's uidmap is better, because the root inside the container does not have the same root id outside the container. From the Kubernetes point of view, you don't need this kind of privilege; all you need is a ServiceAccount and the correct RBAC permissions to control other things inside Kubernetes. A ServiceAccount bound to a cluster-admin ClusterRole can do anything in the API and much more, like adding ssh keys to the hosts. If you are concerned about pods executing things in Kubernetes or on the host, just force the use of nonRoot containers, avoid indiscriminate use of hostPath volumes, and control your RBAC. OpenShift applies a very nice set of restrictions by default: Ensures that pods cannot run as privileged Ensures that pods cannot mount host directory volumes Requires that a pod is run as a user in a pre-allocated range of UIDs (openshift feature, random uid) Requires that a pod is run with a pre-allocated MCS label (selinux related) I don't answer exactly what you asked, because I shifted the attention to RBAC, but I hope this gives you a good idea.
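As a concrete reference point, a hedged sketch of a locked-down pod along the lines described above; the field values are typical hardening defaults and the image/uid are placeholders, not requirements from the question:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
spec:
  securityContext:
    runAsNonRoot: true          # refuse to start if the image wants uid 0
    runAsUser: 10001            # arbitrary non-root uid for illustration
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        privileged: false
        allowPrivilegeEscalation: false     # no setuid/setgid privilege gain
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```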
###Instruction: I want to check if pod in the cluster running as privileged pods, which can indicate that we may have security issue, so I check if privileged: true However under the securityContext: spec there is additional fields like allowPrivilegeEscalation RunAsUser ProcMount Capabilities etc Which may be risky (not sure about it) , My question is in case the pod is marked as privileged:false and the other fields are true like the following example,if this indicate some security issue ? Does this pods can do some operation on other pods etc , access external data? For example the following configuration which indicate the the pod is not privileged but allowPrivilegeEscalation: true securityContext: allowPrivilegeEscalation: true privileged: false I want to know which securityContext combination of pod config can control other pods/process in the cluster ? ### Response: The securityContext are more related to the container itself and some access to the host machine. The allowPrivilegeEscalation allow a process to gain more permissions than its parent process. This is more related to setuid/setgid flags in binaries, but inside a container there is no much to get worried about. You can only control other containers in the host machine from inside a container if you have a hostPath volume, or something like that, allowing you to reach the .sock file as /run/crio/crio.sock or the docker.sock. Is pretty obvious that, if you are concerned about this, allowing requests to Docker API through the network should be disabled. Of course, all of these access are ruled by DAC and MAC restrictions. This is why podman uidmap is better, because the root inside the container do not have the same root id outside the container. From Kubernetes point of view, you don't need this kind of privileges, all you need is a ServiceAccount and the correct RBAC permissions to control other things inside Kubernetes. A ServiceAccount binded to a cluster-admin ClusterRole can do anything in the API and much more, like adding ssh keys to the hosts. If you are concerned about pods executing things in Kubernetes or in the host, just force the use of nonRoot containers, avoid indiscriminate use of hostPath volumes, and control your RBAC. Openshift uses a very nice restriction by default: Ensures that pods cannot run as privileged Ensures that pods cannot mount host directory volumes Requires that a pod is run as a user in a pre-allocated range of UIDs (openshift feature, random uid) Requires that a pod is run with a pre-allocated MCS label (selinux related) I don't answer exactly what you want, because I shifted the attention to RBAC, but I hope this can give you a nice idea.
I want to check if pod in the cluster running as privileged pods, which can indicate that we may have security issue, so I check if privileged: true However under the securityContext: spec there is additional fields like allowPrivilegeEscalation RunAsUser ProcMount Capabilities etc Which may be risky (not sure about it) , My question is in case the pod is marked as privileged:false and the other fields are true like the following example,if this indicate some security issue ? Does this pods can do some operation on other pods etc , access external data? For example the following configuration which indicate the the pod is not privileged but allowPrivilegeEscalation: true securityContext: allowPrivilegeEscalation: true privileged: false I want to know which securityContext combination of pod config can control other pods/process in the cluster ?
I'd reckon the issue is with the resources, not with Ansible. Take a look: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: myapp-cluster-role namespace: myapp-ns # <-- NOT A NAMESPACED RESOURCE rules: - apiGroups: ["*"] resources: ["*"] verbs: ["*"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: my-app-role-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: myapp-cluster-role subjects: - kind: ServiceAccount # <-- Added (-) name: my-app-service-account namespace: myapp-ns # <-- MOVED FROM METADATA To summarize: ClusterRole is not a namespaced resource, hence you should not specify a namespace for it You've missed a - in the .subjects You should move .namespace from .metadata to .subjects More explanation on namespaced/non-namespaced resources: kubectl api-resources NAME SHORTNAMES APIVERSION NAMESPACED KIND roles rbac.authorization.k8s.io/v1 true Role clusterrolebindings rbac.authorization.k8s.io/v1 false ClusterRoleBinding clusterroles rbac.authorization.k8s.io/v1 false ClusterRole rolebindings rbac.authorization.k8s.io/v1 true RoleBinding I encourage you to check on the following docs: Kubernetes.io: Docs: Reference: Access Authn Authz: RBAC Kubernetes.io: Docs: Concepts: Overview: Working with objects: Namespaces: Not all objects are in a namespace
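After applying the corrected resources, the binding can be verified without deploying anything by impersonating the service account with kubectl auth can-i. A small sketch, assuming the names from the question:

```sh
# Should print "yes" once the ClusterRoleBinding is in place.
kubectl auth can-i list pods \
  --as=system:serviceaccount:myapp-ns:my-app-service-account --all-namespaces

# The NodeJS deployment then only needs to reference the account, e.g.
#   spec.template.spec.serviceAccountName: my-app-service-account
```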
###Instruction: My NodeJS microservice is deployed to k8s cluster. I would like this microservice to access the k8s API server. For that, I guess I need to create a ServiceAccount for it. So I did this: apiVersion: v1 kind: ServiceAccount metadata: name: my-app-service-account namespace: myapp-ns Then, I also created a ClusterRole to define the permissions: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: myapp-cluster-role namespace: myapp-ns rules: - apiGroups: ["*"] resources: ["*"] verbs: ["*"] Finally, I created a ClusterRoleBinding: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: my-app-role-binding namespace: myapp-ns roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: myapp-cluster-role subjects: kind: ServiceAccount name: my-app-service-account When I deploy them (I use Ansible to do the deployment), I get the following error: "error": 400, "msg": "Failed to create object: b'{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"ClusterRoleBinding in version \\\\\"v1\\\\\" cannot be handled as a ClusterRoleBinding: json: cannot unmarshal object into Go struct field ClusterRoleBinding.subjects of type []v1.Subject\",\"reason\":\"BadRequest\",\"code\":400}\\n'", Why this error? Where am I wrong? ### Response: I'd reckon the issue is with the resources, not with Ansible. Take a look: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: myapp-cluster-role namespace: myapp-ns # <-- NOT A NAMESPACED RESOURCE rules: - apiGroups: ["*"] resources: ["*"] verbs: ["*"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: my-app-role-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: myapp-cluster-role subjects: - kind: ServiceAccount # <-- Added (-) name: my-app-service-account namespace: myapp-ns # <-- MOVED FROM METADATA To summarize: Clusterrole is a not a namespaced resource, hence you should not specify it You've missed a - in the .subjects You should move .namespace from .metadata to .suspects... More explanation on namespaced/non namespaced resources: kubectl api-resources NAME SHORTNAMES APIVERSION roles rbac.authorization.k8s.io/v1 true Role clusterrolebindings rbac.authorization.k8s.io/v1 false ClusterRoleBinding clusterroles rbac.authorization.k8s.io/v1 false ClusterRole rolebindings rbac.authorization.k8s.io/v1 true RoleBinding I encourage you to check on the following docs: Kubernetes.io: Docs: Reference: Access Authn Authz: RBAC Kubernetes.io: Docs: Concepts: Overview: Working with objects: Namespaces: Not all objects are in a namespace
My NodeJS microservice is deployed to k8s cluster. I would like this microservice to access the k8s API server. For that, I guess I need to create a ServiceAccount for it. So I did this: apiVersion: v1 kind: ServiceAccount metadata: name: my-app-service-account namespace: myapp-ns Then, I also created a ClusterRole to define the permissions: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: myapp-cluster-role namespace: myapp-ns rules: - apiGroups: ["*"] resources: ["*"] verbs: ["*"] Finally, I created a ClusterRoleBinding: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: my-app-role-binding namespace: myapp-ns roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: myapp-cluster-role subjects: kind: ServiceAccount name: my-app-service-account When I deploy them (I use Ansible to do the deployment), I get the following error: "error": 400, "msg": "Failed to create object: b'{\"kind\":\"Status\",\"apiVersion\":\"v1\",\"metadata\":{},\"status\":\"Failure\",\"message\":\"ClusterRoleBinding in version \\\\\"v1\\\\\" cannot be handled as a ClusterRoleBinding: json: cannot unmarshal object into Go struct field ClusterRoleBinding.subjects of type []v1.Subject\",\"reason\":\"BadRequest\",\"code\":400}\\n'", Why this error? Where am I wrong?
The Operator Lifecycle Manager is really well suited for the task. When you create operator Foo, you can package it in the OLM way by creating a bundle which contains the ClusterServiceVersion needed to inform OLM of dependencies that need to be resolved before install and during upgrades. These can just be a list of APIs you need - and OLM will find and install the set of latest versions of the operators that own each API. All your dependencies are operators available in the OperatorHub.io catalog, so they are available for install and dependency resolution as soon as you install OLM. You can also configure certain dependencies by including these objects in the bundle itself. According to the docs, the following objects are supported as of the time of this post: Secret ClusterRole ClusterRoleBinding ConfigMap ServiceAccount Service Role RoleBinding PrometheusRule ServiceMonitor PodDisruptionBudget PriorityClass VerticalPodAutoscaler ConsoleYAMLSample ConsoleQuickStart ConsoleCLIDownload ConsoleLink The Operator SDK can help you with bootstrapping the bundle.
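For the dependency-resolution part, the bundle can declare what it needs in metadata/dependencies.yaml (or in the CSV). A hedged sketch — the package names, version ranges and GVKs below are illustrative assumptions, not the actual requirements of the Foo operator:

```yaml
# metadata/dependencies.yaml inside the operator bundle
dependencies:
  - type: olm.package            # depend on a specific package from the catalog
    value:
      packageName: cert-manager
      version: ">=1.5.0"
  - type: olm.gvk                # or depend on whichever operator owns this API
    value:
      group: monitoring.coreos.com
      kind: ServiceMonitor
      version: v1
```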
###Instruction: I need to automate the provisioning of a complex application in Kubernetes. It's a complex, multi-step process that involves provisioning of some cluster-wide resources and some app-specific resources. The cluster-wide resources are: Istio A few Operators (Cert Manager, Prometheus Operator, Postgres Operator, among others) Then I want to create an application (let's call it Foo) which leverages Istio and the aforementioned operators. It will create statefulsets, services, Certificates, a Postgres database, Istio gateways, Prometheus PodMonitors, etc. There will be multiple Foo's created, each configured differently (since the Kubernetes cluster will be used to provide Foo applications as a multi-tenant service). What's the idiomatic way to do this? I think I should write a Foo controller which assumes that Istio and the other operators (prometheus, cert-manager, postgres, etc) already exist. Is it possible to write a meta ClusterOfFoos operator that installs Istio, installs the required operators, and then installs the Foo controller? If so, how does one go about provisioning operators (normally installed through Helm) from within a controller? So far I have looked into using helm to do this, but there are too many dependencies and Helm just tends to create all resources at once, which makes some things fail (eg. when a deployment refers to a Secret that hasn't yet been created by cert-manager). ### Response: The Operator Lifecycle Manager is really well suited for the task. When you create operator Foo, you can package it in the OLM way by creating a bundle which contains the ClusterServiceVersion needed to inform OLM of dependencies that need to be resolved before install and during upgrades. These can just be a list of APIs you need - and OLM will find and install the set of latest versions of the operators that own each API. All your dependencies are operators available in the Operatorhub.io Catalog so they are available for install and dependency resolution as soon as you install OLM. You can also configure certain dependencies by including these objects in the bundle itself. According to the docs, the following objects are supported as of the time of this post: Secret ClusterRole ClusterRoleBinding ConfigMap ServiceAccount Service Role RoleBinding PrometheusRule ServiceMonitor PodDisruptionBudget PriorityClasse VerticalPodAutoscaler ConsoleYAMLSample ConsoleQuickStart ConsoleCLIDownload ConsoleLink The Operator SDK can help you with bootstrapping the bundle.
I need to automate the provisioning of a complex application in Kubernetes. It's a complex, multi-step process that involves provisioning of some cluster-wide resources and some app-specific resources. The cluster-wide resources are: Istio A few Operators (Cert Manager, Prometheus Operator, Postgres Operator, among others) Then I want to create an application (let's call it Foo) which leverages Istio and the aforementioned operators. It will create statefulsets, services, Certificates, a Postgres database, Istio gateways, Prometheus PodMonitors, etc. There will be multiple Foo's created, each configured differently (since the Kubernetes cluster will be used to provide Foo applications as a multi-tenant service). What's the idiomatic way to do this? I think I should write a Foo controller which assumes that Istio and the other operators (prometheus, cert-manager, postgres, etc) already exist. Is it possible to write a meta ClusterOfFoos operator that installs Istio, installs the required operators, and then installs the Foo controller? If so, how does one go about provisioning operators (normally installed through Helm) from within a controller? So far I have looked into using helm to do this, but there are too many dependencies and Helm just tends to create all resources at once, which makes some things fail (eg. when a deployment refers to a Secret that hasn't yet been created by cert-manager).
It seems you are missing some broker config; usually I have the config below exposed from my docker-compose setup when I run Kafka on my CI: KAFKA_LISTENERS: 'LISTENER_INTERNAL://kafka:29092,LISTENER_HOST://:9092' KAFKA_ADVERTISED_LISTENERS: LISTENER_INTERNAL://kafka:29092,LISTENER_HOST://localhost:9092 KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_INTERNAL:PLAINTEXT,LISTENER_HOST:PLAINTEXT KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_INTERNAL And when some other service connects internally within the docker-compose cluster it uses the config below: KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:29092 KAFKA_CLUSTERS_0_ZOOKEEPER: zookeeper-kafka:2181 This is also explained at https://www.confluent.io/blog/kafka-listeners-explained/ I hope this helps
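Translated to the Kubernetes deployment in the question (which uses the bitnami/kafka image, whose broker settings are normally passed with the KAFKA_CFG_ prefix), the split between an internal and an external listener could look roughly like this sketch. The listener names, the 9094 port and the localhost:30094 address are assumptions: the external advertised address has to match whatever NodePort you actually expose for that listener in the Service:

```yaml
env:
  - name: KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP
    value: "INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT"
  - name: KAFKA_CFG_LISTENERS
    value: "INTERNAL://:9092,EXTERNAL://:9094"
  - name: KAFKA_CFG_ADVERTISED_LISTENERS
    # INTERNAL is what other pods use via the kafka-service name; EXTERNAL is
    # what Offset Explorer on the host reaches through the NodePort (placeholder).
    value: "INTERNAL://kafka-service:9092,EXTERNAL://localhost:30094"
  - name: KAFKA_CFG_INTER_BROKER_LISTENER_NAME
    value: "INTERNAL"
```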
###Instruction: I'm trying to deploy kafka on local k8s, then I need to connect to it by application and using offset explorer so, using kubectl I created zookeeper service and deployment using this yml file apiVersion: v1 kind: Service metadata: labels: app: zookeeper-service name: zookeeper-service spec: type: NodePort ports: - name: zookeeper-port port: 2181 nodePort: 30091 targetPort: 2181 selector: app: zookeeper --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: zookeeper name: zookeeper spec: replicas: 1 selector: matchLabels: app: zookeeper template: metadata: labels: app: zookeeper spec: containers: - image: bitnami/zookeeper imagePullPolicy: IfNotPresent name: zookeeper ports: - containerPort: 2181 env: - name: ALLOW_PLAINTEXT_LISTENER value: "yes" - name: ALLOW_ANONYMOUS_LOGIN value: "yes" Then, I created kafka service and deployment using this yml apiVersion: v1 kind: Service metadata: labels: app: kafka-service name: kafka-service spec: type: NodePort ports: - name: kafka-port port: 9092 nodePort: 30092 targetPort: 9092 selector: app: kafka-broker --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: kafka-broker name: kafka-broker spec: replicas: 1 selector: matchLabels: app: kafka-broker template: metadata: labels: app: kafka-broker spec: hostname: kafka-broker containers: - image: bitnami/kafka imagePullPolicy: IfNotPresent name: kafka-broker ports: - containerPort: 9092 env: - name: KAFKA_BROKER_ID value: "1" - name: KAFKA_ZOOKEEPER_CONNECT value: "zookeeper-service:2181" - name: KAFKA_LISTENERS value: PLAINTEXT://localhost:9092 - name: KAFKA_ADVERTISED_LISTENERS value: PLAINTEXT://localhost:9092 # Creates a topic with one partition and one replica. - name: KAFKA_CREATE_TOPICS value: "bomc:1:1" - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP - name: ALLOW_PLAINTEXT_LISTENER value: "yes" And both services and deployment created and running And I have ingress for this services apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: nginx-ingress annotations: kubernetes.io/ingress.class: "nginx" spec: rules: - http: paths: - path: /health pathType: Prefix backend: service: name: health-app-service port: number: 80 - path: /actuator pathType: Prefix backend: service: name: health-app-service port: number: 80 - path: /jsonrpc pathType: Prefix backend: service: name: core-service port: number: 80 - path: / pathType: Prefix backend: service: name: kafka-service # Название вашего Kafka-сервиса port: number: 9092 # Название порта, используемого для Kafka - path: / pathType: Prefix backend: service: name: kafka-service # Название вашего Kafka-сервиса port: number: 30092 # Название порта, используемого для Kafka - path: / pathType: Prefix backend: service: name: kafka-service # Название вашего Kafka-сервиса port: name: kafka-port # Название порта, используемого для Kafka - path: / pathType: Prefix backend: service: name: zookeeper-service port: name: zookeeper-port but, when I try to connect to this kafka using offset key tool, there is error connection. 
When I use localhost:30092 like a bootstrap server - error with logs: 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - Starting application : Offset Explorer 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - Version : 2.3 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - Built : Jun 30, 2022 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - user.home : C:\Users\Roberto 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - user.dir : C:\Program Files\OffsetExplorer2 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - os.name : Windows 10 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - java.runtime.version : 1.8.0_232-b09 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - max memory=3586 MB 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - available processors=8 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - java.security.auth.login.config=null 12/мар/2023 22:32:46.121 INFO com.kafkatool.common.ExternalDecoderManager - Finding plugins in directory C:\Program Files\OffsetExplorer2\plugins 12/мар/2023 22:32:46.121 INFO com.kafkatool.common.ExternalDecoderManager - Found files in plugin directory, count=1 12/мар/2023 22:32:46.121 INFO com.kafkatool.ui.MainApp - Loading user settings 12/мар/2023 22:32:46.153 INFO com.kafkatool.ui.MainApp - Loading server group settings 12/мар/2023 22:32:46.153 INFO com.kafkatool.ui.MainApp - Loading server connection settings 12/мар/2023 22:32:50.103 INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: bootstrap.servers = [localhost:30092] client.dns.lookup = default client.id = connections.max.idle.ms = 300000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 120000 retries = 5 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS 12/мар/2023 22:32:50.126 DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=adminclient-1] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:30092 (id: -1 rack: null)], partitions = [], controller = null). 
12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-authentication: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-reauthentication: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-authentication-no-reauth: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name failed-authentication: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name failed-reauthentication: 12/мар/2023 22:32:50.198 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name reauthentication-latency: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time: 12/мар/2023 22:32:50.204 WARN org.apache.kafka.clients.admin.AdminClientConfig - The configuration 'group.id' was supplied but isn't a known config. 12/мар/2023 22:32:50.204 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.4.0 12/мар/2023 22:32:50.204 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 77a89fcf8d7fa018 12/мар/2023 22:32:50.204 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1678649570204 12/мар/2023 22:32:50.214 DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=adminclient-1] Kafka admin client initialized 12/мар/2023 22:32:50.215 DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=adminclient-1] Queueing Call(callName=listNodes, deadlineMs=1678649690215) with a timeout 120000 ms from now. 12/мар/2023 22:32:50.215 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating connection to node localhost:30092 (id: -1 rack: null) using address localhost/127.0.0.1 12/мар/2023 22:32:50.228 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent 12/мар/2023 22:32:50.230 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received 12/мар/2023 22:32:50.232 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.latency 12/мар/2023 22:32:50.232 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 12/мар/2023 22:32:50.320 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Completed connection to node -1. Fetching API versions. 12/мар/2023 22:32:50.320 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -1. 
12/мар/2023 22:32:50.376 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected java.io.EOFException at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) at org.apache.kafka.common.network.Selector.poll(Selector.java:483) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:540) at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1196) at java.lang.Thread.run(Thread.java:748) 12/мар/2023 22:33:11.787 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node -1 disconnected. 12/мар/2023 22:33:12.766 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating connection to node localhost:30092 (id: -1 rack: null) using address localhost/127.0.0.1 12/мар/2023 22:33:12.767 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 12/мар/2023 22:33:12.767 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Completed connection to node -1. Fetching API versions. 12/мар/2023 22:33:12.767 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -1. 12/мар/2023 22:33:12.768 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected java.io.EOFException at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) at org.apache.kafka.common.network.Selector.poll(Selector.java:483) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:540) at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1196) at java.lang.Thread.run(Thread.java:748) ### Response: It seems you have missing broker config, usually I have below config exposed from my docker compose when I run Kafka on my CI KAFKA_LISTENERS: 'LISTENER_INTERNAL://kafka:29092,LISTENER_HOST://:9092' KAFKA_ADVERTISED_LISTENERS: LISTENER_INTERNAL://kafka:29092,LISTENER_HOST://localhost:9092 KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_INTERNAL:PLAINTEXT,LISTENER_HOST:PLAINTEXT KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_INTERNAL And when some other service tries to internally connect within the docker compose cluster it uses below config KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:29092 KAFKA_CLUSTERS_0_ZOOKEEPER: zookeeper-kafka:2181 This is also explained https://www.confluent.io/blog/kafka-listeners-explained/ I hope this helps
I'm trying to deploy kafka on local k8s, then I need to connect to it by application and using offset explorer so, using kubectl I created zookeeper service and deployment using this yml file apiVersion: v1 kind: Service metadata: labels: app: zookeeper-service name: zookeeper-service spec: type: NodePort ports: - name: zookeeper-port port: 2181 nodePort: 30091 targetPort: 2181 selector: app: zookeeper --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: zookeeper name: zookeeper spec: replicas: 1 selector: matchLabels: app: zookeeper template: metadata: labels: app: zookeeper spec: containers: - image: bitnami/zookeeper imagePullPolicy: IfNotPresent name: zookeeper ports: - containerPort: 2181 env: - name: ALLOW_PLAINTEXT_LISTENER value: "yes" - name: ALLOW_ANONYMOUS_LOGIN value: "yes" Then, I created kafka service and deployment using this yml apiVersion: v1 kind: Service metadata: labels: app: kafka-service name: kafka-service spec: type: NodePort ports: - name: kafka-port port: 9092 nodePort: 30092 targetPort: 9092 selector: app: kafka-broker --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: kafka-broker name: kafka-broker spec: replicas: 1 selector: matchLabels: app: kafka-broker template: metadata: labels: app: kafka-broker spec: hostname: kafka-broker containers: - image: bitnami/kafka imagePullPolicy: IfNotPresent name: kafka-broker ports: - containerPort: 9092 env: - name: KAFKA_BROKER_ID value: "1" - name: KAFKA_ZOOKEEPER_CONNECT value: "zookeeper-service:2181" - name: KAFKA_LISTENERS value: PLAINTEXT://localhost:9092 - name: KAFKA_ADVERTISED_LISTENERS value: PLAINTEXT://localhost:9092 # Creates a topic with one partition and one replica. - name: KAFKA_CREATE_TOPICS value: "bomc:1:1" - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP - name: ALLOW_PLAINTEXT_LISTENER value: "yes" And both services and deployment created and running And I have ingress for this services apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: nginx-ingress annotations: kubernetes.io/ingress.class: "nginx" spec: rules: - http: paths: - path: /health pathType: Prefix backend: service: name: health-app-service port: number: 80 - path: /actuator pathType: Prefix backend: service: name: health-app-service port: number: 80 - path: /jsonrpc pathType: Prefix backend: service: name: core-service port: number: 80 - path: / pathType: Prefix backend: service: name: kafka-service # Название вашего Kafka-сервиса port: number: 9092 # Название порта, используемого для Kafka - path: / pathType: Prefix backend: service: name: kafka-service # Название вашего Kafka-сервиса port: number: 30092 # Название порта, используемого для Kafka - path: / pathType: Prefix backend: service: name: kafka-service # Название вашего Kafka-сервиса port: name: kafka-port # Название порта, используемого для Kafka - path: / pathType: Prefix backend: service: name: zookeeper-service port: name: zookeeper-port but, when I try to connect to this kafka using offset key tool, there is error connection. 
When I use localhost:30092 like a bootstrap server - error with logs: 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - Starting application : Offset Explorer 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - Version : 2.3 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - Built : Jun 30, 2022 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - user.home : C:\Users\Roberto 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - user.dir : C:\Program Files\OffsetExplorer2 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - os.name : Windows 10 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - java.runtime.version : 1.8.0_232-b09 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - max memory=3586 MB 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - available processors=8 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - java.security.auth.login.config=null 12/мар/2023 22:32:46.121 INFO com.kafkatool.common.ExternalDecoderManager - Finding plugins in directory C:\Program Files\OffsetExplorer2\plugins 12/мар/2023 22:32:46.121 INFO com.kafkatool.common.ExternalDecoderManager - Found files in plugin directory, count=1 12/мар/2023 22:32:46.121 INFO com.kafkatool.ui.MainApp - Loading user settings 12/мар/2023 22:32:46.153 INFO com.kafkatool.ui.MainApp - Loading server group settings 12/мар/2023 22:32:46.153 INFO com.kafkatool.ui.MainApp - Loading server connection settings 12/мар/2023 22:32:50.103 INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: bootstrap.servers = [localhost:30092] client.dns.lookup = default client.id = connections.max.idle.ms = 300000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 120000 retries = 5 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS 12/мар/2023 22:32:50.126 DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=adminclient-1] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:30092 (id: -1 rack: null)], partitions = [], controller = null). 
12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-authentication: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-reauthentication: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-authentication-no-reauth: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name failed-authentication: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name failed-reauthentication: 12/мар/2023 22:32:50.198 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name reauthentication-latency: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time: 12/мар/2023 22:32:50.204 WARN org.apache.kafka.clients.admin.AdminClientConfig - The configuration 'group.id' was supplied but isn't a known config. 12/мар/2023 22:32:50.204 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.4.0 12/мар/2023 22:32:50.204 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 77a89fcf8d7fa018 12/мар/2023 22:32:50.204 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1678649570204 12/мар/2023 22:32:50.214 DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=adminclient-1] Kafka admin client initialized 12/мар/2023 22:32:50.215 DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=adminclient-1] Queueing Call(callName=listNodes, deadlineMs=1678649690215) with a timeout 120000 ms from now. 12/мар/2023 22:32:50.215 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating connection to node localhost:30092 (id: -1 rack: null) using address localhost/127.0.0.1 12/мар/2023 22:32:50.228 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent 12/мар/2023 22:32:50.230 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received 12/мар/2023 22:32:50.232 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.latency 12/мар/2023 22:32:50.232 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 12/мар/2023 22:32:50.320 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Completed connection to node -1. Fetching API versions. 12/мар/2023 22:32:50.320 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -1. 
12/мар/2023 22:32:50.376 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected java.io.EOFException at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) at org.apache.kafka.common.network.Selector.poll(Selector.java:483) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:540) at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1196) at java.lang.Thread.run(Thread.java:748) 12/мар/2023 22:33:11.787 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node -1 disconnected. 12/мар/2023 22:33:12.766 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating connection to node localhost:30092 (id: -1 rack: null) using address localhost/127.0.0.1 12/мар/2023 22:33:12.767 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 12/мар/2023 22:33:12.767 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Completed connection to node -1. Fetching API versions. 12/мар/2023 22:33:12.767 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -1. 12/мар/2023 22:33:12.768 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected java.io.EOFException at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) at org.apache.kafka.common.network.Selector.poll(Selector.java:483) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:540) at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1196) at java.lang.Thread.run(Thread.java:748)
In summary, I want the overlay to be able to define when the sidecar should be added, and to which specific deployments to add it to. My first thought was that you could have a label that means "apply the sidecar patch", and use that in the Component: apiVersion: kustomize.config.k8s.io/v1alpha1 kind: Component patches: - path: deployment-sidecar.yaml target: labelSelector: "inject-sidecar=true" And then in your overlay(s), use a patch to apply that label to specific deployments: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - ../../base components: - ../../sidecar patches: - target: kind: Deployment labelSelector: "app=app1" patch: | - op: add path: /metadata/labels/inject-sidecar value: "true" Unfortunately, this won't work because patches are applied after processing all resources and components. We can still do this, but it requires an intermediate stage. We can get that by creating another component inside the dev overlay that is responsible for applying the labels. In overlays/dev/apply-labels/kustomization.yaml you have a kustomization.yaml that contains the logic for applying the inject-sidecar label to specific Deployments (using a label selector, name patterns, or other criteria): apiVersion: kustomize.config.k8s.io/v1alpha1 kind: Component patches: - target: kind: Deployment labelSelector: "app=app1" patch: | - op: add path: /metadata/labels/inject-sidecar value: "true" And then in overlays/dev/kustomization.yaml you have: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization components: - apply-labels - ../../components/sidecar This gets you what you want: The sidecar patch is specified in a single place Your overlay determines to which deployments you apply the sidecar patch There's a level of complexity here that is only necessary if: You have multiple overlays You want to selectively apply the sidecar only to some deployments You want the overlay to control to which deployments the patch is applied If any of those things aren't true you can simplify the configuration.
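For orientation, the layout would end up roughly like the sketch below (paths are illustrative, following the tree from the question), and you can check that both the label and the sidecar land on the right Deployments by rendering the overlay:
overlays/
└── dev
    ├── apply-labels
    │   └── kustomization.yaml   # the Component above that adds the inject-sidecar label
    └── kustomization.yaml       # lists apply-labels before ../../components/sidecar
kubectl kustomize overlays/dev | grep -B2 -A2 inject-sidecar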
###Instruction: On a Kubernetes cluster, I have multiple Deployment resources. For security, I am using a sidecar proxy pattern where the Service will proxy traffic to the sidecar, which will ensure authentication before passing on to the deployed application. I am trying to set up Kustomize to do this. Since the sidecar definition is likely environment specific, I don't want to include the sidecar in my base manifests, but would like it to be an overlay. Since I have multiple deployments that will need to attach that sidecar, it seemed appropriate to have the sidecar specification be a common shared component. This seemed like appropriate use of the Kustomize Component resource, but perhaps I'm wrong. I have something similar to the following: . ├── base │ ├── app1 │ │ ├── deployment.yaml │ │ └── kustomization.yaml │ ├── app2 │ │ ├── deployment.yaml │ │ └── kustomization.yaml │ └── app3 │ ├── deployment.yaml │ └── kustomization.yaml ├── components │ └── sidecar │ ├── deployment-sidecar.yaml │ └── kustomization.yaml └── overlays └── dev └── kustomization.yaml I'd like the sidecar component to be applied to the 3 app deployments, but I can't seem to find a way to do this. Am I misusing components here? My components/sidecar/kustomization.yaml file looks like: apiVersion: kustomize.config.k8s.io/v1alpha1 kind: Component patches: - path: deployment-sidecar.yaml target: labelSelector: xxx This works, however it specifies the target of the patch in the component, whereas I would like to leave the component more generic and instead specify the target in overlays/dev. Is there a better way to be handling this? In summary, I want the overlay to be able to define when the sidecar should be added, and to which specific deployments to add it to. ### Response: In summary, I want the overlay to be able to define when the sidecar should be added, and to which specific deployments to add it to. My first thought was that you could have a label that means "apply the sidecar patch", and use that in the Component: apiVersion: kustomize.config.k8s.io/v1alpha1 kind: Component patches: - path: deployment-sidecar.yaml target: labelSelector: "inject-sidecar=true" And then in your overlay(s), use a patch to apply that label to specific deployments: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - ../../base components: - ../../sidecar patches: - target: kind: Deployment labelSelector: "app=app1" patch: | - op: add path: /metadata/labels/inject-sidecar value: "true" Unfortunately, this won't work because patches are applied after processing all resources and components. We can still do this, but it requires an intermediate stage. We can get that by creating another component inside the dev overlay that is responsible for applying the labels. 
In overlays/dev/apply-labels/kustomization.yaml you have a kustomization.yaml that contains the logic for applying the inject-sidecar label to specific Deployments (using a label selector, name patterns, or other criteria): apiVersion: kustomize.config.k8s.io/v1alpha1 kind: Component patches: - target: kind: Deployment labelSelector: "app=app1" patch: | - op: add path: /metadata/labels/inject-sidecar value: "true" And then in overlays/dev/kustomization.yaml you have: apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization components: - apply-labels - ../../components/sidecar This gets you what you want: The sidecar patch is specified in a single place Your overlay determines to which deployments you apply the sidecar patch There's a level of complexity here that is only necessary if: You have multiple overlays You want to selectively apply the sidecar only to some deployments You want the overlay to control to which deployments the patch is applied If any of those things aren't true you can simplify the configuration.
On a Kubernetes cluster, I have multiple Deployment resources. For security, I am using a sidecar proxy pattern where the Service will proxy traffic to the sidecar, which will ensure authentication before passing on to the deployed application. I am trying to set up Kustomize to do this. Since the sidecar definition is likely environment specific, I don't want to include the sidecar in my base manifests, but would like it to be an overlay. Since I have multiple deployments that will need to attach that sidecar, it seemed appropriate to have the sidecar specification be a common shared component. This seemed like appropriate use of the Kustomize Component resource, but perhaps I'm wrong. I have something similar to the following: . ├── base │ ├── app1 │ │ ├── deployment.yaml │ │ └── kustomization.yaml │ ├── app2 │ │ ├── deployment.yaml │ │ └── kustomization.yaml │ └── app3 │ ├── deployment.yaml │ └── kustomization.yaml ├── components │ └── sidecar │ ├── deployment-sidecar.yaml │ └── kustomization.yaml └── overlays └── dev └── kustomization.yaml I'd like the sidecar component to be applied to the 3 app deployments, but I can't seem to find a way to do this. Am I misusing components here? My components/sidecar/kustomization.yaml file looks like: apiVersion: kustomize.config.k8s.io/v1alpha1 kind: Component patches: - path: deployment-sidecar.yaml target: labelSelector: xxx This works, however it specifies the target of the patch in the component, whereas I would like to leave the component more generic and instead specify the target in overlays/dev. Is there a better way to be handling this? In summary, I want the overlay to be able to define when the sidecar should be added, and to which specific deployments to add it to.
I was able to solve the issue by adding two things to description: PGDATA env variable inside containers: env: - name: PGDATA value: /var/lib/postgresql/data/pgdata subPath: postgres under volumeMounts Now my deployment file looks like this: --- apiVersion: apps/v1 kind: StatefulSet metadata: name: imgress-db namespace: db spec: serviceName: imgress-db replicas: 1 selector: matchLabels: app: imgress-db template: metadata: labels: app: imgress-db spec: containers: - name: imgress-db image: postgres env: - name: POSTGRES_HOST valueFrom: configMapKeyRef: name: db-configmap key: DATABASE_HOST - name: POSTGRES_DB valueFrom: configMapKeyRef: name: db-configmap key: POSTGRES_DB - name: POSTGRES_USER valueFrom: configMapKeyRef: name: db-configmap key: POSTGRES_USER - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: db-secret key: POSTGRES_PASSWORD - name: PGDATA value: /var/lib/postgresql/data/pgdata ports: - containerPort: 5432 volumeMounts: - name: postgres-data mountPath: /var/lib/postgresql/data subPath: postgres volumes: - name: postgres-data persistentVolumeClaim: claimName: postgres-pvc restartPolicy: Always --- apiVersion: v1 kind: Service metadata: name: imgress-db namespace: db spec: selector: app: imgress-db ports: - name: postgres port: 5432
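To sanity-check the fix after re-applying, a rough sketch (it assumes the pod keeps the name imgress-db-0 in the db namespace, as in the question):
kubectl apply -f postgres-deployment.yaml
kubectl get pods -n db -w            # wait for imgress-db-0 to reach 1/1 Running
kubectl logs imgress-db-0 -n db      # initdb should no longer complain about lost+found
kubectl exec -n db imgress-db-0 -- ls /var/lib/postgresql/data/pgdata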
###Instruction: I am trying to deploy PostgreSQL to GKE and here is my PersistentVolumeClaim definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: postgres-pvc namespace: db labels: app: imgress-db spec: accessModes: - ReadWriteOnce resources: requests: storage: 400Mi and this is deployment/service definition: --- apiVersion: apps/v1 kind: StatefulSet metadata: name: imgress-db namespace: db spec: serviceName: imgress-db replicas: 1 selector: matchLabels: app: imgress-db template: metadata: labels: app: imgress-db spec: containers: - name: imgress-db image: postgres env: - name: POSTGRES_HOST valueFrom: configMapKeyRef: name: db-configmap key: DATABASE_HOST - name: POSTGRES_DB valueFrom: configMapKeyRef: name: db-configmap key: POSTGRES_DB - name: POSTGRES_USER valueFrom: configMapKeyRef: name: db-configmap key: POSTGRES_USER - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: db-secret key: POSTGRES_PASSWORD ports: - containerPort: 5432 volumeMounts: - name: postgres-data mountPath: /var/lib/postgresql/data volumes: - name: postgres-data persistentVolumeClaim: claimName: postgres-pvc restartPolicy: Always --- apiVersion: v1 kind: Service metadata: name: imgress-db namespace: db spec: selector: app: imgress-db ports: - name: postgres port: 5432 First I run: kubectl apply -f postgres-pvc.yaml and then: kubectl apply -f postgres-deployment.yaml but I get this notorious error when I run kubectl get pods -A: NAMESPACE NAME READY STATUS RESTARTS AGE db imgress-db-0 0/1 CrashLoopBackOff 6 (2m15s ago) 8m26s For kubectl describe pvc postgres-pvc -n db I get this result: Name: postgres-pvc Namespace: db StorageClass: standard Status: Bound Volume: pvc-c6369764-1106-4a7d-887c-0e4009968115 Labels: app=imgress-db Annotations: pv.kubernetes.io/bind-completed: yes pv.kubernetes.io/bound-by-controller: yes volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io volume.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io Finalizers: [kubernetes.io/pvc-protection] Capacity: 1Gi Access Modes: RWO VolumeMode: Filesystem Used By: imgress-db-0 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ExternalProvisioning 31m persistentvolume-controller waiting for a volume to be created, either by external provisioner "pd.csi.storage.gke.io" or manually created by system administrator Normal Provisioning 31m pd.csi.storage.gke.io_gke-e0f710dc594c4eb5ac14-5c62-e039-vm_ca2409ad-83a8-4139-93b4-4fffbacbf44f External provisioner is provisioning volume for claim "db/postgres-pvc" Normal ProvisioningSucceeded 31m pd.csi.storage.gke.io_gke-e0f710dc594c4eb5ac14-5c62-e039-vm_ca2409ad-83a8-4139-93b4-4fffbacbf44f Successfully provisioned volume pvc-c6369764-1106-4a7d-887c-0e4009968115 and for kubectl describe pod imgress-db-0 -n db I get this result (please pay attention to Back-off restarting failed container on the last line): Name: imgress-db-0 Namespace: db Priority: 0 Service Account: default Node: gke-imgress-default-pool-e9bdef38-hjhv/10.156.0.5 Start Time: Fri, 24 Feb 2023 13:44:15 +0500 Labels: app=imgress-db controller-revision-hash=imgress-db-7f557d4b88 statefulset.kubernetes.io/pod-name=imgress-db-0 Annotations: <none> Status: Running IP: 10.84.2.49 IPs: IP: 10.84.2.49 Controlled By: StatefulSet/imgress-db Containers: imgress-db: Container ID: containerd://96140ec0b0e369ca97822361a770abcb82e27b7924bc90e17111ab354e51d6aa Image: postgres Image ID: docker.io/library/postgres@sha256:901df890146ec46a5cab7a33f4ac84e81bac2fe92b2c9a14fd649502c4adf954 Port: 
5432/TCP Host Port: 0/TCP State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Fri, 24 Feb 2023 13:50:09 +0500 Finished: Fri, 24 Feb 2023 13:50:11 +0500 Ready: False Restart Count: 6 Environment: POSTGRES_HOST: <set to the key 'DATABASE_HOST' of config map 'db-configmap'> Optional: false POSTGRES_DB: <set to the key 'POSTGRES_DB' of config map 'db-configmap'> Optional: false POSTGRES_USER: <set to the key 'POSTGRES_USER' of config map 'db-configmap'> Optional: false POSTGRES_PASSWORD: <set to the key 'POSTGRES_PASSWORD' in secret 'db-secret'> Optional: false Mounts: /var/lib/postgresql/data from postgres-data (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfsf9 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: postgres-data: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: postgres-pvc ReadOnly: false kube-api-access-tfsf9: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 6m51s default-scheduler Successfully assigned db/imgress-db-0 to gke-imgress-default-pool-e9bdef38-hjhv Normal SuccessfulAttachVolume 6m46s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-c6369764-1106-4a7d-887c-0e4009968115" Normal Pulled 6m42s kubelet Successfully pulled image "postgres" in 365.355391ms Normal Pulled 6m39s kubelet Successfully pulled image "postgres" in 236.532115ms Normal Pulled 6m26s kubelet Successfully pulled image "postgres" in 278.12995ms Normal Created 5m59s (x4 over 6m42s) kubelet Created container imgress-db Normal Started 5m59s (x4 over 6m41s) kubelet Started container imgress-db Normal Pulled 5m59s kubelet Successfully pulled image "postgres" in 348.694043ms Normal Pulling 5m10s (x5 over 6m42s) kubelet Pulling image "postgres" Normal Pulled 5m9s kubelet Successfully pulled image "postgres" in 288.848351ms Warning BackOff 95s (x25 over 6m38s) kubelet Back-off restarting failed container Result of kubectl logs imgress-db-0 -n db: The files belonging to this database system will be owned by user "postgres". This user must also own the server process. The database cluster will be initialized with locale "en_US.utf8". The default database encoding has accordingly been set to "UTF8". The default text search configuration will be set to "english". Data page checksums are disabled. initdb: error: directory "/var/lib/postgresql/data" exists but is not empty initdb: detail: It contains a lost+found directory, perhaps due to it being a mount point. initdb: hint: Using a mount point directly as the data directory is not recommended. Create a subdirectory under the mount point. I've already seen some related questions asked on SO but I guess it can be caused by different problems. 
What I tried: adding user and password as an env variables deleting everything in the namespace and recreating deleting livenessprobe and readinessprobe from the description ### Response: I was able to solve the issue by adding two things to description: PGDATA env variable inside containers: env: - name: PGDATA value: /var/lib/postgresql/data/pgdata subPath: postgres under volumeMounts Now my deployment file looks like this: --- apiVersion: apps/v1 kind: StatefulSet metadata: name: imgress-db namespace: db spec: serviceName: imgress-db replicas: 1 selector: matchLabels: app: imgress-db template: metadata: labels: app: imgress-db spec: containers: - name: imgress-db image: postgres env: - name: POSTGRES_HOST valueFrom: configMapKeyRef: name: db-configmap key: DATABASE_HOST - name: POSTGRES_DB valueFrom: configMapKeyRef: name: db-configmap key: POSTGRES_DB - name: POSTGRES_USER valueFrom: configMapKeyRef: name: db-configmap key: POSTGRES_USER - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: db-secret key: POSTGRES_PASSWORD - name: PGDATA value: /var/lib/postgresql/data/pgdata ports: - containerPort: 5432 volumeMounts: - name: postgres-data mountPath: /var/lib/postgresql/data subPath: postgres volumes: - name: postgres-data persistentVolumeClaim: claimName: postgres-pvc restartPolicy: Always --- apiVersion: v1 kind: Service metadata: name: imgress-db namespace: db spec: selector: app: imgress-db ports: - name: postgres port: 5432
I am trying to deploy PostgreSQL to GKE and here is my PersistentVolumeClaim definition: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: postgres-pvc namespace: db labels: app: imgress-db spec: accessModes: - ReadWriteOnce resources: requests: storage: 400Mi and this is deployment/service definition: --- apiVersion: apps/v1 kind: StatefulSet metadata: name: imgress-db namespace: db spec: serviceName: imgress-db replicas: 1 selector: matchLabels: app: imgress-db template: metadata: labels: app: imgress-db spec: containers: - name: imgress-db image: postgres env: - name: POSTGRES_HOST valueFrom: configMapKeyRef: name: db-configmap key: DATABASE_HOST - name: POSTGRES_DB valueFrom: configMapKeyRef: name: db-configmap key: POSTGRES_DB - name: POSTGRES_USER valueFrom: configMapKeyRef: name: db-configmap key: POSTGRES_USER - name: POSTGRES_PASSWORD valueFrom: secretKeyRef: name: db-secret key: POSTGRES_PASSWORD ports: - containerPort: 5432 volumeMounts: - name: postgres-data mountPath: /var/lib/postgresql/data volumes: - name: postgres-data persistentVolumeClaim: claimName: postgres-pvc restartPolicy: Always --- apiVersion: v1 kind: Service metadata: name: imgress-db namespace: db spec: selector: app: imgress-db ports: - name: postgres port: 5432 First I run: kubectl apply -f postgres-pvc.yaml and then: kubectl apply -f postgres-deployment.yaml but I get this notorious error when I run kubectl get pods -A: NAMESPACE NAME READY STATUS RESTARTS AGE db imgress-db-0 0/1 CrashLoopBackOff 6 (2m15s ago) 8m26s For kubectl describe pvc postgres-pvc -n db I get this result: Name: postgres-pvc Namespace: db StorageClass: standard Status: Bound Volume: pvc-c6369764-1106-4a7d-887c-0e4009968115 Labels: app=imgress-db Annotations: pv.kubernetes.io/bind-completed: yes pv.kubernetes.io/bound-by-controller: yes volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io volume.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io Finalizers: [kubernetes.io/pvc-protection] Capacity: 1Gi Access Modes: RWO VolumeMode: Filesystem Used By: imgress-db-0 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ExternalProvisioning 31m persistentvolume-controller waiting for a volume to be created, either by external provisioner "pd.csi.storage.gke.io" or manually created by system administrator Normal Provisioning 31m pd.csi.storage.gke.io_gke-e0f710dc594c4eb5ac14-5c62-e039-vm_ca2409ad-83a8-4139-93b4-4fffbacbf44f External provisioner is provisioning volume for claim "db/postgres-pvc" Normal ProvisioningSucceeded 31m pd.csi.storage.gke.io_gke-e0f710dc594c4eb5ac14-5c62-e039-vm_ca2409ad-83a8-4139-93b4-4fffbacbf44f Successfully provisioned volume pvc-c6369764-1106-4a7d-887c-0e4009968115 and for kubectl describe pod imgress-db-0 -n db I get this result (please pay attention to Back-off restarting failed container on the last line): Name: imgress-db-0 Namespace: db Priority: 0 Service Account: default Node: gke-imgress-default-pool-e9bdef38-hjhv/10.156.0.5 Start Time: Fri, 24 Feb 2023 13:44:15 +0500 Labels: app=imgress-db controller-revision-hash=imgress-db-7f557d4b88 statefulset.kubernetes.io/pod-name=imgress-db-0 Annotations: <none> Status: Running IP: 10.84.2.49 IPs: IP: 10.84.2.49 Controlled By: StatefulSet/imgress-db Containers: imgress-db: Container ID: containerd://96140ec0b0e369ca97822361a770abcb82e27b7924bc90e17111ab354e51d6aa Image: postgres Image ID: docker.io/library/postgres@sha256:901df890146ec46a5cab7a33f4ac84e81bac2fe92b2c9a14fd649502c4adf954 Port: 5432/TCP Host Port: 
0/TCP State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: Error Exit Code: 1 Started: Fri, 24 Feb 2023 13:50:09 +0500 Finished: Fri, 24 Feb 2023 13:50:11 +0500 Ready: False Restart Count: 6 Environment: POSTGRES_HOST: <set to the key 'DATABASE_HOST' of config map 'db-configmap'> Optional: false POSTGRES_DB: <set to the key 'POSTGRES_DB' of config map 'db-configmap'> Optional: false POSTGRES_USER: <set to the key 'POSTGRES_USER' of config map 'db-configmap'> Optional: false POSTGRES_PASSWORD: <set to the key 'POSTGRES_PASSWORD' in secret 'db-secret'> Optional: false Mounts: /var/lib/postgresql/data from postgres-data (rw) /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tfsf9 (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: postgres-data: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: postgres-pvc ReadOnly: false kube-api-access-tfsf9: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 6m51s default-scheduler Successfully assigned db/imgress-db-0 to gke-imgress-default-pool-e9bdef38-hjhv Normal SuccessfulAttachVolume 6m46s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-c6369764-1106-4a7d-887c-0e4009968115" Normal Pulled 6m42s kubelet Successfully pulled image "postgres" in 365.355391ms Normal Pulled 6m39s kubelet Successfully pulled image "postgres" in 236.532115ms Normal Pulled 6m26s kubelet Successfully pulled image "postgres" in 278.12995ms Normal Created 5m59s (x4 over 6m42s) kubelet Created container imgress-db Normal Started 5m59s (x4 over 6m41s) kubelet Started container imgress-db Normal Pulled 5m59s kubelet Successfully pulled image "postgres" in 348.694043ms Normal Pulling 5m10s (x5 over 6m42s) kubelet Pulling image "postgres" Normal Pulled 5m9s kubelet Successfully pulled image "postgres" in 288.848351ms Warning BackOff 95s (x25 over 6m38s) kubelet Back-off restarting failed container Result of kubectl logs imgress-db-0 -n db: The files belonging to this database system will be owned by user "postgres". This user must also own the server process. The database cluster will be initialized with locale "en_US.utf8". The default database encoding has accordingly been set to "UTF8". The default text search configuration will be set to "english". Data page checksums are disabled. initdb: error: directory "/var/lib/postgresql/data" exists but is not empty initdb: detail: It contains a lost+found directory, perhaps due to it being a mount point. initdb: hint: Using a mount point directly as the data directory is not recommended. Create a subdirectory under the mount point. I've already seen some related questions asked on SO but I guess it can be caused by different problems. What I tried: adding user and password as an env variables deleting everything in the namespace and recreating deleting livenessprobe and readinessprobe from the description
Indeed, there seems to be an issue. The docs still suggest that this URL should be used, yet I see the 500 too. Checking the Google Cloud status page ( https://status.cloud.google.com/ ), there could be an issue with Filestore (file access issues for non-root users) with a bunch of affected locations. And right now it works again, so it looks like it was a transient outage on Google's side.
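If you hit this while the outage is still ongoing, one possible workaround (just a sketch, not an official fix) is to let curl retry the request, since 5xx responses from this endpoint tend to be short-lived:
sudo curl --retry 5 --retry-delay 10 -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg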
###Instruction: I am following the instructions on the official Kubernetes documentation to install Kubernetes repository, I run sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg but I get 500 error curl: (22) The requested URL returned error: 500 Visiting the URL or curling it returns 500 Error curl https://packages.cloud.google.com/apt/doc/apt-key.gpg <!DOCTYPE html> <html lang=en> <meta charset=utf-8> <meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width"> <title>Error 500 (Internal Server Error)!!1</title> <style> *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/logos/errorpage/error_logo-150x54.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/logos/errorpage/error_logo-150x54-2x.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/logos/errorpage/error_logo-150x54-2x.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/logos/errorpage/error_logo-150x54-2x.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px} </style> <a href=//www.google.com/><span id=logo aria-label=Google></span></a> <p><b>500.</b> <ins>That’s an error.</ins> <p> <ins>That’s all we know.</ins> Is Google having issues at the moment or am I missing something? This is the official documentation URL (Debian distribution tab) https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/ UPDATE There is a GitHub issue opened as well https://github.com/kubernetes/kubernetes/issues/116068 ### Response: Indeed, there seem to be an issue. Docs still suggests that URL should be used. Yet I see to 500 too. Checking google status page ( https://status.cloud.google.com/ ), there could be an issue with FileStore (file access issues for non-root users). With a bunch of affected locations. and right now it works.
I am following the instructions on the official Kubernetes documentation to install Kubernetes repository, I run sudo curl -fsSLo /etc/apt/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg but I get 500 error curl: (22) The requested URL returned error: 500 Visiting the URL or curling it returns 500 Error curl https://packages.cloud.google.com/apt/doc/apt-key.gpg <!DOCTYPE html> <html lang=en> <meta charset=utf-8> <meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width"> <title>Error 500 (Internal Server Error)!!1</title> <style> *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/logos/errorpage/error_logo-150x54.png) no-repeat;margin-left:-5px}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/logos/errorpage/error_logo-150x54-2x.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/logos/errorpage/error_logo-150x54-2x.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/logos/errorpage/error_logo-150x54-2x.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:54px;width:150px} </style> <a href=//www.google.com/><span id=logo aria-label=Google></span></a> <p><b>500.</b> <ins>That’s an error.</ins> <p> <ins>That’s all we know.</ins> Is Google having issues at the moment or am I missing something? This is the official documentation URL (Debian distribution tab) https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/ UPDATE There is a GitHub issue opened as well https://github.com/kubernetes/kubernetes/issues/116068
I have tried from inside minikube to curl the backend-service-name did not work If you explicitly meant that you've: connected to your minikube instance (i.e. minikube ssh) curl the service name directly from minikube instance. Then it should not work. It will work from a Pod/container that is targeting internal Kubernetes DNS resolution. Let me explain: DNS resolution within Kubernetes environment is handled by coredns residing in kube-system namespace. It is a containerized service that is hosted inside of Kubernetes. Information on how to connect to it is injected to Pods via Kubelet. You can see it by: kubectl run -it basic-pod --image=nginx -- /bin/bash cat /etc/resolv.conf nameserver 10.96.0.10 # <-- SERVICE KUBE-DNS IN KUBE-SYSTEM (CLUSTER-IP) search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5 Minikube itself does not have the core-dns configured in /etc/hosts. Try to contact your Service with an actual Pod: kubectl run -it basic-pod --image=nginx -- /bin/bash apt update && apt install dnsutils -y - nginx image used for simplicity nslookup nginx - there is a Service named nginx in my minikube root@basic-pod:/# nslookup nginx Server: 10.96.0.10 Address: 10.96.0.10#53 Name: nginx.default.svc.cluster.local Address: 10.109.51.22 I encourage you to take a look on following documentation: Kubernetes.io: Docs: Concepts: Services networking: DNS Pod Service
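In practice this means the frontend should reference the backend by its Service DNS name rather than by IP. A minimal sketch for the frontend Deployment (the env var name BACKEND_URL is made up for illustration; use whatever your frontend actually reads):
containers:
  - name: flicron-frontend
    image: <your-frontend-image>
    env:
      - name: BACKEND_URL
        # resolved by CoreDNS from inside any pod in the cluster
        value: "http://flicron-backend-service.default.svc.cluster.local:3000"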
###Instruction: I have a ruby on rails deployment and I want to use it in the frontend deployment so I created a service exposing port 3000 called "flicron-backend-service" here is the description of the service kubectl describe svc flicron-backend-service Name: flicron-backend-service Namespace: default Labels: io.kompose.service=flicron-backend-service Annotations: kompose.cmd: kompose convert -f docker-compose.yml kompose.version: 1.28.0 (c4137012e) Selector: io.kompose.service=flicron-backend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.107.112.244 IPs: 10.107.112.244 Port: 3000 3000/TCP TargetPort: 3000/TCP Endpoints: 10.244.0.144:3000 Session Affinity: None Events: <none> I am trying to use the service name but it does not get resolved I have tried from inside minikube to curl the backend-service-name did not work curl flicron-backend-service:3000 curl: (6) Could not resolve host: flicron-backend-service curl flicron-backend-service.default.svc.cluster.local:3000 curl: (6) Could not resolve host: flicron-backend-service.default.svc.cluster.local but if I used the ip it works fine curl 10.107.112.244:3000 # some HTML Here is my DNS configuration kubectl describe cm coredns -n kube-system Name: coredns Namespace: kube-system Labels: <none> Annotations: <none> Data ==== Corefile: ---- .:53 { log errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } prometheus :9153 hosts { 192.168.49.1 host.minikube.internal fallthrough } forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } BinaryData ==== Events: <none> FrontEnd Service apiVersion: v1 kind: Service metadata: labels: io.kompose.service: flicron-frontend-service name: flicron-frontend-service spec: selector: app: flicron-frontend type: NodePort ports: - name: http port: 4000 targetPort: 4000 nodePort: 32123 backend service apiVersion: v1 kind: Service metadata: annotations: kompose.cmd: kompose convert -f docker-compose.yml kompose.version: 1.28.0 (c4137012e) creationTimestamp: null labels: io.kompose.service: flicron-backend-service name: flicron-backend-service spec: # clusterIP: None ports: - name: "3000" port: 3000 targetPort: 3000 selector: io.kompose.service: flicron-backend # this maps this service to that deployment ### Response: I have tried from inside minikube to curl the backend-service-name did not work If you explicitly meant that you've: connected to your minikube instance (i.e. minikube ssh) curl the service name directly from minikube instance. Then it should not work. It will work from a Pod/container that is targeting internal Kubernetes DNS resolution. Let me explain: DNS resolution within Kubernetes environment is handled by coredns residing in kube-system namespace. It is a containerized service that is hosted inside of Kubernetes. Information on how to connect to it is injected to Pods via Kubelet. You can see it by: kubectl run -it basic-pod --image=nginx -- /bin/bash cat /etc/resolv.conf nameserver 10.96.0.10 # <-- SERVICE KUBE-DNS IN KUBE-SYSTEM (CLUSTER-IP) search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5 Minikube itself does not have the core-dns configured in /etc/hosts. 
Try to contact your Service with an actual Pod: kubectl run -it basic-pod --image=nginx -- /bin/bash apt update && apt install dnsutils -y - nginx image used for simplicity nslookup nginx - there is a Service named nginx in my minikube root@basic-pod:/# nslookup nginx Server: 10.96.0.10 Address: 10.96.0.10#53 Name: nginx.default.svc.cluster.local Address: 10.109.51.22 I encourage you to take a look on following documentation: Kubernetes.io: Docs: Concepts: Services networking: DNS Pod Service
I have a ruby on rails deployment and I want to use it in the frontend deployment so I created a service exposing port 3000 called "flicron-backend-service" here is the description of the service kubectl describe svc flicron-backend-service Name: flicron-backend-service Namespace: default Labels: io.kompose.service=flicron-backend-service Annotations: kompose.cmd: kompose convert -f docker-compose.yml kompose.version: 1.28.0 (c4137012e) Selector: io.kompose.service=flicron-backend Type: ClusterIP IP Family Policy: SingleStack IP Families: IPv4 IP: 10.107.112.244 IPs: 10.107.112.244 Port: 3000 3000/TCP TargetPort: 3000/TCP Endpoints: 10.244.0.144:3000 Session Affinity: None Events: <none> I am trying to use the service name but it does not get resolved I have tried from inside minikube to curl the backend-service-name did not work curl flicron-backend-service:3000 curl: (6) Could not resolve host: flicron-backend-service curl flicron-backend-service.default.svc.cluster.local:3000 curl: (6) Could not resolve host: flicron-backend-service.default.svc.cluster.local but if I used the ip it works fine curl 10.107.112.244:3000 # some HTML Here is my DNS configuration kubectl describe cm coredns -n kube-system Name: coredns Namespace: kube-system Labels: <none> Annotations: <none> Data ==== Corefile: ---- .:53 { log errors health { lameduck 5s } ready kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } prometheus :9153 hosts { 192.168.49.1 host.minikube.internal fallthrough } forward . /etc/resolv.conf { max_concurrent 1000 } cache 30 loop reload loadbalance } BinaryData ==== Events: <none> FrontEnd Service apiVersion: v1 kind: Service metadata: labels: io.kompose.service: flicron-frontend-service name: flicron-frontend-service spec: selector: app: flicron-frontend type: NodePort ports: - name: http port: 4000 targetPort: 4000 nodePort: 32123 backend service apiVersion: v1 kind: Service metadata: annotations: kompose.cmd: kompose convert -f docker-compose.yml kompose.version: 1.28.0 (c4137012e) creationTimestamp: null labels: io.kompose.service: flicron-backend-service name: flicron-backend-service spec: # clusterIP: None ports: - name: "3000" port: 3000 targetPort: 3000 selector: io.kompose.service: flicron-backend # this maps this service to that deployment
You need to specify a real path, not an empty string. Let's say your image has a tmp folder under /opt/spark; then the conf should be set like this:
--conf spark.kubernetes.file.upload.path='local:///opt/spark/tmp'
If you don't want to use the upload path at all, drop that --conf entirely and point spark-submit at an application artifact that is already baked into the image via a local:// URI, so nothing has to be uploaded:
cmd = f"""
{SPARK_HOME}/bin/spark-submit --master {SPARK_MASTER} --deploy-mode cluster --name spark-policy-engine --executor-memory {EXECUTOR_MEMORY} \
--conf spark.executor.instances={N_EXECUTORS} --conf spark.kubernetes.container.image={SPARK_IMAGE} --packages org.mongodb.spark:mongo-spark-connector_2.12:3.0.1,org.apache.hadoop:hadoop-aws:3.3.1,com.amazonaws:aws-java-sdk-bundle:1.11.901,org.apache.hadoop:hadoop-common:3.3.1 --archives pyspark_venv.tar.gz#environment {spark_files} --format_id {format_id} local:///opt/spark/work-dir/xxx.jar
"""
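If you do have remote storage reachable from the cluster, a third option (a sketch, assuming an S3 bucket named my-spark-bucket and the hadoop-aws/aws-sdk packages you already list in --packages) is to point the upload path at that storage instead:
--conf spark.kubernetes.file.upload.path=s3a://my-spark-bucket/spark-uploads \
--conf spark.hadoop.fs.s3a.access.key=<access-key> \
--conf spark.hadoop.fs.s3a.secret.key=<secret-key>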
###Instruction: Im trying to deploy spark (pyspark) in kubernetes using spark-submit, but I'm getting the following error : Exception in thread "main" org.apache.spark.SparkException: Please specify spark.kubernetes.file.upload.path property.    at org.apache.spark.deploy.k8s.KubernetesUtils$.uploadFileUri(KubernetesUtils.scala:330)    at org.apache.spark.deploy.k8s.KubernetesUtils$.renameMainAppResource(KubernetesUtils.scala:300)    at Since i'm packing my dependencies trhough a virtual environment, I don't have the need of specify a remote cluster to retrieve them, so I'm no setting the parameter spark.kubernetes.file.upload.path I tried to include that parameter anyway, leaving an empty value, but it does'nt work. My spark-submit command (which I trigger from a python script) is as follows: cmd = f""" {SPARK_HOME}/bin/spark-submit --master {SPARK_MASTER} --deploy-mode cluster --name spark-policy-engine --executor-memory {EXECUTOR_MEMORY} \ --conf spark.executor.instances={N_EXECUTORS} --conf spark.kubernetes.container.image={SPARK_IMAGE} --conf spark.kubernetes.file.upload.path='' --packages org.mongodb.spark:mongo-spark-connector_2.12:3.0.1,org.apache.hadoop:hadoop-aws:3.3.1,com.amazonaws:aws-java-sdk-bundle:1.11.901,org.apache.hadoop:hadoop-common:3.3.1 --archives pyspark_venv.tar.gz#environment {spark_files} --format_id {format_id} """ As shown I'm including the parameter with within a --conf tag (as shown in https://spark.apache.org/docs/3.0.0-preview/running-on-kubernetes.html#:~:text=It%20can%20be%20found%20in,use%20with%20the%20Kubernetes%20backend.&text=This%20will%20build%20using%20the%20projects%20provided%20default%20Dockerfiles%20.), but wether is present or not, it just doesn't work ### Response: You need to specify a real path not an empty string, let's say in your image you have a tmp folder under /opt/spark, then the conf should be set like this: --conf spark.kubernetes.file.upload.path='local:///opt/spark/tmp' If you don't want to use the cmd = f""" {SPARK_HOME}/bin/spark-submit --master {SPARK_MASTER} --deploy-mode cluster --name spark-policy-engine --executor-memory {EXECUTOR_MEMORY} \ --conf spark.executor.instances={N_EXECUTORS} --conf spark.kubernetes.container.image={SPARK_IMAGE} --packages org.mongodb.spark:mongo-spark-connector_2.12:3.0.1,org.apache.hadoop:hadoop-aws:3.3.1,com.amazonaws:aws-java-sdk-bundle:1.11.901,org.apache.hadoop:hadoop-common:3.3.1 --archives pyspark_venv.tar.gz#environment {spark_files} --format_id {format_id} local:///opt/spark/work-dir/xxx.jar """
Im trying to deploy spark (pyspark) in kubernetes using spark-submit, but I'm getting the following error : Exception in thread "main" org.apache.spark.SparkException: Please specify spark.kubernetes.file.upload.path property.    at org.apache.spark.deploy.k8s.KubernetesUtils$.uploadFileUri(KubernetesUtils.scala:330)    at org.apache.spark.deploy.k8s.KubernetesUtils$.renameMainAppResource(KubernetesUtils.scala:300)    at Since i'm packing my dependencies trhough a virtual environment, I don't have the need of specify a remote cluster to retrieve them, so I'm no setting the parameter spark.kubernetes.file.upload.path I tried to include that parameter anyway, leaving an empty value, but it does'nt work. My spark-submit command (which I trigger from a python script) is as follows: cmd = f""" {SPARK_HOME}/bin/spark-submit --master {SPARK_MASTER} --deploy-mode cluster --name spark-policy-engine --executor-memory {EXECUTOR_MEMORY} \ --conf spark.executor.instances={N_EXECUTORS} --conf spark.kubernetes.container.image={SPARK_IMAGE} --conf spark.kubernetes.file.upload.path='' --packages org.mongodb.spark:mongo-spark-connector_2.12:3.0.1,org.apache.hadoop:hadoop-aws:3.3.1,com.amazonaws:aws-java-sdk-bundle:1.11.901,org.apache.hadoop:hadoop-common:3.3.1 --archives pyspark_venv.tar.gz#environment {spark_files} --format_id {format_id} """ As shown I'm including the parameter with within a --conf tag (as shown in https://spark.apache.org/docs/3.0.0-preview/running-on-kubernetes.html#:~:text=It%20can%20be%20found%20in,use%20with%20the%20Kubernetes%20backend.&text=This%20will%20build%20using%20the%20projects%20provided%20default%20Dockerfiles%20.), but wether is present or not, it just doesn't work
Edit your network settings in /etc/containers/net.d/podman-default-kube-network.conflist and change this line:
"dns_enabled": false,
to this:
"dns_enabled": true,
Then reboot, bring your YAML up again with podman kube play, and try to resolve this name:
postgres_pgadmin-container_1
or maybe this one:
postgres_postgres_1
Edit 1: Just copy the config file:
sudo cp /usr/share/containers/containers.conf /etc/containers/containers.conf
Then, in that file, change the network backend to netavark using the following command:
sed -i "/^\s*\#*\s*network_backend\s*=.*$/ s/^.*$/network_backend = \"netavark\"/" /etc/containers/containers.conf
Note: I think it's better to restart your system in order to apply the changes.
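To double-check that the change took effect, a quick sketch (the container name below is a placeholder; use whatever podman ps shows for your pods):
podman network inspect podman-default-kube-network | grep dns_enabled   # should now print true
podman exec -it <container-name> cat /etc/resolv.conf                   # should point at the network's internal DNS server
podman exec -it <container-name> getent hosts postgres                  # only if getent exists in that image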
###Instruction: I'm using podman 4.5-dev I have two pods deployed using: podman kube play foo.yaml podman kube play bar.yaml I specified the pods' hostnames in the files, but they won't get resolved inside the containers. I verified that the pods are in the same network. Is there some DNS configuration missing? Should I use a Services? The official docs lack of a precise indication about this topic Here's one of the two pods's YAML (the other one has the same keys with different values): apiVersion: v1 kind: Pod metadata: labels: app: postgres name: postgres spec: hostname: postgres containers: - name: pgadmin-container image: docker.io/dpage/pgadmin4:latest ports: - containerPort: 80 hostPort: 9876 Here's some terminal output that could be useful: [daniele@localhost]$ podman pod inspect xptssrv |grep -ni network 25: "HostNetwork": false, 34: "Networks": [ 35: "podman-default-kube-network" 37: "NetworkOptions": null, [daniele@localhost]$ podman pod inspect postgres |grep -ni network 25: "HostNetwork": false, 34: "Networks": [ 35: "podman-default-kube-network" 37: "NetworkOptions": null, [daniele@localhost]$ podman network inspect podman-default-kube-network |grep "" -n 1:[ 2: { 3: "name": "podman-default-kube-network", 4: "id": "a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc", 5: "driver": "bridge", 6: "network_interface": "cni-podman1", 7: "created": "2023-02-21T23:03:25.800256942+01:00", 8: "subnets": [ 9: { 10: "subnet": "10.89.0.0/24", 11: "gateway": "10.89.0.1" 12: } 13: ], 14: "ipv6_enabled": false, 15: "internal": false, 16: "dns_enabled": false, 17: "ipam_options": { 18: "driver": "host-local" 19: } 20: } 21:] [daniele@localhost pods]$ podman exec xptssrv-xptssrv-container cat /etc/resolv.conf nameserver 192.168.1.6 nameserver 8.8.8.8 ### Response: Edit your network settings in /etc/containers/net.d/podman-default-kube-network.conflist Change this line "dns_enabled": false, To this: "dns_enabled": true, Then reboot and up your yaml and try to resolve this: postgres_pgadmin-container_1 Or maybe this: postgres_postgres_1 Edit1 Just copy the config file: sudo cp /usr/share/containers/containers.conf /etc/containers/containers.conf Then in the file change the network backend to netavark using the following command: sed -i "/^\s*\#*\s*network_backend\s*=.*$/ s/^.*$/network_backend = \"netavark\"/" /etc/containers/containers.conf Notice: I think it's better to restart you system in order to apply changes.
I'm using podman 4.5-dev I have two pods deployed using: podman kube play foo.yaml podman kube play bar.yaml I specified the pods' hostnames in the files, but they won't get resolved inside the containers. I verified that the pods are in the same network. Is there some DNS configuration missing? Should I use a Services? The official docs lack of a precise indication about this topic Here's one of the two pods's YAML (the other one has the same keys with different values): apiVersion: v1 kind: Pod metadata: labels: app: postgres name: postgres spec: hostname: postgres containers: - name: pgadmin-container image: docker.io/dpage/pgadmin4:latest ports: - containerPort: 80 hostPort: 9876 Here's some terminal output that could be useful: [daniele@localhost]$ podman pod inspect xptssrv |grep -ni network 25: "HostNetwork": false, 34: "Networks": [ 35: "podman-default-kube-network" 37: "NetworkOptions": null, [daniele@localhost]$ podman pod inspect postgres |grep -ni network 25: "HostNetwork": false, 34: "Networks": [ 35: "podman-default-kube-network" 37: "NetworkOptions": null, [daniele@localhost]$ podman network inspect podman-default-kube-network |grep "" -n 1:[ 2: { 3: "name": "podman-default-kube-network", 4: "id": "a4dcf21f020ee4e36651c11256cbe884182552e835eaaafd409153cd21dca4cc", 5: "driver": "bridge", 6: "network_interface": "cni-podman1", 7: "created": "2023-02-21T23:03:25.800256942+01:00", 8: "subnets": [ 9: { 10: "subnet": "10.89.0.0/24", 11: "gateway": "10.89.0.1" 12: } 13: ], 14: "ipv6_enabled": false, 15: "internal": false, 16: "dns_enabled": false, 17: "ipam_options": { 18: "driver": "host-local" 19: } 20: } 21:] [daniele@localhost pods]$ podman exec xptssrv-xptssrv-container cat /etc/resolv.conf nameserver 192.168.1.6 nameserver 8.8.8.8
There is no difference: serviceAccount is DEPRECATED and you should use serviceAccountName instead. Quoting from the Kubernetes API docs on the Pod spec, field serviceAccount: "DeprecatedServiceAccount is a deprecated alias for ServiceAccountName. Deprecated: Use serviceAccountName instead."
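So in new manifests, prefer the non-deprecated field, for example:
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  serviceAccountName: my-service-account   # use this instead of the deprecated serviceAccount
  containers:
    - name: app
      image: nginx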
###Instruction: I have seen both serviceAccount and serviceAccountName been used in a pod manifest. What is the difference? ### Response: There is no difference serviceAccount is DEPRECATED and you shoud use serviceAccountName instead. Quoting from the Kubernetes API docs > pod spec: serviceAccount: DeprecatedServiceAccount is a deprecated alias for ServiceAccountName: Deprecated: Use serviceAccountName instead
I have seen both serviceAccount and serviceAccountName been used in a pod manifest. What is the difference?
If by hybrid k8s cluster you mean a cluster that has nodes over different cloud providers, then yes, that is entirely possible. You can create a simple example cluster of this by using k3s (lightweight Kubernetes) and then using the --node-external-ip flag. This tells your nodes to talk to each other via their public IP. This sort of setup is described in Running in Multiple Zones in the Kubernetes documentation. You will have to configure the different places where you run nodes as different zones. You can fix storage on a cluster like this by using CSI drivers for the different environments you use, like AWS, GCP, AKS, etc. When you then deploy a PVC and it creates a PV at AWS, for example, any pod that mounts this volume will always be scheduled in the zone the PV resides in; otherwise scheduling would be impossible. I personally am not running this setup in production, but I am using a technique that also suits this multiple-zones idea with regards to networking. To save money on my personal cluster, I am telling my Nginx ingress controller not to create a LoadBalancer resource and to run the controllers as a DaemonSet. The Nginx controller pods have a HostPort open on the node they run on (since it's a DaemonSet there won't be more than one of those pods per node), and this HostPort opens ports 80 and 443 on the host. When you then add more nodes, every one of the nodes with an ingress controller pod on it will become an ingress entry point. Just set up your DNS records to include all of those nodes and you'll have them load balanced.
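As a concrete illustration of the k3s part, here is a rough sketch of the join commands (the token path, IPs and exact flag set are assumptions that depend on your install method; adjust to your environment):
# on the first node (e.g. a cloud VM); its node token ends up in /var/lib/rancher/k3s/server/node-token
k3s server --node-external-ip <this-node-public-ip>
# on every additional node, wherever it runs (another cloud, on-prem, ...)
k3s agent --server https://<server-public-ip>:6443 --token <node-token> --node-external-ip <this-node-public-ip>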
###Instruction: I am a student and have to make a Bachelor thesis for my school. Is it possible to make a hybrid Kubernetes cluster, and how is this possible? Is there a good application I can run in this cluster to show that it works? I have made an AKS cluster and an on-prem cluster. Then I made an nginx load balancer and load balanced the two, but the application isn't synced (which is logical). I tried using Rancher but somehow I always got errors while trying to make a local cluster. Is it possible to have the storage synced somehow and be able to control the two clusters from one place, or just make them one cluster? I have found you can use Azure Arc with Azure AKS, is this a viable solution? Should I use a VPN instead? ### Response: If by hybrid k8s cluster you mean a cluster that has nodes across different cloud providers, then yes, that is entirely possible. You can create a simple example cluster of this by using k3s (lightweight Kubernetes) and then using the --node-external-ip flag. This tells your nodes to talk to each other via their public IP. This sort of setup is described in Running in Multiple Zones in the Kubernetes documentation. You will have to configure the different locations where you place nodes as different zones. You can handle storage on a cluster like this by using CSI drivers for the different environments you use, like AWS, GCP, AKS, etc. When you then deploy a PVC and it creates a PV at AWS, for example, any pod that mounts this volume will always be scheduled in the zone the PV resides in; otherwise scheduling would be impossible. I personally am not running this setup in production, but I am using a technique that also suits this multiple-zones idea with regards to networking. To save money on my personal cluster, I am telling my Nginx ingress controller not to make a LoadBalancer resource and to run the controllers as a DaemonSet. The Nginx controller pods have a HostPort open on the node they run on (since it's a DaemonSet there won't be more than one of those pods per node), and this HostPort opens ports 80 and 443 on the host. When you then add more nodes, every one of the nodes with an ingress controller pod on it will become an ingress entrypoint. Just set up your DNS records to include all of those nodes and you'll have them load balanced.
I am a student and have to make a Bachelor thesis for my school. Is it possible to make a hybrid Kubernetes cluster, and how is this possible? Is there a good application I can run in this cluster to show that it works? I have made an AKS cluster and an on-prem cluster. Then I made an nginx load balancer and load balanced the two, but the application isn't synced (which is logical). I tried using Rancher but somehow I always got errors while trying to make a local cluster. Is it possible to have the storage synced somehow and be able to control the two clusters from one place, or just make them one cluster? I have found you can use Azure Arc with Azure AKS, is this a viable solution? Should I use a VPN instead?
My solution was to write the credentials to a file and create a background goroutine to refresh that file. I can then tell aws-iam-authenticator to read the credentials from the file via the AWS_SHARED_CREDENTIALS_FILE environment variable. This might also be possible using AWS_WEB_IDENTITY_TOKEN_FILE to save some steps, but I didn't look further. The updated code looks like this func updateCredentials(ctx context.Context) error { creds, err := c.credentialsProvider.Retrieve(ctx) if err != nil { return err } s := fmt.Sprintf(`[default] aws_access_key_id=%s aws_secret_access_key=%s aws_session_token=%s`, creds.AccessKeyID, creds.SecretAccessKey, creds.SessionToken) return os.WriteFile(credentialsFile.Name(), []byte(s), 0666) } func updateCredentialsLoop(ctx context.Context) { for { _ = updateCredentials(ctx) // TODO: log/handle the error time.Sleep(5*time.Minute) } } credentialsProvider := aws.NewCredentialsCache(stscreds.NewWebIdentityRoleProvider(...)) credentialsFile, err := os.CreateTemp("", "credentials") updateCredentials(ctx) go updateCredentialsLoop(ctx) config := clientcmdapi.NewConfig() // ... config.AuthInfos["eks"] = &clientcmdapi.AuthInfo{ Exec: &clientcmdapi.ExecConfig{ Command: "aws-iam-authenticator", Args: []string{ "token", "-i", clusterName, }, Env: []clientcmdapi.ExecEnvVar{ { Name: "AWS_SHARED_CREDENTIALS_FILE", Value: credentialsFile.Name(), }, }, APIVersion: "client.authentication.k8s.io/v1beta1", InteractiveMode: clientcmdapi.NeverExecInteractiveMode, }, } restConfig, err := config.ClientConfig() clientset, err = kubernetes.NewForConfig(restConfig) informerFactory := informers.NewSharedInformerFactory(clientset, time.Second*30) podInformer := cw.informerFactory.Core().V1().Pods().Informer()
###Instruction: I'm trying to construct a Kubernetes informer outside of the EKS cluster that it's watching. I'm using aws-iam-authenticator plugin to provide the exec-based credentials to the EKS cluster. For the plugin to work, I'm assuming an IAM role and passing the AWS IAM credentials as environment variables. The problem is that these credentials expire after an hour and cause the informer to fail with E0301 23:34:22.167817 582 runtime.go:79] Observed a panic: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"the server has asked for the client to provide credentials (get pods)", Reason:"Unauthorized", Details:(*v1.StatusDetails)(0xc0005b0300), Code:401}} (the server has asked for the client to provide credentials (get pods)) Is there a better way of getting ClientConfig and aws-iam-authenticator to refresh the credentials? Here's a rough skeleton of my code: credentialsProvider := aws.NewCredentialsCache(stscreds.NewWebIdentityRoleProvider(...)) creds, err := credentialsProvider.Retrieve(ctx) config := clientcmdapi.NewConfig() // ... config.AuthInfos["eks"] = &clientcmdapi.AuthInfo{ Exec: &clientcmdapi.ExecConfig{ Command: "aws-iam-authenticator", Args: []string{ "token", "-i", clusterName, }, // These env vars are static! :( Env: []clientcmdapi.ExecEnvVar{ { Name: "AWS_ACCESS_KEY_ID", Value: creds.AccessKeyID, }, { Name: "AWS_SECRET_ACCESS_KEY", Value: creds.SecretAccessKey, }, { Name: "AWS_SESSION_TOKEN", Value: creds.SessionToken, }, }, APIVersion: "client.authentication.k8s.io/v1beta1", InteractiveMode: clientcmdapi.NeverExecInteractiveMode, }, } restConfig, err := config.ClientConfig() clientset, err = kubernetes.NewForConfig(restConfig) informerFactory := informers.NewSharedInformerFactory(clientset, time.Second*30) podInformer := cw.informerFactory.Core().V1().Pods().Informer() Here are a couple similar threads I found: Kubernetes client-go informers getting "Unauthorized" error after 15 mins https://github.com/kubernetes/client-go/issues/1189 ### Response: My solution was to create write the credentials to a file and create a background thread to refresh that file. I can then pass tell aws-iam-authenticator to read the credentials from the file via the AWS_SHARED_CREDENTIALS_FILE environment variable. This might also be possible using AWS_WEB_IDENTITY_TOKEN_FILE to save some steps, but I didn't look further. The updated code looks like this func updateCredentials(ctx context.Context) { creds, err := c.credentialsProvider.Retrieve(ctx) s := fmt.Sprintf(`[default] aws_access_key_id=%s aws_secret_access_key=%s aws_session_token=%s`, creds.AccessKeyID, creds.SecretAccessKey, creds.SessionToken) err = os.WriteFile(credentialsFile.Name(), []byte(s), 0666) return nil } func updateCredentialsLoop(ctx context.Context) { for { err := updateCredentials(ctx) time.Sleep(5*time.Minute) } } credentialsProvider := aws.NewCredentialsCache(stscreds.NewWebIdentityRoleProvider(...)) credentialsFile, err := os.CreateTemp("", "credentials") updateCredentials(ctx) go updateCredentialsLoop(ctx) config := clientcmdapi.NewConfig() // ... 
config.AuthInfos["eks"] = &clientcmdapi.AuthInfo{ Exec: &clientcmdapi.ExecConfig{ Command: "aws-iam-authenticator", Args: []string{ "token", "-i", clusterName, }, Env: []clientcmdapi.ExecEnvVar{ { Name: "AWS_SHARED_CREDENTIALS_FILE", Value: credentialsFile.Name(), }, }, APIVersion: "client.authentication.k8s.io/v1beta1", InteractiveMode: clientcmdapi.NeverExecInteractiveMode, }, } restConfig, err := config.ClientConfig() clientset, err = kubernetes.NewForConfig(restConfig) informerFactory := informers.NewSharedInformerFactory(clientset, time.Second*30) podInformer := cw.informerFactory.Core().V1().Pods().Informer()
I'm trying to construct a Kubernetes informer outside of the EKS cluster that it's watching. I'm using aws-iam-authenticator plugin to provide the exec-based credentials to the EKS cluster. For the plugin to work, I'm assuming an IAM role and passing the AWS IAM credentials as environment variables. The problem is that these credentials expire after an hour and cause the informer to fail with E0301 23:34:22.167817 582 runtime.go:79] Observed a panic: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"the server has asked for the client to provide credentials (get pods)", Reason:"Unauthorized", Details:(*v1.StatusDetails)(0xc0005b0300), Code:401}} (the server has asked for the client to provide credentials (get pods)) Is there a better way of getting ClientConfig and aws-iam-authenticator to refresh the credentials? Here's a rough skeleton of my code: credentialsProvider := aws.NewCredentialsCache(stscreds.NewWebIdentityRoleProvider(...)) creds, err := credentialsProvider.Retrieve(ctx) config := clientcmdapi.NewConfig() // ... config.AuthInfos["eks"] = &clientcmdapi.AuthInfo{ Exec: &clientcmdapi.ExecConfig{ Command: "aws-iam-authenticator", Args: []string{ "token", "-i", clusterName, }, // These env vars are static! :( Env: []clientcmdapi.ExecEnvVar{ { Name: "AWS_ACCESS_KEY_ID", Value: creds.AccessKeyID, }, { Name: "AWS_SECRET_ACCESS_KEY", Value: creds.SecretAccessKey, }, { Name: "AWS_SESSION_TOKEN", Value: creds.SessionToken, }, }, APIVersion: "client.authentication.k8s.io/v1beta1", InteractiveMode: clientcmdapi.NeverExecInteractiveMode, }, } restConfig, err := config.ClientConfig() clientset, err = kubernetes.NewForConfig(restConfig) informerFactory := informers.NewSharedInformerFactory(clientset, time.Second*30) podInformer := cw.informerFactory.Core().V1().Pods().Informer() Here are a couple similar threads I found: Kubernetes client-go informers getting "Unauthorized" error after 15 mins https://github.com/kubernetes/client-go/issues/1189
Not really, unless you are using indexed jobs (Kubernetes 1.21, Apr. 2021). For indexed jobs, the index is exposed to each Pod in the batch.kubernetes.io/job-completion-index annotation and the JOB_COMPLETION_INDEX environment variable. Official documentation: "Indexed Job for Parallel Processing with Static Work Assignment" You can use the built-in JOB_COMPLETION_INDEX environment variable set by the Job controller for all containers. Optionally, you can define your own environment variable through the downward API to publish the index to containers. There is also kubernetes/enhancements PR2630, where the Pods' hostnames are set to $(job-name)-$(index). This is not yet integrated into Kubernetes, but could mean you can derive the pod hostname from the job name and index, allowing you to get its IP. That means pods can address each other with a DNS lookup and communicate directly using Pod IPs.
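A minimal sketch of an Indexed Job that reads the index inside the container (the job name and image are illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: indexed-demo              # illustrative name
spec:
  completions: 3
  parallelism: 3
  completionMode: Indexed         # this is what enables the per-pod index
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo my index is $JOB_COMPLETION_INDEX"]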
###Instruction: In Nomad, we have an env variable named NOMAD_ALLOC_INDEX that gives me the index of the container. Is there a similar env variable in Kubernetes for the pods to get the pod index? Could you please provide your inputs? Thanks, Sarita ### Response: Not really, unless you are using indexed jobs (Kubernetes 1.21, Apr. 2021). For indexed jobs, the index is exposed to each Pod in the batch.kubernetes.io/job-completion-index annotation and the JOB_COMPLETION_INDEX environment variable. Official documentation: "Indexed Job for Parallel Processing with Static Work Assignment" You can use the built-in JOB_COMPLETION_INDEX environment variable set by the Job controller for all containers. Optionally, you can define your own environment variable through the downward API to publish the index to containers. There is also kubernetes/enhancements PR2630, where the Pods' hostnames are set to $(job-name)-$(index). This is not yet integrated into Kubernetes, but could mean you can derive the pod hostname from the job name and index, allowing you to get its IP. That means pods can address each other with a DNS lookup and communicate directly using Pod IPs.
In Nomad, we have an env variable named NOMAD_ALLOC_INDEX that gives me the index of the container. Is there a similar env variable in Kubernetes for the pods to get the pod index? Could you please provide your inputs? Thanks, Sarita
This is an issue I am running into too, which led me here. It looks like it was just recently fixed with this patch: https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/release-notes.html#id2 which will evict pods requesting GPUs while the operator starts up again. This should solve your issue as it did mine.
###Instruction: I have run into an issue where helm installing my charts will work fine, but when I go to restart the system, the nvidia gpu operator will fail to validate. Bootstrapping is simple: $ microk8s enable gpu < watching dashboard for all the pods to turn green > $ microk8s helm install -n morpheus morpheus-ai-engine morpheus-ai-engine < watching for the morpheus pods to turn green > Now I can check if the ai-engine pod has GPU access: $ kubectl exec ai-engine-897d65cff-b2trz -- nvidia-smi Wed Feb 22 16:35:32 2023 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 525.78.01 Driver Version: 525.78.01 CUDA Version: 12.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Quadro P400 Off | 00000000:04:00.0 Off | N/A | | 0% 38C P8 N/A / 30W | 98MiB / 2048MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| +-----------------------------------------------------------------------------+ Running the test vector-add pod returns a Test PASSED. The trouble comes when I restart microk8s. The nvidia-device-plugin-validator pod fails to load with an UnexpectedAdmissionError claiming that no GPUs are available. And running nvidia-smi in the ai-engine pod returns a "command not found". The vector-add test pod won't start due to insufficient GPUs. But if I uninstall the ai-engine chart and restart microk8s (waiting for the gpu operator pods to all turn green), I can then reinstall ai-engine and it works fine again, as does the vector-add test. ### Response: This is an issue I am comming across too which lead me hear, it looks like it was just recently fixed with this patch https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/release-notes.html#id2 Which will evict pods requesting gpus while the operator starts up again. This should solve your issue as it did mine.
I have run into an issue where helm installing my charts will work fine, but when I go to restart the system, the nvidia gpu operator will fail to validate. Bootstrapping is simple: $ microk8s enable gpu < watching dashboard for all the pods to turn green > $ microk8s helm install -n morpheus morpheus-ai-engine morpheus-ai-engine < watching for the morpheus pods to turn green > Now I can check if the ai-engine pod has GPU access: $ kubectl exec ai-engine-897d65cff-b2trz -- nvidia-smi Wed Feb 22 16:35:32 2023 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 525.78.01 Driver Version: 525.78.01 CUDA Version: 12.0 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Quadro P400 Off | 00000000:04:00.0 Off | N/A | | 0% 38C P8 N/A / 30W | 98MiB / 2048MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| +-----------------------------------------------------------------------------+ Running the test vector-add pod returns a Test PASSED. The trouble comes when I restart microk8s. The nvidia-device-plugin-validator pod fails to load with an UnexpectedAdmissionError claiming that no GPUs are available. And running nvidia-smi in the ai-engine pod returns a "command not found". The vector-add test pod won't start due to insufficient GPUs. But if I uninstall the ai-engine chart and restart microk8s (waiting for the gpu operator pods to all turn green), I can then reinstall ai-engine and it works fine again, as does the vector-add test.
Answering the first part: Output of rpm -q --changes which-2.21-28.el9.x86_64 * Fri Apr 15 14:00:00 2022 Than Ngo <than@redhat.com> - 2.21-28 - Resolves: #2050996, error on login when using ksh as the default shell * Tue Aug 10 14:00:00 2021 Mohan Boddu <mboddu@redhat.com> - 2.21-27 - Rebuilt for IMA sigs, glibc 2.34, aarch64 flags Related: rhbz#1991688 * Fri May 7 14:00:00 2021 Than Ngo <than@redhat.com> - 2.21-26 - Related: #1940464, fixed unbound variable * Fri Apr 16 14:00:00 2021 Mohan Boddu <mboddu@redhat.com> - 2.21-25 - Rebuilt for RHEL 9 BETA on Apr 15th 2021. Related: rhbz#1947937 * Tue Mar 23 13:00:00 2021 Than Ngo <than@redhat.com> - 2.21-24 - fix regression in zsh * Sun Mar 21 13:00:00 2021 Than Ngo <than@redhat.com> - 2.21-23 - improved which2.sh Possibly than@redhat.com can elaborate on what is going on? Or, since you use RHEL, you could contact Red Hat support.
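As for where the variable comes from: on recent bash versions, exported shell functions are encoded in the environment as BASH_FUNC_<name>%% variables, and which2.sh exports the which function with export -f, so every child process inherits that entry. A small demonstration you can try in a bash shell (the function name is illustrative):

myfunc() { echo hello; }
export -f myfunc
env | grep 'BASH_FUNC_myfunc'
# prints something like:
# BASH_FUNC_myfunc%%=() {  echo hello
# }

Kubernetes then rejects that inherited environment entry because % is not a valid character in an env var name, which matches the error in the question.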
###Instruction: Problem After moving to RHEL 8.5 from 8.4, started having the issue of K8S pods failure. spec.template.spec.containers[0].env[52].name: Invalid value: "BASH_FUNC_which%%": a valid environment variable name must consist of alphabetic characters, digits, '_', '-', or '.', and must not start with a digit (e.g. 'my.env-name', or 'MY_ENV.NAME', or 'MyEnvName1', regex used for validation is '[-._a-zA-Z][-._a-zA-Z0-9]*') The env command in the login shell shows BASH_FUNC_which%% defined as below. BASH_FUNC_which%%=() { ( alias; eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot "$@" } Suggeted that /etc/profile.d/which2.sh is the one that sets up the BASH_FUNC_which%%. /etc/profile.d/which2.sh # shellcheck shell=sh # Initialization script for bash, sh, mksh and ksh which_declare="declare -f" which_opt="-f" which_shell="$(cat /proc/$$/comm)" if [ "$which_shell" = "ksh" ] || [ "$which_shell" = "mksh" ] || [ "$which_shell" = "zsh" ] ; then which_declare="typeset -f" which_opt="" fi which () { (alias; eval ${which_declare}) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot "$@" } export which_declare export ${which_opt} which By removing it, the issue was fixed. Question Please help understand where exactly BASH_FUNC_which%% is setup in RHEL8.5 and what is the purpose of this BASH_FUNC_which%%, why is has been introduced in RHEL. ### Response: Answering the first part: Output of rpm -q --changes which-2.21-28.el9.x86_64 * Fri Apr 15 14:00:00 2022 Than Ngo <than@redhat.com> - 2.21-28 - Resolves: #2050996, error on login when using ksh as the default shell * Tue Aug 10 14:00:00 2021 Mohan Boddu <mboddu@redhat.com> - 2.21-27 - Rebuilt for IMA sigs, glibc 2.34, aarch64 flags Related: rhbz#1991688 * Fri May 7 14:00:00 2021 Than Ngo <than@redhat.com> - 2.21-26 - Related: #1940464, fixed unbound variable * Fri Apr 16 14:00:00 2021 Mohan Boddu <mboddu@redhat.com> - 2.21-25 - Rebuilt for RHEL 9 BETA on Apr 15th 2021. Related: rhbz#1947937 * Tue Mar 23 13:00:00 2021 Than Ngo <than@redhat.com> - 2.21-24 - fix regression in zsh * Sun Mar 21 13:00:00 2021 Than Ngo <than@redhat.com> - 2.21-23 - improved which2.sh Possibly, than@redhat.com can elaborate what is going on? Or, since you use RHEL, you could contact RedHat support?
Problem After moving to RHEL 8.5 from 8.4, I started having the issue of K8S pod failures. spec.template.spec.containers[0].env[52].name: Invalid value: "BASH_FUNC_which%%": a valid environment variable name must consist of alphabetic characters, digits, '_', '-', or '.', and must not start with a digit (e.g. 'my.env-name', or 'MY_ENV.NAME', or 'MyEnvName1', regex used for validation is '[-._a-zA-Z][-._a-zA-Z0-9]*') The env command in the login shell shows BASH_FUNC_which%% defined as below. BASH_FUNC_which%%=() { ( alias; eval ${which_declare} ) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot "$@" } It was suggested that /etc/profile.d/which2.sh is the one that sets up BASH_FUNC_which%%. /etc/profile.d/which2.sh # shellcheck shell=sh # Initialization script for bash, sh, mksh and ksh which_declare="declare -f" which_opt="-f" which_shell="$(cat /proc/$$/comm)" if [ "$which_shell" = "ksh" ] || [ "$which_shell" = "mksh" ] || [ "$which_shell" = "zsh" ] ; then which_declare="typeset -f" which_opt="" fi which () { (alias; eval ${which_declare}) | /usr/bin/which --tty-only --read-alias --read-functions --show-tilde --show-dot "$@" } export which_declare export ${which_opt} which By removing it, the issue was fixed. Question Please help me understand where exactly BASH_FUNC_which%% is set up in RHEL 8.5, what the purpose of this BASH_FUNC_which%% is, and why it has been introduced in RHEL.
The issue can be solved by providing a valid GitHub API token via the GITHUB_TOKEN environment variable. Thanks a lot, Christian Nunciato, for the hint. However, if anybody knows some background about this and why it started happening so suddenly, I would be very interested in some insights/explanations.
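Roughly, the fix looks like this (the token value is a placeholder; the install command is the one from the error message):

export GITHUB_TOKEN=<your-github-personal-access-token>   # placeholder value
pulumi plugin install resource kubernetes                  # should now download without the 401

Pulumi uses the token for its GitHub requests when downloading the plugin, which presumably avoids the limits applied to anonymous requests.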
###Instruction: I have a little Go app which creates YAML resources for me which I then deploy to a Kubernetes cluster. It worked quite well, but since a few hours ago (?) it fails saying: error: Could not automatically download and install resource plugin 'pulumi-resource-kubernetes', install the plugin using pulumi plugin install resource kubernetes. Underlying error: 401 HTTP error fetching plugin from https://api.github.com/repos/pulumi/pulumi-kubernetes/releases/latest Executing pulumi plugin install resource kubernetes manually returns the same result: error: 401 HTTP error fetching plugin from https://api.github.com/repos/pulumi/pulumi-kubernetes/releases/latest I have no idea what's wrong; I don't really get the message and more detail is not available - as far as I can see. The link (https://api.github.com/repos/pulumi/pulumi-kubernetes/releases/latest) seems to work properly. If a GitHub API throttle is the reason, I don't see where to place an API key. I am running macOS Ventura 13.1 (22C65) on a MacBook Pro 2019. No recent changes here. Go has version 1.19 and the lib is github.com/pulumi/pulumi-kubernetes/sdk/v3 v3.24.1 Any hint is highly appreciated. ### Response: The issue can be solved by providing a valid GitHub API token via the GITHUB_TOKEN environment variable. Thanks a lot, Christian Nunciato, for the hint. However, if anybody knows some background about this and why it started happening so suddenly, I would be very interested in some insights/explanations.
I have a little Go app which creates YAML resources for me which I then deploy to a Kubernetes cluster. It worked quite well, but since a few hours ago (?) it fails saying: error: Could not automatically download and install resource plugin 'pulumi-resource-kubernetes', install the plugin using pulumi plugin install resource kubernetes. Underlying error: 401 HTTP error fetching plugin from https://api.github.com/repos/pulumi/pulumi-kubernetes/releases/latest Executing pulumi plugin install resource kubernetes manually returns the same result: error: 401 HTTP error fetching plugin from https://api.github.com/repos/pulumi/pulumi-kubernetes/releases/latest I have no idea what's wrong; I don't really get the message and more detail is not available - as far as I can see. The link (https://api.github.com/repos/pulumi/pulumi-kubernetes/releases/latest) seems to work properly. If a GitHub API throttle is the reason, I don't see where to place an API key. I am running macOS Ventura 13.1 (22C65) on a MacBook Pro 2019. No recent changes here. Go has version 1.19 and the lib is github.com/pulumi/pulumi-kubernetes/sdk/v3 v3.24.1 Any hint is highly appreciated.
Update: This problem was resolved after upgrading Docker Desktop to version 4.20.1, which uses a new docker/desktop-vpnkit-controller image tagged with dc331cb22850be0cdd97c84a9cfecaf44a1afb6e Original Answer You can get the log of the crashed vpnkit-controller pod via the following command: kubectl logs -p -n kube-system vpnkit-controller For me this shows the following: 2023/03/09 03:45:54 Starting kube-vpnkit-forwarder... W0309 04:03:37.888472 1 reflector.go:334] github.com/moby/vpnkit/go/cmd/kube-vpnkit-forwarder/main.go:49: watch of *v1.Service ended with: The resourceVersion for the provided watch is too old. log: exiting because of error: log: cannot create log: open /tmp/kube-vpnkit-forwarder.vpnkit-controller.unknownuser.log.WARNING.20230309-040337.1: no such file or directory
###Instruction: I've just enabled Docker Desktop's Kubernetes feature on my M2 MacBook Air, and I observed that the vpnkit-controller pod under the kube-system namespace has restarted several times since enabling the feature. Is this normal, or did I miss some settings? screen shot of pod status ### Response: Update: This problem was resolved after upgrading Docker Desktop to version 4.20.1, which uses a new docker/desktop-vpnkit-controller image tagged with dc331cb22850be0cdd97c84a9cfecaf44a1afb6e Original Answer You can get the log of the crashed vpnkit-controller pod via the following command: kubectl logs -p -n kube-system vpnkit-controller For me this shows the following: 2023/03/09 03:45:54 Starting kube-vpnkit-forwarder... W0309 04:03:37.888472 1 reflector.go:334] github.com/moby/vpnkit/go/cmd/kube-vpnkit-forwarder/main.go:49: watch of *v1.Service ended with: The resourceVersion for the provided watch is too old. log: exiting because of error: log: cannot create log: open /tmp/kube-vpnkit-forwarder.vpnkit-controller.unknownuser.log.WARNING.20230309-040337.1: no such file or directory
I've just enabled Docker Desktop's Kubernetes feature on my M2 MacBook Air, and I observed that the vpnkit-controller pod under the kube-system namespace has restarted several times since enabling the feature. Is this normal, or did I miss some settings? screen shot of pod status
"pod will remember the PVC it is using, but am struggling to find the official documentation to support this" I think here is the exact line you are looking for: You can set the .spec.volumeClaimTemplates which can provide stable storage using PersistentVolumes provisioned by a PersistentVolume Provisioner. Ref doc: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#volume-claim-templates The general flow goes like this: You have a StorageClass (backed by a CSI driver) with a list of zones. K8s goes round robin over those zones to create the PV. Based on the PVC, the pod accesses the volume and is scheduled to that AZ.
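As a sketch of how the pieces fit together (the names and the storage class are illustrative; a StorageClass with volumeBindingMode: WaitForFirstConsumer is the usual way to delay PV creation until the pod is scheduled, so both end up in the same zone):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                       # illustrative name
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:           # one PVC per pod, reattached to the same pod identity on restarts
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: zonal-sc  # assumed StorageClass, ideally with WaitForFirstConsumer binding
      resources:
        requests:
          storage: 1Gi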
###Instruction: How does a StatefulSet ensure that a PersistentVolume and Pod will always be provisioned in the same Availability Zone? I understand that each pod in a StatefulSet has a storage identity, and that each pod will remember the PVC it is using, but am struggling to find the official documentation to support this. ### Response: "pod will remember the PVC it is using, but am struggling to find the official documentation to support this" I think here is the exact line you are looking for: You can set the .spec.volumeClaimTemplates which can provide stable storage using PersistentVolumes provisioned by a PersistentVolume Provisioner. Ref doc: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#volume-claim-templates The general flow goes like this: You have a StorageClass (backed by a CSI driver) with a list of zones. K8s goes round robin over those zones to create the PV. Based on the PVC, the pod accesses the volume and is scheduled to that AZ.
How does a StatefulSet ensure that a PersistentVolume and Pod will always be provisioned in the same Availability Zone? I understand that each pod in a StatefulSet has a storage identity, and that each pod will remember the PVC it is using, but am struggling to find the official documentation to support this.
We faced the same issue: we were able to reach out through IPv4 but not with IPv6. We did the following: # Reset the Cluster kubeadm reset # Create the cluster by adding the IPv6 CIDR along with the IPv4 address, and also enable IPv6DualStack # NOTE: The IPv6 given below matches the first 4 octets of the Host IPv6 address, i.e., fde1 kubeadm init --kubernetes-version=v1.21.1 --apiserver-advertise-address=172.16.2.1 --feature-gates="IPv6DualStack=true" --pod-network-cidr=172.16.0.0/24,fde1::/64 --service-cidr=172.16.1.0/24,fde1::/112 By now we will have an IPv6 address starting with "fde1:" on the interfaces in all our pods. Even after configuring the above, we were not able to ping external IPv6 addresses. We found that the packet reaches the target machine and comes back to the k8s host machine, but didn't reach the respective pod. We found that we were missing some ip6tables rules, so we enabled natOutgoing as below: # cat /etc/calico/default-ipv6-ippool.yaml apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: default-ipv6-ippool spec: blockSize: 122 cidr: fde1::/64 ipipMode: Never nodeSelector: all() natOutgoing: true vxlanMode: Never # calicoctl apply -f /etc/calico/default-ipv6-ippool.yaml The above created the entries in ip6tables, and now we are able to ping external IPv6 addresses.
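One way to sanity-check the result (a rough sketch; this assumes the iptables dataplane, and the exact chain/rule names vary between Calico versions):

# look for the IPv6 NAT (masquerade) rule Calico programs for natOutgoing pools
sudo ip6tables -t nat -S | grep -i masquerade

# and try an external ping from inside a pod (needs an image whose ping supports -6)
kubectl exec <some-pod> -- ping -6 -c 3 2001:4860:4860::8888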
###Instruction: Below is the manifest file i used to enable calico CNI for k8s, pods are able to communicate over ipv4 but i am unable to reach outside using ipv6, k8s version v1.14 and calico version v3.11, am i missing some settings, forwarding is enabled on host with "sysctl -w net.ipv6.conf.all.forwarding=1" --- # Source: calico/templates/calico-config.yaml # This ConfigMap is used to configure a self-hosted Calico installation. kind: ConfigMap apiVersion: v1 metadata: name: calico-config namespace: kube-system data: # Typha is disabled. typha_service_name: "none" # Configure the backend to use. calico_backend: "vxlan" # Configure the MTU to use veth_mtu: "1440" # The CNI network configuration to install on each node. The special # values in this config will be automatically populated. cni_network_config: |- { "name": "k8s-pod-network", "cniVersion": "0.3.1", "plugins": [ { "type": "calico", "log_level": "info", "datastore_type": "kubernetes", "nodename": "__KUBERNETES_NODE_NAME__", "mtu": __CNI_MTU__, "ipam": { "type": "calico-ipam", "assign_ipv4": "true", "assign_ipv6": "true" }, "container_settings": { "allow_ip_forwarding": true }, "policy": { "type": "k8s" }, "kubernetes": { "kubeconfig": "__KUBECONFIG_FILEPATH__" } }, { "type": "portmap", "snat": true, "capabilities": {"portMappings": true} } ] } --- # Source: calico/templates/kdd-crds.yaml apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: felixconfigurations.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: FelixConfiguration plural: felixconfigurations singular: felixconfiguration --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: ipamblocks.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: IPAMBlock plural: ipamblocks singular: ipamblock --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: blockaffinities.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: BlockAffinity plural: blockaffinities singular: blockaffinity --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: ipamhandles.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: IPAMHandle plural: ipamhandles singular: ipamhandle --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: ipamconfigs.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: IPAMConfig plural: ipamconfigs singular: ipamconfig --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: ippools.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: IPPool plural: ippools singular: ippool --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: hostendpoints.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: HostEndpoint plural: hostendpoints singular: hostendpoint --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: clusterinformations.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: ClusterInformation plural: clusterinformations singular: clusterinformation --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: 
globalnetworkpolicies.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: GlobalNetworkPolicy plural: globalnetworkpolicies singular: globalnetworkpolicy --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: globalnetworksets.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: GlobalNetworkSet plural: globalnetworksets singular: globalnetworkset --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: networkpolicies.crd.projectcalico.org spec: scope: Namespaced group: crd.projectcalico.org version: v1 names: kind: NetworkPolicy plural: networkpolicies singular: networkpolicy --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: networksets.crd.projectcalico.org spec: scope: Namespaced group: crd.projectcalico.org version: v1 names: kind: NetworkSet plural: networksets singular: networkset --- # Source: calico/templates/rbac.yaml # Include a clusterrole for the kube-controllers component, # and bind it to the calico-kube-controllers serviceaccount. kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: calico-kube-controllers rules: # Nodes are watched to monitor for deletions. - apiGroups: [""] resources: - nodes verbs: - watch - list - get # Pods are queried to check for existence. - apiGroups: [""] resources: - pods verbs: - get # IPAM resources are manipulated when nodes are deleted. - apiGroups: ["crd.projectcalico.org"] resources: - ippools verbs: - list - apiGroups: ["crd.projectcalico.org"] resources: - blockaffinities - ipamblocks - ipamhandles verbs: - get - list - create - update - delete # Needs access to update clusterinformations. - apiGroups: ["crd.projectcalico.org"] resources: - clusterinformations verbs: - get - create - update --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: calico-kube-controllers roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: calico-kube-controllers subjects: - kind: ServiceAccount name: calico-kube-controllers namespace: kube-system --- # Include a clusterrole for the calico-node DaemonSet, # and bind it to the calico-node serviceaccount. kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: calico-node rules: # The CNI plugin needs to get pods, nodes, and namespaces. - apiGroups: [""] resources: - pods - nodes - namespaces verbs: - get - apiGroups: [""] resources: - endpoints - services verbs: # Used to discover service IPs for advertisement. - watch - list # Used to discover Typhas. - get - apiGroups: [""] resources: - nodes/status verbs: # Needed for clearing NodeNetworkUnavailable flag. - patch # Calico stores some configuration information in node annotations. - update # Watch for changes to Kubernetes NetworkPolicies. - apiGroups: ["networking.k8s.io"] resources: - networkpolicies verbs: - watch - list # Used by Calico for policy information. - apiGroups: [""] resources: - pods - namespaces - serviceaccounts verbs: - list - watch # The CNI plugin patches pods/status. - apiGroups: [""] resources: - pods/status verbs: - patch # Calico monitors various CRDs for config. 
- apiGroups: ["crd.projectcalico.org"] resources: - globalfelixconfigs - felixconfigurations - ippools - ipamblocks - globalnetworkpolicies - globalnetworksets - networkpolicies - networksets - clusterinformations - hostendpoints - blockaffinities verbs: - get - list - watch # Calico must create and update some CRDs on startup. - apiGroups: ["crd.projectcalico.org"] resources: - ippools - felixconfigurations - clusterinformations verbs: - create - update # Calico stores some configuration information on the node. - apiGroups: [""] resources: - nodes verbs: - get - list - watch # These permissions are required for Calico CNI to perform IPAM allocations. - apiGroups: ["crd.projectcalico.org"] resources: - blockaffinities - ipamblocks - ipamhandles verbs: - get - list - create - update - delete - apiGroups: ["crd.projectcalico.org"] resources: - ipamconfigs verbs: - get # Block affinities must also be watchable by confd for route aggregation. - apiGroups: ["crd.projectcalico.org"] resources: - blockaffinities verbs: - watch # The Calico IPAM migration needs to get daemonsets. These permissions can be # removed if not upgrading from an installation using host-local IPAM. - apiGroups: ["apps"] resources: - daemonsets verbs: - get --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: calico-node roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: calico-node subjects: - kind: ServiceAccount name: calico-node namespace: kube-system --- # Source: calico/templates/calico-node.yaml # This manifest installs the calico-node container, as well # as the CNI plugins and network config on # each master and worker node in a Kubernetes cluster. kind: DaemonSet apiVersion: apps/v1 metadata: name: calico-node namespace: kube-system labels: k8s-app: calico-node spec: selector: matchLabels: k8s-app: calico-node updateStrategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 template: metadata: labels: k8s-app: calico-node annotations: # This, along with the CriticalAddonsOnly toleration below, # marks the pod as a critical add-on, ensuring it gets # priority scheduling and that its resources are reserved # if it ever gets evicted. scheduler.alpha.kubernetes.io/critical-pod: '' spec: nodeSelector: beta.kubernetes.io/os: linux hostNetwork: true tolerations: # Make sure calico-node gets scheduled on all nodes. - effect: NoSchedule operator: Exists # Mark the pod as a critical add-on for rescheduling. - key: CriticalAddonsOnly operator: Exists - effect: NoExecute operator: Exists serviceAccountName: calico-node # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods. terminationGracePeriodSeconds: 0 priorityClassName: system-node-critical initContainers: # This container performs upgrade from host-local IPAM to calico-ipam. # It can be deleted if this is a fresh installation, or if you have already # upgraded to use calico-ipam. - name: upgrade-ipam image: calico/cni:v3.11.3 command: ["/opt/cni/bin/calico-ipam", "-upgrade"] env: - name: KUBERNETES_NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: CALICO_NETWORKING_BACKEND valueFrom: configMapKeyRef: name: calico-config key: calico_backend volumeMounts: - mountPath: /var/lib/cni/networks name: host-local-net-dir - mountPath: /host/opt/cni/bin name: cni-bin-dir securityContext: privileged: true # This container installs the CNI binaries # and CNI network config file on each node. 
- name: install-cni image: calico/cni:v3.11.3 command: ["/install-cni.sh"] env: # Name of the CNI config file to create. - name: CNI_CONF_NAME value: "10-calico.conflist" # The CNI network config to install on each node. - name: CNI_NETWORK_CONFIG valueFrom: configMapKeyRef: name: calico-config key: cni_network_config # Set the hostname based on the k8s node name. - name: KUBERNETES_NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName # CNI MTU Config variable - name: CNI_MTU valueFrom: configMapKeyRef: name: calico-config key: veth_mtu # Prevents the container from sleeping forever. - name: SLEEP value: "false" volumeMounts: - mountPath: /host/opt/cni/bin name: cni-bin-dir - mountPath: /host/etc/cni/net.d name: cni-net-dir securityContext: privileged: true # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes # to communicate with Felix over the Policy Sync API. - name: flexvol-driver image: calico/pod2daemon-flexvol:v3.11.3 volumeMounts: - name: flexvol-driver-host mountPath: /host/driver securityContext: privileged: true containers: # Runs calico-node container on each Kubernetes node. This # container programs network policy and routes on each # host. - name: calico-node image: calico/node:v3.11.3 env: # Use Kubernetes API as the backing datastore. - name: DATASTORE_TYPE value: "kubernetes" # Wait for the datastore. - name: WAIT_FOR_DATASTORE value: "true" # Set based on the k8s node name. - name: NODENAME valueFrom: fieldRef: fieldPath: spec.nodeName # Choose the backend to use. - name: CALICO_NETWORKING_BACKEND valueFrom: configMapKeyRef: name: calico-config key: calico_backend # Cluster type to identify the deployment type - name: CLUSTER_TYPE value: "k8s" # Enable IPIP - name: CALICO_IPV4POOL_VXLAN value: "Always" - name: CALICO_IPV6POOL_VXLAN value: "Always" # Set MTU for tunnel device used if ipip is enabled - name: FELIX_VXLAN valueFrom: configMapKeyRef: name: calico-config key: veth_mtu # The default IPv4 pool to create on startup if none exists. Pod IPs will be # chosen from this range. Changing this value after installation will have # no effect. This should fall within `--cluster-cidr`. - name: CALICO_IPV4POOL_CIDR value: "192.168.128.1/18" - name: CALICO_IPV6POOL_CIDR value: "fd00::/80" # Disable file logging so `kubectl logs` works. - name: CALICO_DISABLE_FILE_LOGGING value: "true" # Set Felix endpoint to host default action to ACCEPT. - name: FELIX_DEFAULTENDPOINTTOHOSTACTION value: "ACCEPT" # Disable IPv6 on Kubernetes. - name: FELIX_IPV6SUPPORT value: "true" # Set Felix logging to "info" - name: FELIX_LOGSEVERITYSCREEN value: "info" - name: FELIX_HEALTHENABLED value: "true" securityContext: privileged: true resources: requests: cpu: 250m livenessProbe: exec: command: - /bin/calico-node - -felix-live #- -bird-live periodSeconds: 10 initialDelaySeconds: 10 failureThreshold: 6 readinessProbe: exec: command: - /bin/calico-node - -felix-ready #- -bird-ready periodSeconds: 10 volumeMounts: - mountPath: /lib/modules name: lib-modules readOnly: true - mountPath: /run/xtables.lock name: xtables-lock readOnly: false - mountPath: /var/run/calico name: var-run-calico readOnly: false - mountPath: /var/lib/calico name: var-lib-calico readOnly: false - name: policysync mountPath: /var/run/nodeagent volumes: # Used by calico-node. 
- name: lib-modules hostPath: path: /lib/modules - name: var-run-calico hostPath: path: /var/run/calico - name: var-lib-calico hostPath: path: /var/lib/calico - name: xtables-lock hostPath: path: /run/xtables.lock type: FileOrCreate # Used to install CNI. - name: cni-bin-dir hostPath: path: /opt/cni/bin - name: cni-net-dir hostPath: path: /etc/cni/net.d # Mount in the directory for host-local IPAM allocations. This is # used when upgrading from host-local to calico-ipam, and can be removed # if not using the upgrade-ipam init container. - name: host-local-net-dir hostPath: path: /var/lib/cni/networks # Used to create per-pod Unix Domain Sockets - name: policysync hostPath: type: DirectoryOrCreate path: /var/run/nodeagent # Used to install Flex Volume Driver - name: flexvol-driver-host hostPath: type: DirectoryOrCreate path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds --- apiVersion: v1 kind: ServiceAccount metadata: name: calico-node namespace: kube-system --- # Source: calico/templates/calico-kube-controllers.yaml # See https://github.com/projectcalico/kube-controllers apiVersion: apps/v1 kind: Deployment metadata: name: calico-kube-controllers namespace: kube-system labels: k8s-app: calico-kube-controllers spec: # The controllers can only have a single active instance. replicas: 1 selector: matchLabels: k8s-app: calico-kube-controllers strategy: type: Recreate template: metadata: name: calico-kube-controllers namespace: kube-system labels: k8s-app: calico-kube-controllers annotations: scheduler.alpha.kubernetes.io/critical-pod: '' spec: nodeSelector: beta.kubernetes.io/os: linux tolerations: # Mark the pod as a critical add-on for rescheduling. - key: CriticalAddonsOnly operator: Exists - key: node-role.kubernetes.io/master effect: NoSchedule serviceAccountName: calico-kube-controllers priorityClassName: system-cluster-critical containers: - name: calico-kube-controllers image: calico/kube-controllers:v3.11.3 env: # Choose which controllers to run. 
- name: ENABLED_CONTROLLERS value: node - name: DATASTORE_TYPE value: kubernetes readinessProbe: exec: command: - /usr/bin/check-status - -r --- apiVersion: v1 kind: ServiceAccount metadata: name: calico-kube-controllers namespace: kube-system --- # Source: calico/templates/calico-etcd-secrets.yaml --- # Source: calico/templates/calico-typha.yaml --- # Source: calico/templates/configure-canal.yaml also i saw that all calico interfaces on host got same ipv6 fe80::ecee:eeff:feee:eeee/64 which is default ipv6 gateway for all pods also observed when enabled vxlan it was not adding routes vxlan.calico Link encap:Ethernet HWaddr 66:6a:cb:79:4e:d7 inet addr:192.168.191.64 Bcast:192.168.191.64 Mask:255.255.255.255 inet6 addr: fe80::646a:cbff:fe79:4ed7/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1410 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:125 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Destination Gateway Genmask Flags Metric Ref Use Iface default _gateway 0.0.0.0 UG 0 0 0 eth0 10.243.0.0 * 255.255.224.0 U 0 0 0 eth0 192.168.0.0 * 255.255.192.0 U 0 0 0 eth0 192.168.191.70 * 255.255.255.255 UH 0 0 0 cali1c1c9f58b42 192.168.191.71 * 255.255.255.255 UH 0 0 0 cali2fc31b4251c 192.168.191.72 * 255.255.255.255 UH 0 0 0 cali5be518cf856 192.168.191.73 * 255.255.255.255 UH 0 0 0 cali5e3a68f7b5f 192.168.191.74 * 255.255.255.255 UH 0 0 0 cali59474b079db ### Response: We have faced the same issue we were able to reach through ipv4 and not working with ipv6. We did the following, # Reset the Cluster kubeadm reset # Create the cluster by adding the ipv6 CIDR along with the ipv4 address also enable IPv6DualStack # NOTE: The ipv6 given below matches the first 4 octet with Host ipv6 address ie., fde1 kubeadm init --kubernetes-version=v1.21.1 --apiserver-advertise-address=172.16.2.1 --feature-gates="IPv6DualStack=true" --pod-network-cidr=172.16.0.0/24,fde1::/64 --service-cidr=172.16.1.0/24,fde1::/112 By now we will have the ipv6 address seen for interfaces in all our PODs starting with "fde1:" Even after configuring the above, we were not able to ping the external ipv6 address. Now we found that the packet reaches the target machine and comes back to the k8s host machine, but didn't reach the respective POD. Now we found that we were missing some ip6tables rules. we enables the natOuting to true as below, # cat /etc/calico/default-ipv6-ippool.yaml apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: default-ipv6-ippool spec: blockSize: 122 cidr: fde1::/64 ipipMode: Never nodeSelector: all() natOutgoing: true vxlanMode: Never # calicoctl apply -f /etc/calico/default-ipv6-ippool.yaml Now above created the entry in ip6tables and we can able to ping the external IPv6
Below is the manifest file i used to enable calico CNI for k8s, pods are able to communicate over ipv4 but i am unable to reach outside using ipv6, k8s version v1.14 and calico version v3.11, am i missing some settings, forwarding is enabled on host with "sysctl -w net.ipv6.conf.all.forwarding=1" --- # Source: calico/templates/calico-config.yaml # This ConfigMap is used to configure a self-hosted Calico installation. kind: ConfigMap apiVersion: v1 metadata: name: calico-config namespace: kube-system data: # Typha is disabled. typha_service_name: "none" # Configure the backend to use. calico_backend: "vxlan" # Configure the MTU to use veth_mtu: "1440" # The CNI network configuration to install on each node. The special # values in this config will be automatically populated. cni_network_config: |- { "name": "k8s-pod-network", "cniVersion": "0.3.1", "plugins": [ { "type": "calico", "log_level": "info", "datastore_type": "kubernetes", "nodename": "__KUBERNETES_NODE_NAME__", "mtu": __CNI_MTU__, "ipam": { "type": "calico-ipam", "assign_ipv4": "true", "assign_ipv6": "true" }, "container_settings": { "allow_ip_forwarding": true }, "policy": { "type": "k8s" }, "kubernetes": { "kubeconfig": "__KUBECONFIG_FILEPATH__" } }, { "type": "portmap", "snat": true, "capabilities": {"portMappings": true} } ] } --- # Source: calico/templates/kdd-crds.yaml apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: felixconfigurations.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: FelixConfiguration plural: felixconfigurations singular: felixconfiguration --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: ipamblocks.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: IPAMBlock plural: ipamblocks singular: ipamblock --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: blockaffinities.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: BlockAffinity plural: blockaffinities singular: blockaffinity --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: ipamhandles.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: IPAMHandle plural: ipamhandles singular: ipamhandle --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: ipamconfigs.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: IPAMConfig plural: ipamconfigs singular: ipamconfig --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: ippools.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: IPPool plural: ippools singular: ippool --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: hostendpoints.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: HostEndpoint plural: hostendpoints singular: hostendpoint --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: clusterinformations.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: ClusterInformation plural: clusterinformations singular: clusterinformation --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: 
globalnetworkpolicies.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: GlobalNetworkPolicy plural: globalnetworkpolicies singular: globalnetworkpolicy --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: globalnetworksets.crd.projectcalico.org spec: scope: Cluster group: crd.projectcalico.org version: v1 names: kind: GlobalNetworkSet plural: globalnetworksets singular: globalnetworkset --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: networkpolicies.crd.projectcalico.org spec: scope: Namespaced group: crd.projectcalico.org version: v1 names: kind: NetworkPolicy plural: networkpolicies singular: networkpolicy --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: networksets.crd.projectcalico.org spec: scope: Namespaced group: crd.projectcalico.org version: v1 names: kind: NetworkSet plural: networksets singular: networkset --- # Source: calico/templates/rbac.yaml # Include a clusterrole for the kube-controllers component, # and bind it to the calico-kube-controllers serviceaccount. kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: calico-kube-controllers rules: # Nodes are watched to monitor for deletions. - apiGroups: [""] resources: - nodes verbs: - watch - list - get # Pods are queried to check for existence. - apiGroups: [""] resources: - pods verbs: - get # IPAM resources are manipulated when nodes are deleted. - apiGroups: ["crd.projectcalico.org"] resources: - ippools verbs: - list - apiGroups: ["crd.projectcalico.org"] resources: - blockaffinities - ipamblocks - ipamhandles verbs: - get - list - create - update - delete # Needs access to update clusterinformations. - apiGroups: ["crd.projectcalico.org"] resources: - clusterinformations verbs: - get - create - update --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: calico-kube-controllers roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: calico-kube-controllers subjects: - kind: ServiceAccount name: calico-kube-controllers namespace: kube-system --- # Include a clusterrole for the calico-node DaemonSet, # and bind it to the calico-node serviceaccount. kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: calico-node rules: # The CNI plugin needs to get pods, nodes, and namespaces. - apiGroups: [""] resources: - pods - nodes - namespaces verbs: - get - apiGroups: [""] resources: - endpoints - services verbs: # Used to discover service IPs for advertisement. - watch - list # Used to discover Typhas. - get - apiGroups: [""] resources: - nodes/status verbs: # Needed for clearing NodeNetworkUnavailable flag. - patch # Calico stores some configuration information in node annotations. - update # Watch for changes to Kubernetes NetworkPolicies. - apiGroups: ["networking.k8s.io"] resources: - networkpolicies verbs: - watch - list # Used by Calico for policy information. - apiGroups: [""] resources: - pods - namespaces - serviceaccounts verbs: - list - watch # The CNI plugin patches pods/status. - apiGroups: [""] resources: - pods/status verbs: - patch # Calico monitors various CRDs for config. 
- apiGroups: ["crd.projectcalico.org"] resources: - globalfelixconfigs - felixconfigurations - ippools - ipamblocks - globalnetworkpolicies - globalnetworksets - networkpolicies - networksets - clusterinformations - hostendpoints - blockaffinities verbs: - get - list - watch # Calico must create and update some CRDs on startup. - apiGroups: ["crd.projectcalico.org"] resources: - ippools - felixconfigurations - clusterinformations verbs: - create - update # Calico stores some configuration information on the node. - apiGroups: [""] resources: - nodes verbs: - get - list - watch # These permissions are required for Calico CNI to perform IPAM allocations. - apiGroups: ["crd.projectcalico.org"] resources: - blockaffinities - ipamblocks - ipamhandles verbs: - get - list - create - update - delete - apiGroups: ["crd.projectcalico.org"] resources: - ipamconfigs verbs: - get # Block affinities must also be watchable by confd for route aggregation. - apiGroups: ["crd.projectcalico.org"] resources: - blockaffinities verbs: - watch # The Calico IPAM migration needs to get daemonsets. These permissions can be # removed if not upgrading from an installation using host-local IPAM. - apiGroups: ["apps"] resources: - daemonsets verbs: - get --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: calico-node roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: calico-node subjects: - kind: ServiceAccount name: calico-node namespace: kube-system --- # Source: calico/templates/calico-node.yaml # This manifest installs the calico-node container, as well # as the CNI plugins and network config on # each master and worker node in a Kubernetes cluster. kind: DaemonSet apiVersion: apps/v1 metadata: name: calico-node namespace: kube-system labels: k8s-app: calico-node spec: selector: matchLabels: k8s-app: calico-node updateStrategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 template: metadata: labels: k8s-app: calico-node annotations: # This, along with the CriticalAddonsOnly toleration below, # marks the pod as a critical add-on, ensuring it gets # priority scheduling and that its resources are reserved # if it ever gets evicted. scheduler.alpha.kubernetes.io/critical-pod: '' spec: nodeSelector: beta.kubernetes.io/os: linux hostNetwork: true tolerations: # Make sure calico-node gets scheduled on all nodes. - effect: NoSchedule operator: Exists # Mark the pod as a critical add-on for rescheduling. - key: CriticalAddonsOnly operator: Exists - effect: NoExecute operator: Exists serviceAccountName: calico-node # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods. terminationGracePeriodSeconds: 0 priorityClassName: system-node-critical initContainers: # This container performs upgrade from host-local IPAM to calico-ipam. # It can be deleted if this is a fresh installation, or if you have already # upgraded to use calico-ipam. - name: upgrade-ipam image: calico/cni:v3.11.3 command: ["/opt/cni/bin/calico-ipam", "-upgrade"] env: - name: KUBERNETES_NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName - name: CALICO_NETWORKING_BACKEND valueFrom: configMapKeyRef: name: calico-config key: calico_backend volumeMounts: - mountPath: /var/lib/cni/networks name: host-local-net-dir - mountPath: /host/opt/cni/bin name: cni-bin-dir securityContext: privileged: true # This container installs the CNI binaries # and CNI network config file on each node. 
- name: install-cni image: calico/cni:v3.11.3 command: ["/install-cni.sh"] env: # Name of the CNI config file to create. - name: CNI_CONF_NAME value: "10-calico.conflist" # The CNI network config to install on each node. - name: CNI_NETWORK_CONFIG valueFrom: configMapKeyRef: name: calico-config key: cni_network_config # Set the hostname based on the k8s node name. - name: KUBERNETES_NODE_NAME valueFrom: fieldRef: fieldPath: spec.nodeName # CNI MTU Config variable - name: CNI_MTU valueFrom: configMapKeyRef: name: calico-config key: veth_mtu # Prevents the container from sleeping forever. - name: SLEEP value: "false" volumeMounts: - mountPath: /host/opt/cni/bin name: cni-bin-dir - mountPath: /host/etc/cni/net.d name: cni-net-dir securityContext: privileged: true # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes # to communicate with Felix over the Policy Sync API. - name: flexvol-driver image: calico/pod2daemon-flexvol:v3.11.3 volumeMounts: - name: flexvol-driver-host mountPath: /host/driver securityContext: privileged: true containers: # Runs calico-node container on each Kubernetes node. This # container programs network policy and routes on each # host. - name: calico-node image: calico/node:v3.11.3 env: # Use Kubernetes API as the backing datastore. - name: DATASTORE_TYPE value: "kubernetes" # Wait for the datastore. - name: WAIT_FOR_DATASTORE value: "true" # Set based on the k8s node name. - name: NODENAME valueFrom: fieldRef: fieldPath: spec.nodeName # Choose the backend to use. - name: CALICO_NETWORKING_BACKEND valueFrom: configMapKeyRef: name: calico-config key: calico_backend # Cluster type to identify the deployment type - name: CLUSTER_TYPE value: "k8s" # Enable IPIP - name: CALICO_IPV4POOL_VXLAN value: "Always" - name: CALICO_IPV6POOL_VXLAN value: "Always" # Set MTU for tunnel device used if ipip is enabled - name: FELIX_VXLAN valueFrom: configMapKeyRef: name: calico-config key: veth_mtu # The default IPv4 pool to create on startup if none exists. Pod IPs will be # chosen from this range. Changing this value after installation will have # no effect. This should fall within `--cluster-cidr`. - name: CALICO_IPV4POOL_CIDR value: "192.168.128.1/18" - name: CALICO_IPV6POOL_CIDR value: "fd00::/80" # Disable file logging so `kubectl logs` works. - name: CALICO_DISABLE_FILE_LOGGING value: "true" # Set Felix endpoint to host default action to ACCEPT. - name: FELIX_DEFAULTENDPOINTTOHOSTACTION value: "ACCEPT" # Disable IPv6 on Kubernetes. - name: FELIX_IPV6SUPPORT value: "true" # Set Felix logging to "info" - name: FELIX_LOGSEVERITYSCREEN value: "info" - name: FELIX_HEALTHENABLED value: "true" securityContext: privileged: true resources: requests: cpu: 250m livenessProbe: exec: command: - /bin/calico-node - -felix-live #- -bird-live periodSeconds: 10 initialDelaySeconds: 10 failureThreshold: 6 readinessProbe: exec: command: - /bin/calico-node - -felix-ready #- -bird-ready periodSeconds: 10 volumeMounts: - mountPath: /lib/modules name: lib-modules readOnly: true - mountPath: /run/xtables.lock name: xtables-lock readOnly: false - mountPath: /var/run/calico name: var-run-calico readOnly: false - mountPath: /var/lib/calico name: var-lib-calico readOnly: false - name: policysync mountPath: /var/run/nodeagent volumes: # Used by calico-node. 
- name: lib-modules hostPath: path: /lib/modules - name: var-run-calico hostPath: path: /var/run/calico - name: var-lib-calico hostPath: path: /var/lib/calico - name: xtables-lock hostPath: path: /run/xtables.lock type: FileOrCreate # Used to install CNI. - name: cni-bin-dir hostPath: path: /opt/cni/bin - name: cni-net-dir hostPath: path: /etc/cni/net.d # Mount in the directory for host-local IPAM allocations. This is # used when upgrading from host-local to calico-ipam, and can be removed # if not using the upgrade-ipam init container. - name: host-local-net-dir hostPath: path: /var/lib/cni/networks # Used to create per-pod Unix Domain Sockets - name: policysync hostPath: type: DirectoryOrCreate path: /var/run/nodeagent # Used to install Flex Volume Driver - name: flexvol-driver-host hostPath: type: DirectoryOrCreate path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds --- apiVersion: v1 kind: ServiceAccount metadata: name: calico-node namespace: kube-system --- # Source: calico/templates/calico-kube-controllers.yaml # See https://github.com/projectcalico/kube-controllers apiVersion: apps/v1 kind: Deployment metadata: name: calico-kube-controllers namespace: kube-system labels: k8s-app: calico-kube-controllers spec: # The controllers can only have a single active instance. replicas: 1 selector: matchLabels: k8s-app: calico-kube-controllers strategy: type: Recreate template: metadata: name: calico-kube-controllers namespace: kube-system labels: k8s-app: calico-kube-controllers annotations: scheduler.alpha.kubernetes.io/critical-pod: '' spec: nodeSelector: beta.kubernetes.io/os: linux tolerations: # Mark the pod as a critical add-on for rescheduling. - key: CriticalAddonsOnly operator: Exists - key: node-role.kubernetes.io/master effect: NoSchedule serviceAccountName: calico-kube-controllers priorityClassName: system-cluster-critical containers: - name: calico-kube-controllers image: calico/kube-controllers:v3.11.3 env: # Choose which controllers to run. - name: ENABLED_CONTROLLERS value: node - name: DATASTORE_TYPE value: kubernetes readinessProbe: exec: command: - /usr/bin/check-status - -r --- apiVersion: v1 kind: ServiceAccount metadata: name: calico-kube-controllers namespace: kube-system --- # Source: calico/templates/calico-etcd-secrets.yaml --- # Source: calico/templates/calico-typha.yaml --- # Source: calico/templates/configure-canal.yaml also i saw that all calico interfaces on host got same ipv6 fe80::ecee:eeff:feee:eeee/64 which is default ipv6 gateway for all pods also observed when enabled vxlan it was not adding routes vxlan.calico Link encap:Ethernet HWaddr 66:6a:cb:79:4e:d7 inet addr:192.168.191.64 Bcast:192.168.191.64 Mask:255.255.255.255 inet6 addr: fe80::646a:cbff:fe79:4ed7/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1410 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:125 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) Destination Gateway Genmask Flags Metric Ref Use Iface default _gateway 0.0.0.0 UG 0 0 0 eth0 10.243.0.0 * 255.255.224.0 U 0 0 0 eth0 192.168.0.0 * 255.255.192.0 U 0 0 0 eth0 192.168.191.70 * 255.255.255.255 UH 0 0 0 cali1c1c9f58b42 192.168.191.71 * 255.255.255.255 UH 0 0 0 cali2fc31b4251c 192.168.191.72 * 255.255.255.255 UH 0 0 0 cali5be518cf856 192.168.191.73 * 255.255.255.255 UH 0 0 0 cali5e3a68f7b5f 192.168.191.74 * 255.255.255.255 UH 0 0 0 cali59474b079db
We have been running multiple databases on K8s, such as Elasticsearch, Redis and some message brokers, so you can definitely run them in production. I would suggest going with an operator-managed database deployment. In production you will be expecting HA and data replication, which you may not know in depth; in simple words, the operator manages the deployment for you and performs those operations as needed. Here is the documentation to follow for the Postgres operator; it has the details on how the CRD, StatefulSets and volumes work together. Zalando postgres-operator: https://github.com/zalando/postgres-operator Comparison of available operators: https://blog.palark.com/comparing-kubernetes-operators-for-postgresql/
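To make the operator approach more concrete, below is a rough sketch of the kind of custom resource the Zalando postgres-operator consumes once it is installed. The cluster name, team, user, database and sizes are made-up placeholder values, and the exact schema should be checked against the operator docs for your version:

apiVersion: "acid.zalan.do/v1"
kind: postgresql
metadata:
  name: acid-minimal-cluster      # placeholder; the name is expected to start with the teamId
spec:
  teamId: "acid"                  # placeholder team
  numberOfInstances: 2            # one primary plus one replica for HA
  volume:
    size: 10Gi                    # persistent volume size per instance
  users:
    appuser: []                   # application role the operator will create
  databases:
    appdb: appuser                # database "appdb" owned by "appuser"
  postgresql:
    version: "15"

From this single object the operator creates the StatefulSet, services, secrets and streaming replication, which is exactly the production day-2 work that is hard to get right with plain manifests.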
###Instruction: Good afternoon! I am relatively new to Kubernetes. I would like to ask for your help in knowledge! What is the best way to deploy Postgresql or MySql in kubernetes for production? Suggest articles or resources, I would be very grateful. I only see examples and information on how to deploy Postgresql dev, very little information on how to set up a production environment. Thank you!!! ### Response: We have been running multiple databases on K8s, such as Elasticsearch, Redis and some message brokers, so you can definitely run them in production. I would suggest going with an operator-managed database deployment. In production you will be expecting HA and data replication, which you may not know in depth; in simple words, the operator manages the deployment for you and performs those operations as needed. Here is the documentation to follow for the Postgres operator; it has the details on how the CRD, StatefulSets and volumes work together. Zalando postgres-operator: https://github.com/zalando/postgres-operator Comparison of available operators: https://blog.palark.com/comparing-kubernetes-operators-for-postgresql/
Good afternoon! I am relatively new to Kubernetes. I would like to ask for your help in knowledge! What is the best way to deploy Postgresql or MySql in kubernetes for production? Suggest articles or resources, I would be very grateful. I only see examples and information on how to deploy Postgresql dev, very little information on how to set up a production environment. Thank you!!!
You configured the NodePort of 30092 for your kafka broker so to be able to connect to it you should use <node-ip>:30092 as your bootstrap server address. If you're running minikube, you can find the node ip with the minikube ip command. (Although you mentioned that the connection to your zookeeper at localhost:30181 was successful so you might as well try localhost:30092 for your kafka.) Also, you don't need to port forward via kubectl in this case since NodePort is exposed to the outside world anyway. To debug the connection problems with Offset Explorer, enable debug logging in its settings, make an attempt to connect to your broker and check the tool's debug logs for the details of the error. The logs should be located in the same directory as the tool executable if you're running on Windows.
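As a quick way to find the node address and to check that the NodePort is reachable (the node IP is a placeholder, and the last command assumes you have the standard Kafka CLI tools installed locally):

# show node IPs; use the INTERNAL-IP or EXTERNAL-IP column
kubectl get nodes -o wide

# on minikube the node IP is simply
minikube ip

# verify the port is open at all
nc -vz <node-ip> 30092

# ask the broker for its API versions through the NodePort
kafka-broker-api-versions.sh --bootstrap-server <node-ip>:30092

If these work but Offset Explorer still fails, its debug logs mentioned above should show at which step the connection breaks.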
###Instruction: I'm trying to deploy kafka on local k8s, then I need to connect to it by application and using offset explorer so, using kubectl I created zookeeper service and deployment using this yml file apiVersion: v1 kind: Service metadata: labels: app: zookeeper-service name: zookeeper-service spec: type: NodePort ports: - name: zookeeper-port port: 2181 nodePort: 30091 targetPort: 2181 selector: app: zookeeper --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: zookeeper name: zookeeper spec: replicas: 1 selector: matchLabels: app: zookeeper template: metadata: labels: app: zookeeper spec: containers: - image: bitnami/zookeeper imagePullPolicy: IfNotPresent name: zookeeper ports: - containerPort: 2181 env: - name: ALLOW_PLAINTEXT_LISTENER value: "yes" - name: ALLOW_ANONYMOUS_LOGIN value: "yes" Then, I created kafka service and deployment using this yml apiVersion: v1 kind: Service metadata: labels: app: kafka-service name: kafka-service spec: type: NodePort ports: - name: kafka-port port: 9092 nodePort: 30092 targetPort: 9092 selector: app: kafka-broker --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: kafka-broker name: kafka-broker spec: replicas: 1 selector: matchLabels: app: kafka-broker template: metadata: labels: app: kafka-broker spec: hostname: kafka-broker containers: - image: bitnami/kafka imagePullPolicy: IfNotPresent name: kafka-broker ports: - containerPort: 9092 env: - name: KAFKA_BROKER_ID value: "1" - name: KAFKA_ZOOKEEPER_CONNECT value: "zookeeper-service:2181" - name: KAFKA_LISTENERS value: PLAINTEXT://localhost:9092 - name: KAFKA_ADVERTISED_LISTENERS value: PLAINTEXT://localhost:9092 # Creates a topic with one partition and one replica. - name: KAFKA_CREATE_TOPICS value: "bomc:1:1" - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP - name: ALLOW_PLAINTEXT_LISTENER value: "yes" And both services and deployment created and running And I have ingress for this services apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: nginx-ingress annotations: kubernetes.io/ingress.class: "nginx" spec: rules: - http: paths: - path: /health pathType: Prefix backend: service: name: health-app-service port: number: 80 - path: /actuator pathType: Prefix backend: service: name: health-app-service port: number: 80 - path: /jsonrpc pathType: Prefix backend: service: name: core-service port: number: 80 - path: / pathType: Prefix backend: service: name: kafka-service # Название вашего Kafka-сервиса port: number: 9092 # Название порта, используемого для Kafka - path: / pathType: Prefix backend: service: name: kafka-service # Название вашего Kafka-сервиса port: number: 30092 # Название порта, используемого для Kafka - path: / pathType: Prefix backend: service: name: kafka-service # Название вашего Kafka-сервиса port: name: kafka-port # Название порта, используемого для Kafka - path: / pathType: Prefix backend: service: name: zookeeper-service port: name: zookeeper-port but, when I try to connect to this kafka using offset key tool, there is error connection. 
When I use localhost:30092 like a bootstrap server - error with logs: 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - Starting application : Offset Explorer 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - Version : 2.3 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - Built : Jun 30, 2022 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - user.home : C:\Users\Roberto 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - user.dir : C:\Program Files\OffsetExplorer2 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - os.name : Windows 10 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - java.runtime.version : 1.8.0_232-b09 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - max memory=3586 MB 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - available processors=8 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - java.security.auth.login.config=null 12/мар/2023 22:32:46.121 INFO com.kafkatool.common.ExternalDecoderManager - Finding plugins in directory C:\Program Files\OffsetExplorer2\plugins 12/мар/2023 22:32:46.121 INFO com.kafkatool.common.ExternalDecoderManager - Found files in plugin directory, count=1 12/мар/2023 22:32:46.121 INFO com.kafkatool.ui.MainApp - Loading user settings 12/мар/2023 22:32:46.153 INFO com.kafkatool.ui.MainApp - Loading server group settings 12/мар/2023 22:32:46.153 INFO com.kafkatool.ui.MainApp - Loading server connection settings 12/мар/2023 22:32:50.103 INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: bootstrap.servers = [localhost:30092] client.dns.lookup = default client.id = connections.max.idle.ms = 300000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 120000 retries = 5 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS 12/мар/2023 22:32:50.126 DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=adminclient-1] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:30092 (id: -1 rack: null)], partitions = [], controller = null). 
12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-authentication: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-reauthentication: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-authentication-no-reauth: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name failed-authentication: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name failed-reauthentication: 12/мар/2023 22:32:50.198 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name reauthentication-latency: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time: 12/мар/2023 22:32:50.204 WARN org.apache.kafka.clients.admin.AdminClientConfig - The configuration 'group.id' was supplied but isn't a known config. 12/мар/2023 22:32:50.204 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.4.0 12/мар/2023 22:32:50.204 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 77a89fcf8d7fa018 12/мар/2023 22:32:50.204 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1678649570204 12/мар/2023 22:32:50.214 DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=adminclient-1] Kafka admin client initialized 12/мар/2023 22:32:50.215 DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=adminclient-1] Queueing Call(callName=listNodes, deadlineMs=1678649690215) with a timeout 120000 ms from now. 12/мар/2023 22:32:50.215 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating connection to node localhost:30092 (id: -1 rack: null) using address localhost/127.0.0.1 12/мар/2023 22:32:50.228 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent 12/мар/2023 22:32:50.230 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received 12/мар/2023 22:32:50.232 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.latency 12/мар/2023 22:32:50.232 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 12/мар/2023 22:32:50.320 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Completed connection to node -1. Fetching API versions. 12/мар/2023 22:32:50.320 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -1. 
12/мар/2023 22:32:50.376 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected java.io.EOFException at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) at org.apache.kafka.common.network.Selector.poll(Selector.java:483) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:540) at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1196) at java.lang.Thread.run(Thread.java:748) 12/мар/2023 22:33:11.787 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node -1 disconnected. 12/мар/2023 22:33:12.766 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating connection to node localhost:30092 (id: -1 rack: null) using address localhost/127.0.0.1 12/мар/2023 22:33:12.767 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 12/мар/2023 22:33:12.767 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Completed connection to node -1. Fetching API versions. 12/мар/2023 22:33:12.767 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -1. 12/мар/2023 22:33:12.768 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected java.io.EOFException at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) at org.apache.kafka.common.network.Selector.poll(Selector.java:483) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:540) at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1196) at java.lang.Thread.run(Thread.java:748) ### Response: You configured the NodePort of 30092 for your kafka broker so to be able to connect to it you should use <node-ip>:30092 as your bootstrap server address. If you're running minikube, you can find the node ip with the minikube ip command. (Although you mentioned that the connection to your zookeeper at localhost:30181 was successful so you might as well try localhost:30092 for your kafka.) Also, you don't need to port forward via kubectl in this case since NodePort is exposed to the outside world anyway. To debug the connection problems with Offset Explorer, enable debug logging in its settings, make an attempt to connect to your broker and check the tool's debug logs for the details of the error. The logs should be located in the same directory as the tool executable if you're running on Windows.
I'm trying to deploy kafka on local k8s, then I need to connect to it by application and using offset explorer so, using kubectl I created zookeeper service and deployment using this yml file apiVersion: v1 kind: Service metadata: labels: app: zookeeper-service name: zookeeper-service spec: type: NodePort ports: - name: zookeeper-port port: 2181 nodePort: 30091 targetPort: 2181 selector: app: zookeeper --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: zookeeper name: zookeeper spec: replicas: 1 selector: matchLabels: app: zookeeper template: metadata: labels: app: zookeeper spec: containers: - image: bitnami/zookeeper imagePullPolicy: IfNotPresent name: zookeeper ports: - containerPort: 2181 env: - name: ALLOW_PLAINTEXT_LISTENER value: "yes" - name: ALLOW_ANONYMOUS_LOGIN value: "yes" Then, I created kafka service and deployment using this yml apiVersion: v1 kind: Service metadata: labels: app: kafka-service name: kafka-service spec: type: NodePort ports: - name: kafka-port port: 9092 nodePort: 30092 targetPort: 9092 selector: app: kafka-broker --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: kafka-broker name: kafka-broker spec: replicas: 1 selector: matchLabels: app: kafka-broker template: metadata: labels: app: kafka-broker spec: hostname: kafka-broker containers: - image: bitnami/kafka imagePullPolicy: IfNotPresent name: kafka-broker ports: - containerPort: 9092 env: - name: KAFKA_BROKER_ID value: "1" - name: KAFKA_ZOOKEEPER_CONNECT value: "zookeeper-service:2181" - name: KAFKA_LISTENERS value: PLAINTEXT://localhost:9092 - name: KAFKA_ADVERTISED_LISTENERS value: PLAINTEXT://localhost:9092 # Creates a topic with one partition and one replica. - name: KAFKA_CREATE_TOPICS value: "bomc:1:1" - name: MY_POD_IP valueFrom: fieldRef: fieldPath: status.podIP - name: ALLOW_PLAINTEXT_LISTENER value: "yes" And both services and deployment created and running And I have ingress for this services apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: nginx-ingress annotations: kubernetes.io/ingress.class: "nginx" spec: rules: - http: paths: - path: /health pathType: Prefix backend: service: name: health-app-service port: number: 80 - path: /actuator pathType: Prefix backend: service: name: health-app-service port: number: 80 - path: /jsonrpc pathType: Prefix backend: service: name: core-service port: number: 80 - path: / pathType: Prefix backend: service: name: kafka-service # Название вашего Kafka-сервиса port: number: 9092 # Название порта, используемого для Kafka - path: / pathType: Prefix backend: service: name: kafka-service # Название вашего Kafka-сервиса port: number: 30092 # Название порта, используемого для Kafka - path: / pathType: Prefix backend: service: name: kafka-service # Название вашего Kafka-сервиса port: name: kafka-port # Название порта, используемого для Kafka - path: / pathType: Prefix backend: service: name: zookeeper-service port: name: zookeeper-port but, when I try to connect to this kafka using offset key tool, there is error connection. 
When I use localhost:30092 like a bootstrap server - error with logs: 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - Starting application : Offset Explorer 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - Version : 2.3 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - Built : Jun 30, 2022 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - user.home : C:\Users\Roberto 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - user.dir : C:\Program Files\OffsetExplorer2 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - os.name : Windows 10 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - java.runtime.version : 1.8.0_232-b09 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - max memory=3586 MB 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - available processors=8 12/мар/2023 22:32:46.111 INFO com.kafkatool.ui.MainApp - java.security.auth.login.config=null 12/мар/2023 22:32:46.121 INFO com.kafkatool.common.ExternalDecoderManager - Finding plugins in directory C:\Program Files\OffsetExplorer2\plugins 12/мар/2023 22:32:46.121 INFO com.kafkatool.common.ExternalDecoderManager - Found files in plugin directory, count=1 12/мар/2023 22:32:46.121 INFO com.kafkatool.ui.MainApp - Loading user settings 12/мар/2023 22:32:46.153 INFO com.kafkatool.ui.MainApp - Loading server group settings 12/мар/2023 22:32:46.153 INFO com.kafkatool.ui.MainApp - Loading server connection settings 12/мар/2023 22:32:50.103 INFO org.apache.kafka.clients.admin.AdminClientConfig - AdminClientConfig values: bootstrap.servers = [localhost:30092] client.dns.lookup = default client.id = connections.max.idle.ms = 300000 metadata.max.age.ms = 300000 metric.reporters = [] metrics.num.samples = 2 metrics.recording.level = INFO metrics.sample.window.ms = 30000 receive.buffer.bytes = 65536 reconnect.backoff.max.ms = 1000 reconnect.backoff.ms = 50 request.timeout.ms = 120000 retries = 5 retry.backoff.ms = 100 sasl.client.callback.handler.class = null sasl.jaas.config = null sasl.kerberos.kinit.cmd = /usr/bin/kinit sasl.kerberos.min.time.before.relogin = 60000 sasl.kerberos.service.name = null sasl.kerberos.ticket.renew.jitter = 0.05 sasl.kerberos.ticket.renew.window.factor = 0.8 sasl.login.callback.handler.class = null sasl.login.class = null sasl.login.refresh.buffer.seconds = 300 sasl.login.refresh.min.period.seconds = 60 sasl.login.refresh.window.factor = 0.8 sasl.login.refresh.window.jitter = 0.05 sasl.mechanism = GSSAPI security.protocol = PLAINTEXT security.providers = null send.buffer.bytes = 131072 ssl.cipher.suites = null ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1] ssl.endpoint.identification.algorithm = https ssl.key.password = null ssl.keymanager.algorithm = SunX509 ssl.keystore.location = null ssl.keystore.password = null ssl.keystore.type = JKS ssl.protocol = TLS ssl.provider = null ssl.secure.random.implementation = null ssl.trustmanager.algorithm = PKIX ssl.truststore.location = null ssl.truststore.password = null ssl.truststore.type = JKS 12/мар/2023 22:32:50.126 DEBUG org.apache.kafka.clients.admin.internals.AdminMetadataManager - [AdminClient clientId=adminclient-1] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:30092 (id: -1 rack: null)], partitions = [], controller = null). 
12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-authentication: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-reauthentication: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name successful-authentication-no-reauth: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name failed-authentication: 12/мар/2023 22:32:50.188 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name failed-reauthentication: 12/мар/2023 22:32:50.198 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name reauthentication-latency: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time: 12/мар/2023 22:32:50.199 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time: 12/мар/2023 22:32:50.204 WARN org.apache.kafka.clients.admin.AdminClientConfig - The configuration 'group.id' was supplied but isn't a known config. 12/мар/2023 22:32:50.204 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka version: 2.4.0 12/мар/2023 22:32:50.204 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka commitId: 77a89fcf8d7fa018 12/мар/2023 22:32:50.204 INFO org.apache.kafka.common.utils.AppInfoParser - Kafka startTimeMs: 1678649570204 12/мар/2023 22:32:50.214 DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=adminclient-1] Kafka admin client initialized 12/мар/2023 22:32:50.215 DEBUG org.apache.kafka.clients.admin.KafkaAdminClient - [AdminClient clientId=adminclient-1] Queueing Call(callName=listNodes, deadlineMs=1678649690215) with a timeout 120000 ms from now. 12/мар/2023 22:32:50.215 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating connection to node localhost:30092 (id: -1 rack: null) using address localhost/127.0.0.1 12/мар/2023 22:32:50.228 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-sent 12/мар/2023 22:32:50.230 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.bytes-received 12/мар/2023 22:32:50.232 DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name node--1.latency 12/мар/2023 22:32:50.232 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 12/мар/2023 22:32:50.320 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Completed connection to node -1. Fetching API versions. 12/мар/2023 22:32:50.320 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -1. 
12/мар/2023 22:32:50.376 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected java.io.EOFException at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) at org.apache.kafka.common.network.Selector.poll(Selector.java:483) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:540) at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1196) at java.lang.Thread.run(Thread.java:748) 12/мар/2023 22:33:11.787 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Node -1 disconnected. 12/мар/2023 22:33:12.766 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating connection to node localhost:30092 (id: -1 rack: null) using address localhost/127.0.0.1 12/мар/2023 22:33:12.767 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1 12/мар/2023 22:33:12.767 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Completed connection to node -1. Fetching API versions. 12/мар/2023 22:33:12.767 DEBUG org.apache.kafka.clients.NetworkClient - [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -1. 12/мар/2023 22:33:12.768 DEBUG org.apache.kafka.common.network.Selector - [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected java.io.EOFException at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) at org.apache.kafka.common.network.Selector.poll(Selector.java:483) at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:540) at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1196) at java.lang.Thread.run(Thread.java:748)
When you apply a Helm release, Helm converts the chart templates into K8s resources such as Deployments, ConfigMaps, Secrets and StatefulSets, and those objects get applied and stored in the etcd datastore of K8s. In addition, Helm creates a Secret for each release revision, so if you run kubectl -n <namespace> get secret you will see it there. The Secret name will be something like: sh.helm.release.v1.<release-name>.<revision>
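A small sketch of how to look at those release Secrets from the command line; the namespace and release name are placeholders, and the double base64 plus gzip layout of the payload is from memory, so verify it against your Helm version:

# list the Secrets Helm created for its releases
kubectl -n <namespace> get secret -l owner=helm

# decode the stored payload of one revision into readable JSON
kubectl -n <namespace> get secret sh.helm.release.v1.<release-name>.v1 \
  -o jsonpath='{.data.release}' | base64 -d | base64 -d | gunzip

The decoded JSON contains the chart, the values and the rendered manifests for that revision, which is what commands like helm ls -a and helm get read back.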
###Instruction: Alice deploys a chart from her computer to a k8s cluster. Bob runs helm ls -a on that cluster and sees Alice's chart. I am looking for any configmaps on my cluster that could hold chart info but can't find any. Where is it then? ### Response: When you apply a Helm release, Helm converts the chart templates into K8s resources such as Deployments, ConfigMaps, Secrets and StatefulSets, and those objects get applied and stored in the etcd datastore of K8s. In addition, Helm creates a Secret for each release revision, so if you run kubectl -n <namespace> get secret you will see it there. The Secret name will be something like: sh.helm.release.v1.<release-name>.<revision>
Alice deploys a chart from her computer to a k8s cluster. Bob runs helm ls -a on that cluster and sees Alice's chart. I am looking for any configmaps on my cluster that could hold chart info but can't find any. Where is it then?
Helm is like a package manager. It can install applications on your cluster, but it has only some basic logic for updates to its configuration or for version upgrades. You control it through the helm commands and call it when you need it. So it helps you with some tasks, but it is still up to you to run your Kafka cluster day-to-day. Operators on the other hand are (usually) more sophisticated. They don't handle only the installation but also day-2 operations. They essentially try to encode the knowledge and the tasks a human operator running something like a Kafka cluster would need and do into an application (= the operator). The operator runs all the time in your cluster, and constantly monitors the Kafka cluster to see what is happening in it, whether some actions should be taken, and so on. For something like Kafka, the Strimzi operator for example incorporates the rolling update knowledge such as that the controller broker should be rolled last and partition replicas kept in-sync, it deals with upgrades which in Kafka usually consist of multiple rolling updates, handles certificate renewals, and much more. So an operator will normally do a lot more things for you than a Helm Chart as it operates the Kafka cluster for you. For stateful applications such as Kafka or for example databases, this can often make a huge difference. But it is usually also more opinionated, as it does things the way it was programmed to, which might be different from what you were used to. Helm Charts normally give you a lot of freedom to do things any way you want. Note: Different operators have different features and levels of maturity. So they might or might not support different tasks. If you google for it, you will find many different articles, videos, or conference talks about the Kubernetes operator pattern comparing it with Helm Charts, which will explain the differences. (Disclaimer: I'm one of the Strimzi project maintainers)
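To give a feel for the difference in practice: with the operator installed you describe the whole cluster as one custom resource and Strimzi reconciles it for you. This is only a rough sketch from memory (name, sizes and replica counts are placeholders), so check the Strimzi documentation for the exact schema of your operator version:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}

Editing this resource (for example changing the replica count or the Kafka version) is what triggers the operator to carry out the rolling updates described above, instead of you orchestrating them yourself through helm upgrade.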
###Instruction: Can anyone explain in simple terms what is the difference between deploying Kafka through Kubernetes operator (for example Strimzi ) and Kafka helm chart or manifest file? Previously in my project we used helm chart for kafka but for now we've received requirement to move to kafka strimzi-operator. I can't access people who invented it but my colleguess also don't know the reason. So please explain why kafka strimzi operator better (or maybe worse) than kafka helm chart ? ### Response: Helm is like a package manager. It can install applications on your cluster, but it has only some basic logic for updates to its configuration or for version upgrades. You control it through the helm commands and call it when you need it. So it helps you with some tasks, but it is still up to you to run your Kafka cluster day-to-day. Operators on the other hand are (usually) more sophisticated. They don't handle only the installation but also day-2 operations. They essentially try to encode the knowledge and the tasks a human operator running someting like a Kafka cluster would need and do into an application (= the operator). The operator runs all the time in your cluster, and constantly monitors the Kafka cluster to see what is happening in it, if some actions should be taken, and so on. For something like Kafka, the Strimzi operator for example incorporates the rolling update knowledge such as that the controller broker should be rolled last and partition replicas kept in-sync, it deals with upgrades which in Kafka usually consist of multiple rolling updates, handles certificate renewals, and much more. So an operator will normally do a lot more things for you than a Helm Chart as it operates the Kafka cluster for you. For stateful applications such as Kafka or for example databases, this can often make a huge difference. But it is usually also more opinionated as it does things the way it was programmed to which might be different from what you were used to. Helm Charts normally give you a lot of freedom to do things any way you want. Note: Different operators have different features and levels of maturity. So they might or might not support different tasks. If you google for it, you will find many different articles, videos, or conference talks about the Kubernetes operator pattern and compare it with Helm Charts which will explain the differences. (Disclaimer: I'm one of the Strimzi project maintatainers)
Can anyone explain in simple terms what the difference is between deploying Kafka through a Kubernetes operator (for example Strimzi) and a Kafka Helm chart or manifest file? Previously in my project we used a Helm chart for Kafka, but now we've received a requirement to move to the Kafka Strimzi operator. I can't access the people who introduced it, and my colleagues also don't know the reason. So please explain why the Kafka Strimzi operator is better (or maybe worse) than a Kafka Helm chart?
Eks does not provide the ability to override the default scheduler configuration, which means that actually configuring the default-scheduler profile with the MostAllocated scoring strategy is not an option. However, you may run your own scheduler alongside the default scheduler, and this one may be configured how you like. Once you create a custom scheduler, you can override that scheduler's configuration with the MostAllocated scoring strategy and then instruct your workloads to use that scheduler. In order to run multiple schedulers, you have to set up several Kubernetes Objects. These objects are documented in the guide linked above: ServiceAccount ClusterRoleBinding x2 RoleBinding ConfigMap Deployment The deployment will use the standard kube-scheduler image provided by Google, unless you'd like to create your own. I wouldn't recommend it. Major Note: Ensure your version of the kube-scheduler is the same version as the control plane. This will not work otherwise. In addition, ensure that your version of the kube-scheduler is compatible with the version of the configuration objects that you use to configure the scheduler profile. v1beta2 is safe for v1.22.x -> v1.24.x but only v1beta3 or v1 is safe for v.1.25+. For example, here's a working version of a deployment manifest and config map that are used to create a custom scheduler compatible with k8s v.1.22.x. Note you'll still have to create the other objects for this to work: apiVersion: apps/v1 kind: Deployment metadata: name: custom-scheduler namespace: kube-system spec: replicas: 1 selector: matchLabels: name: custom-scheduler template: metadata: labels: component: scheduler name: custom-scheduler tier: control-plane spec: containers: - command: - /usr/local/bin/kube-scheduler - --config=/etc/kubernetes/custom-scheduler/custom-scheduler-config.yaml env: [] image: registry.k8s.io/kube-scheduler:v1.22.16 imagePullPolicy: IfNotPresent livenessProbe: httpGet: path: /healthz port: 10259 scheme: HTTPS name: custom-scheduler readinessProbe: httpGet: path: /healthz port: 10259 scheme: HTTPS volumeMounts: - mountPath: /etc/kubernetes/custom-scheduler name: custom-scheduler-config serviceAccountName: custom-scheduler volumes: - configMap: name: custom-scheduler-config name: custom-scheduler-config apiVersion: v1 kind: ConfigMap data: custom-scheduler-config.yaml: | apiVersion: kubescheduler.config.k8s.io/v1beta2 kind: KubeSchedulerConfiguration leaderElection: leaderElect: false profiles: - pluginConfig: - args: apiVersion: kubescheduler.config.k8s.io/v1beta2 kind: NodeResourcesFitArgs scoringStrategy: resources: - name: cpu weight: 1 - name: memory weight: 1 type: MostAllocated name: NodeResourcesFit plugins: score: enabled: - name: NodeResourcesFit weight: 1 schedulerName: custom-scheduler metadata: name: custom-scheduler-config namespace: kube-system
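Once the custom scheduler is running, pointing a workload at it is just a matter of setting spec.schedulerName in the Pod template so it matches the schedulerName from the profile above; a minimal example with placeholder names and image:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: packed-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: packed-app
  template:
    metadata:
      labels:
        app: packed-app
    spec:
      schedulerName: custom-scheduler   # must match the profile's schedulerName
      containers:
      - name: app
        image: nginx:1.23               # placeholder image
        resources:
          requests:
            cpu: 100m
            memory: 128Mi

Workloads that do not set schedulerName keep using the default EKS scheduler, so you can move things over to the bin-packing scheduler gradually.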
###Instruction: We would like to pack as many pods into each nodes in our cluster as much as possible do decrease the amount of nodes we have on some of our environments. I saw https://github.com/kubernetes-sigs/descheduler HighNodeUtilization strategy which seems to fit the bill for what we need. However, it seems the cluster needs to have the scoring strategy MostAllocated to work with this. I believe that the kube-scheduler in EKS in inaccessible to be configured. How do I then configure the MostAllocated scoring strategy? Better yet, how do I configure this automated packing of pods in as little nodes as possible in a cluster without the use of Descheduler? Tried deploying the descheduler as is without the MostAllocated scoring strategy configured. Obviously did not provide the results expected. Many of my digging online led to having to create a custom-scheduler, but I have found little/unclear resources to be able to do so. ### Response: Eks does not provide the ability to override the default scheduler configuration, which means that actually configuring the default-scheduler profile with the MostAllocated scoring strategy is not an option. However, you may run your own scheduler alongside the default scheduler, and this one may be configured how you like. Once you create a custom scheduler, you can override that scheduler's configuration with the MostAllocated scoring strategy and then instruct your workloads to use that scheduler. In order to run multiple schedulers, you have to set up several Kubernetes Objects. These objects are documented in the guide linked above: ServiceAccount ClusterRoleBinding x2 RoleBinding ConfigMap Deployment The deployment will use the standard kube-scheduler image provided by Google, unless you'd like to create your own. I wouldn't recommend it. Major Note: Ensure your version of the kube-scheduler is the same version as the control plane. This will not work otherwise. In addition, ensure that your version of the kube-scheduler is compatible with the version of the configuration objects that you use to configure the scheduler profile. v1beta2 is safe for v1.22.x -> v1.24.x but only v1beta3 or v1 is safe for v.1.25+. For example, here's a working version of a deployment manifest and config map that are used to create a custom scheduler compatible with k8s v.1.22.x. 
Note you'll still have to create the other objects for this to work: apiVersion: apps/v1 kind: Deployment metadata: name: custom-scheduler namespace: kube-system spec: replicas: 1 selector: matchLabels: name: custom-scheduler template: metadata: labels: component: scheduler name: custom-scheduler tier: control-plane spec: containers: - command: - /usr/local/bin/kube-scheduler - --config=/etc/kubernetes/custom-scheduler/custom-scheduler-config.yaml env: [] image: registry.k8s.io/kube-scheduler:v1.22.16 imagePullPolicy: IfNotPresent livenessProbe: httpGet: path: /healthz port: 10259 scheme: HTTPS name: custom-scheduler readinessProbe: httpGet: path: /healthz port: 10259 scheme: HTTPS volumeMounts: - mountPath: /etc/kubernetes/custom-scheduler name: custom-scheduler-config serviceAccountName: custom-scheduler volumes: - configMap: name: custom-scheduler-config name: custom-scheduler-config apiVersion: v1 kind: ConfigMap data: custom-scheduler-config.yaml: | apiVersion: kubescheduler.config.k8s.io/v1beta2 kind: KubeSchedulerConfiguration leaderElection: leaderElect: false profiles: - pluginConfig: - args: apiVersion: kubescheduler.config.k8s.io/v1beta2 kind: NodeResourcesFitArgs scoringStrategy: resources: - name: cpu weight: 1 - name: memory weight: 1 type: MostAllocated name: NodeResourcesFit plugins: score: enabled: - name: NodeResourcesFit weight: 1 schedulerName: custom-scheduler metadata: name: custom-scheduler-config namespace: kube-system
We would like to pack as many pods into each nodes in our cluster as much as possible do decrease the amount of nodes we have on some of our environments. I saw https://github.com/kubernetes-sigs/descheduler HighNodeUtilization strategy which seems to fit the bill for what we need. However, it seems the cluster needs to have the scoring strategy MostAllocated to work with this. I believe that the kube-scheduler in EKS in inaccessible to be configured. How do I then configure the MostAllocated scoring strategy? Better yet, how do I configure this automated packing of pods in as little nodes as possible in a cluster without the use of Descheduler? Tried deploying the descheduler as is without the MostAllocated scoring strategy configured. Obviously did not provide the results expected. Many of my digging online led to having to create a custom-scheduler, but I have found little/unclear resources to be able to do so.
As David suggested, there is no option to control the log target directly. However, since a log collector application was requested, I am writing this answer. If your application is streaming UDP logs you can use the open-source Graylog. It uses MongoDB & Elasticsearch as backend databases. We have been using Graylog to collect logs from application PODs. Regarding a collector for the kubectl logs -f <POD> output: you can push these logs from the worker node file system using the fluentd collector; the log location on the node will be /var/log/pods. You can use the fluentd collector along with the Graylog GELF UDP input: fluentd -> pushing over GELF UDP -> Graylog input saving to Elasticsearch. Here is the reference you can follow: https://docs.fluentd.org/how-to-guides/graylog2 The example above uses Graylog2; the Graylog3 version is now available as open source, and I would suggest checking that out. You can also refer to my GitHub repo: https://github.com/harsh4870/OCI-public-logging-uma-agent It will give you more of an idea of how the deployment sets up a log file on the node's filesystem, which then gets processed by a collector; it does not use fluentd, but it is useful as a reference. The Oracle OCI UMA agent does a similar job to the fluentd collector, parsing & pushing logs to the backend.
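A rough sketch of the fluentd side, assuming the fluent-plugin-gelf output plugin and a GELF UDP input already created in Graylog; the host, port and file paths are placeholders, and parameter names can differ between plugin versions, so treat this only as a starting point:

<source>
  @type tail
  path /var/log/pods/**/*.log           # where the kubelet writes container logs on the node
  pos_file /var/log/fluentd-pods.pos
  tag kubernetes.*
  <parse>
    @type none                          # ship lines as-is; swap in a CRI/JSON parser if needed
  </parse>
</source>

<match kubernetes.**>
  @type gelf                            # provided by fluent-plugin-gelf
  host graylog.example.com              # placeholder Graylog address
  port 12201                            # common GELF UDP port
  protocol udp
</match>

Run this as a DaemonSet with /var/log mounted from the host so every node ships its pod logs, which effectively replaces the docker syslog log-driver part of the old setup.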
###Instruction: I know docker has options like syslog log-driver and log-opts so that it can be used to send the logs to say, the UDP server. Marathon is the docker orchestrator here and a config file has the below: { "key": "log-driver", "value": "syslog" }, { "key": "log-opt", "value": "syslog-address=udp://some-udp-server:port" }, The existing setup is such that certain downstream systems/entities take the information received on this UDP server to create visualisations on Grafana. How do I achieve the same in a k8s manifest file that I'm deploying via helm3? Or is there a third-party application I need to use? Basically, I want to send the logs that come in the kubectl logs -f <pod_name> command to this UDP server with minimal intrusion. I would only like to replace this part of the flow so that I don't have to disturb any of the downstream systems. ### Response: As David suggested, there is no option to control the log target directly. However, since a log collector application was requested, I am writing this answer. If your application is streaming UDP logs you can use the open-source Graylog. It uses MongoDB & Elasticsearch as backend databases. We have been using Graylog to collect logs from application PODs. Regarding a collector for the kubectl logs -f <POD> output: you can push these logs from the worker node file system using the fluentd collector; the log location on the node will be /var/log/pods. You can use the fluentd collector along with the Graylog GELF UDP input: fluentd -> pushing over GELF UDP -> Graylog input saving to Elasticsearch. Here is the reference you can follow: https://docs.fluentd.org/how-to-guides/graylog2 The example above uses Graylog2; the Graylog3 version is now available as open source, and I would suggest checking that out. You can also refer to my GitHub repo: https://github.com/harsh4870/OCI-public-logging-uma-agent It will give you more of an idea of how the deployment sets up a log file on the node's filesystem, which then gets processed by a collector; it does not use fluentd, but it is useful as a reference. The Oracle OCI UMA agent does a similar job to the fluentd collector, parsing & pushing logs to the backend.
I know docker has options like syslog log-driver and log-opts so that it can be used to send the logs to say, the UDP server. Marathon is the docker orchestrator here and a config file has the below: { "key": "log-driver", "value": "syslog" }, { "key": "log-opt", "value": "syslog-address=udp://some-udp-server:port" }, The existing setup is such that certain downstream systems/entities take the information received on this UDP server to create visualisations on Grafana. How do I achieve the same in a k8s manifest file that I'm deploying via helm3? Or is there a third-party application I need to use? Basically, I want to send the logs that come in the kubectl logs -f <pod_name> command to this UDP server with minimal intrusion. I would only like to replace this part of the flow so that I don't have to disturb any of the downstream systems.
This is a tricky problem requiring in-depth explanation of GitLab and Kubernetes dynamics. Summary of your problem: volumes: - .:/var/www/html # or # - /builds/projects/laravel-api:/var/www/html The Docker service you are using does not share the same filesystem than your job's Pod. When you instruct Docker to mount /builds/projects/laravel-api (or . which resolves to the same), Docker will mount this directory from its own filesystem which is indeed empty. As you already pointed out, you must somehow share the /builds directory between job's Pod and Docker service. Solution 1: share Persistent Volume between Docker service and job's Pod Create a Persistent Volume Claim (PVC) so that they share the /builds directory: Create a PVC such as: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: gitlab-claim spec: accessModes: - ReadWriteOnce # or ReadWriteMany resources: requests: storage: 8Gi Configure GitLab Runner Kubernetes executor to mount the PVC at /builds. For example: [[runners]] [runners.kubernetes] [[runners.kubernetes.volumes.pvc]] name = "gitlab-claim" mount_path = "/builds" Configure Docker deployment to mount the PVC at /builds. This depends on how you configured Docker service, but you'll probably have to configure a Container spec such as: spec: volumes: - name: job-volume persistentVolumeClaim: claimName: gitlab-claim containers: - name: docker image: docker:dind # ... volumeMounts: - name: job-volume mountPath: /builds Your setup will look something like this. Both pods will share the same Volume mounted at /builds. Important note: choose carefully between ReadWriteOnce / ReadWriteMany access mode: Use ReadWriteMany if your provider supports it, it will allow the same volume to be shared across multiple nodes. If not, ReadWriteOnce will require your job's Pods to be running on the same node as Docker service as volume won't be shareable across Kubernetes nodes. Solution 2: use GitLab service to run Docker-in-Docker (DinD) This setup is a bit different as you'll deploy a fresh Docker service every build. However data won't be persisted across jobs as Docker service will be recreated for each job. Volumes defined for the build container are also automatically mounted for all services containers.. You can then share an emptyDir between job's pod and a Docker DinD service: Configure a DinD service on your CI such as: image: docker:20.10.16-dind variables: DOCKER_HOST: tcp://docker:2375 DOCKER_TLS_CERTDIR: "" services: - docker:20.10.16-dind Configure Kubernetes executor to mount an emptyDir at /builds which will be shared between job's and services containers: [runners.kubernetes] [[runners.kubernetes.volumes.empty_dir]] name = "builds-data" mount_path = "/builds" In this case, you'll have a single Pod with both job's container and DinD service container, both sharing the emptyDir volume /builds. Your laravel and other containers will run inside the DinD service container. Which solution is better? Depends on context: Efficiency-wize, Solution 1 is better as your Docker service will remain static and keep build caches, downloaded images, etc. allowing much faster Docker builds and deployments. However, your Docker service may be shared with multiple projects/entities, causing security risk: anyone with access to Docker service may also access containers not related to their project or scope and eventually retrieve sensitive data from within another projects container/volume/... 
For example: consider project A and project B using the Docker service to deploy testing apps, each configured to access AWS account A & B. An entity with access to project A may easily grab credentials from a container running for project B and access AWS account B to start a crypto miner on EC2. It's possible to mitigate that risk by deploying Docker services dedicated per project or scope depending on your need, and/or securing access via SSH/TLS to only allow certain entities to use it. Solution 2 is much easier to set up and secure, but will be less efficient: a new CI Docker service will be started each build, so you'll have to download Docker images again every pipeline and any Docker build cache will be lost. You may optimize the CI config to download a cache or such, but it requires a more complex setup (might as well go for Solution 1 if you're ready to add complexity). This is all a bit complex and plays with lots of various concepts. Do not hesitate to ask questions, I'll answer or edit as possible.
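Whichever solution you pick, once /builds is shared the check from the question should pass unchanged, because the path the job writes to and the path Docker mounts from now point at the same filesystem; a minimal verification step reusing the same variables as the test above:

script:
  - mkdir -p ${SHARED_PATH}
  - touch ${SHARED_PATH}/test_file
  - docker run --rm -v ${SHARED_PATH}:/mnt alpine ls /mnt   # should now list test_file

If test_file shows up, the docker-compose bind mount of /builds/projects/laravel-api will populate /var/www/html the same way.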
###Instruction: I am having a problem with my GitLab pipeline when trying to mount a docker volume inside a container. Before I explain the problem, first I will describe my whole setup, because I think that is very essential to understand the problem, because I think that this is the reason why I am having this problem. Setup Okay, so to start off, I have a kubernetes cluster. This cluster runs my gitlab/gitlab-ee:15.8.0-ee.0 image. I installed a GitLab runner in this cluster as well, so that I am able to run pipelines of course. Then the last thing I installed is a docker instance, because I saw that you can mount the docker.sock from your host machine to the gitlab pipeline, but this is not recommended, because the entire cluster relies on that docker.sock, so I have another instance of docker running and I am mounting that docker.sock for pipelines only. These 3 deployments are used by me to run GitLab pipelines. The problem I am happy with the way everything is setup, but I think I am still missing some configuration, because the mounting of docker volumes are not working properly in pipelines. I have this script to test this, which contains this code: image: docker:20.10.16-dind variables: DOCKER_HOST: "tcp://docker-service:2375" # <-- Address to reach the docker instance from my cluster DOCKER_COMPOSE_CMD: "docker-compose -f docker-compose-test.yml" stages: - test test: stage: test script: - $DOCKER_COMPOSE_CMD down --volumes --remove-orphans - $DOCKER_COMPOSE_CMD build - $DOCKER_COMPOSE_CMD --env-file .env.pipeline up -d - $DOCKER_COMPOSE_CMD exec -T -e APP_ENV=testing laravel-api-test sh -c "ls" With the following docker-compose-test.yml: version: '3.7' services: laravel-api-test: build: context: . dockerfile: docker/development/Dockerfile volumes: - .:/var/www/html environment: - COMPOSER_MEMORY_LIMIT=-1 depends_on: - database-test database-test: image: postgres:15.1-alpine ports: - ${DB_PORT}:5432 environment: POSTGRES_DB: ${DB_DATABASE} POSTGRES_PASSWORD: ${DB_PASSWORD_SECRET} POSTGRES_USER: ${DB_USERNAME_SECRET} redis-test: image: redis:7.0.8 ports: - ${REDIS_PORT}:6379 networks: default: name: application Now what this pipeline does, it builds the docker containers and then starts them. Then it runs the ls command which prints out all the files in the working-dir of the container. However, this working-dir is empty. This is caused by the volume mount in the docker-compose-test.yml with this line: volumes: - .:/var/www/html In the Dockerfile I also have this: COPY . /var/www/html/ So when I remove the volume mount in docker-compose-test.yml, all files are there, so the copying does work for the Dockerfile, but not mounting it later on. I saw this thread and tried some of their solutions and tested it with their test script: variables: SHARED_PATH: /builds/shared/$CI_PROJECT_PATH script: - mkdir -p ${SHARED_PATH} - touch ${SHARED_PATH}/test_file - docker run -v ${SHARED_PATH}:/mnt ubuntu ls /mnt But this still resulted in an empty /mnt dir, while the test_file should have been there. In the GitLab runner I added this line to the config: volumes = ["/cache", "/builds:/builds"] Unfortunately, this did not change anything. I am not sure, but my guess is that I need to access the /builds from my other docker instance, because I have a feeling that I am mounting the /builds from the host machine, which is not the docker I am using in my pipeline. If this is the case, I am not sure how to configure my Kubernetes cluster to use the other one. 
The weird thing is that when I do cd /builds/projects/laravel-api (my repo is named laravel-api and its inside the projects group) and then ls in my pipeline, I do see my repository containing all the files. But when I try to mount that directory in my docker-compose-test.yml I still get an empty dir. So I mean this: volumes: - /builds/projects/laravel-api:/var/www/html So every way of mounting volumes after builds are resulting in empty directories... Wrap up So to summarize the problem. Every form of mounting I do in my pipeline results eventually in an empty directory. When copying files from a Dockerfile only the directory does work, but that is not something I can work with. I hope this covers the entire problem. Some help is really appreciated! If there are any questions about the setup or something like that, please ask I will respond ASAP! ### Response: This is a tricky problem requiring in-depth explanation of GitLab and Kubernetes dynamics. Summary of your problem: volumes: - .:/var/www/html # or # - /builds/projects/laravel-api:/var/www/html The Docker service you are using does not share the same filesystem than your job's Pod. When you instruct Docker to mount /builds/projects/laravel-api (or . which resolves to the same), Docker will mount this directory from its own filesystem which is indeed empty. As you already pointed out, you must somehow share the /builds directory between job's Pod and Docker service. Solution 1: share Persistent Volume between Docker service and job's Pod Create a Persistent Volume Claim (PVC) so that they share the /builds directory: Create a PVC such as: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: gitlab-claim spec: accessModes: - ReadWriteOnce # or ReadWriteMany resources: requests: storage: 8Gi Configure GitLab Runner Kubernetes executor to mount the PVC at /builds. For example: [[runners]] [runners.kubernetes] [[runners.kubernetes.volumes.pvc]] name = "gitlab-claim" mount_path = "/builds" Configure Docker deployment to mount the PVC at /builds. This depends on how you configured Docker service, but you'll probably have to configure a Container spec such as: spec: volumes: - name: job-volume persistentVolumeClaim: claimName: gitlab-claim containers: - name: docker image: docker:dind # ... volumeMounts: - name: job-volume mountPath: /builds Your setup will look something like this. Both pods will share the same Volume mounted at /builds. Important note: choose carefully between ReadWriteOnce / ReadWriteMany access mode: Use ReadWriteMany if your provider supports it, it will allow the same volume to be shared across multiple nodes. If not, ReadWriteOnce will require your job's Pods to be running on the same node as Docker service as volume won't be shareable across Kubernetes nodes. Solution 2: use GitLab service to run Docker-in-Docker (DinD) This setup is a bit different as you'll deploy a fresh Docker service every build. However data won't be persisted across jobs as Docker service will be recreated for each job. Volumes defined for the build container are also automatically mounted for all services containers.. 
You can then share an emptyDir between job's pod and a Docker DinD service: Configure a DinD service on your CI such as: image: docker:20.10.16-dind variables: DOCKER_HOST: tcp://docker:2375 DOCKER_TLS_CERTDIR: "" services: - docker:20.10.16-dind Configure Kubernetes executor to mount an emptyDir at /builds which will be shared between job's and services containers: [runners.kubernetes] [[runners.kubernetes.volumes.empty_dir]] name = "builds-data" mount_path = "/builds" In this case, you'll have a single Pod with both job's container and DinD service container, both sharing the emptyDir volume /builds. Your laravel and other containers will run inside the DinD service container. Which solution is better? Depends on context: Efficiency-wize, Solution 1 is better as your Docker service will remain static and keep build caches, downloaded images, etc. allowing much faster Docker builds and deployments. However, your Docker service may be shared with multiple projects/entities, causing security risk: anyone with access to Docker service may also access containers not related to their project or scope and eventually retrieve sensitive data from within another projects container/volume/... For example: consider project A and project B using Docker service to deploy testing apps, each configured to access AWS account A & B. Entity with access to Project A may easily grab credentials from a container running for Project B and access AWS account B to start a crypto miner on EC2. It's possible to mitigate that risk by deploying Docker services dedicated per project or scope depending on your need, and/or securing via SSH/TLS to only allow certain entities to use it. Solution 2 is much easier to setup and secure, but will be less efficient: a new CI Docker service will be started each build, so you'll have to download Docker images again every pipeline and any Docker build cache will be lost. You may optimize CI config to download cache or such, but it requires a more complex setup (might as well go for Solution 2 if you're ready to add complexity) This is all a bit complex and plays with lots of various concepts. Do not hesitate to ask questions, I'll answer or edit as possible.
I am having a problem with my GitLab pipeline when trying to mount a docker volume inside a container. Before I explain the problem, first I will describe my whole setup, because I think that is very essential to understand the problem, because I think that this is the reason why I am having this problem. Setup Okay, so to start off, I have a kubernetes cluster. This cluster runs my gitlab/gitlab-ee:15.8.0-ee.0 image. I installed a GitLab runner in this cluster as well, so that I am able to run pipelines of course. Then the last thing I installed is a docker instance, because I saw that you can mount the docker.sock from your host machine to the gitlab pipeline, but this is not recommended, because the entire cluster relies on that docker.sock, so I have another instance of docker running and I am mounting that docker.sock for pipelines only. These 3 deployments are used by me to run GitLab pipelines. The problem I am happy with the way everything is setup, but I think I am still missing some configuration, because the mounting of docker volumes are not working properly in pipelines. I have this script to test this, which contains this code: image: docker:20.10.16-dind variables: DOCKER_HOST: "tcp://docker-service:2375" # <-- Address to reach the docker instance from my cluster DOCKER_COMPOSE_CMD: "docker-compose -f docker-compose-test.yml" stages: - test test: stage: test script: - $DOCKER_COMPOSE_CMD down --volumes --remove-orphans - $DOCKER_COMPOSE_CMD build - $DOCKER_COMPOSE_CMD --env-file .env.pipeline up -d - $DOCKER_COMPOSE_CMD exec -T -e APP_ENV=testing laravel-api-test sh -c "ls" With the following docker-compose-test.yml: version: '3.7' services: laravel-api-test: build: context: . dockerfile: docker/development/Dockerfile volumes: - .:/var/www/html environment: - COMPOSER_MEMORY_LIMIT=-1 depends_on: - database-test database-test: image: postgres:15.1-alpine ports: - ${DB_PORT}:5432 environment: POSTGRES_DB: ${DB_DATABASE} POSTGRES_PASSWORD: ${DB_PASSWORD_SECRET} POSTGRES_USER: ${DB_USERNAME_SECRET} redis-test: image: redis:7.0.8 ports: - ${REDIS_PORT}:6379 networks: default: name: application Now what this pipeline does, it builds the docker containers and then starts them. Then it runs the ls command which prints out all the files in the working-dir of the container. However, this working-dir is empty. This is caused by the volume mount in the docker-compose-test.yml with this line: volumes: - .:/var/www/html In the Dockerfile I also have this: COPY . /var/www/html/ So when I remove the volume mount in docker-compose-test.yml, all files are there, so the copying does work for the Dockerfile, but not mounting it later on. I saw this thread and tried some of their solutions and tested it with their test script: variables: SHARED_PATH: /builds/shared/$CI_PROJECT_PATH script: - mkdir -p ${SHARED_PATH} - touch ${SHARED_PATH}/test_file - docker run -v ${SHARED_PATH}:/mnt ubuntu ls /mnt But this still resulted in an empty /mnt dir, while the test_file should have been there. In the GitLab runner I added this line to the config: volumes = ["/cache", "/builds:/builds"] Unfortunately, this did not change anything. I am not sure, but my guess is that I need to access the /builds from my other docker instance, because I have a feeling that I am mounting the /builds from the host machine, which is not the docker I am using in my pipeline. If this is the case, I am not sure how to configure my Kubernetes cluster to use the other one. 
The weird thing is that when I do cd /builds/projects/laravel-api (my repo is named laravel-api and its inside the projects group) and then ls in my pipeline, I do see my repository containing all the files. But when I try to mount that directory in my docker-compose-test.yml I still get an empty dir. So I mean this: volumes: - /builds/projects/laravel-api:/var/www/html So every way of mounting volumes after builds are resulting in empty directories... Wrap up So to summarize the problem. Every form of mounting I do in my pipeline results eventually in an empty directory. When copying files from a Dockerfile only the directory does work, but that is not something I can work with. I hope this covers the entire problem. Some help is really appreciated! If there are any questions about the setup or something like that, please ask I will respond ASAP!
You can configure Kubernetes to use your host's DNS resolver by setting the pod's dnsPolicy to either Default or ClusterFirstWithHostNet. With Default, the pod inherits the name-resolution configuration from the node it runs on, so it uses the host's resolver (the one the VM got from DHCP) directly. With ClusterFirstWithHostNet (intended for pods running with hostNetwork: true), queries go to the cluster DNS first, and names that don't match the cluster domain are forwarded to the upstream nameservers inherited from the node. https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/
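As an illustrative sketch (pod names and image are placeholders, not from the question), the two policies look like this in a Pod spec; keep in mind that with Default the pod bypasses the cluster DNS, so it will no longer resolve Kubernetes Service names:

apiVersion: v1
kind: Pod
metadata:
  name: dns-from-node            # hypothetical name
spec:
  dnsPolicy: Default             # inherit the node's resolv.conf (the DHCP-provided DNS)
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-cluster-then-node    # hypothetical name
spec:
  hostNetwork: true              # ClusterFirstWithHostNet is meant for host-network pods
  dnsPolicy: ClusterFirstWithHostNet
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]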
###Instruction: Kubernetes cluster is running on a node (Virtual Machine) which gets its network configuration from a DHCP server (this includes DNS settings). From within a pod I need to be able to reach external resources identified by FQDN. The DNS on the K8 node (VM) is able to solve the FQDN to an IP address but how could I make it accessible to the pod? Also I need to support the scenario when the DNS settings change in the VM where K8 cluster is running. The FQDN resources are not known upfront. ### Response: You can configure Kubernetes to use your host's DNS resolver by setting the pod's dnsPolicy to either Default or ClusterFirstWithHostNet. With Default, the pod inherits the name-resolution configuration from the node it runs on, so it uses the host's resolver (the one the VM got from DHCP) directly. With ClusterFirstWithHostNet (intended for pods running with hostNetwork: true), queries go to the cluster DNS first, and names that don't match the cluster domain are forwarded to the upstream nameservers inherited from the node. https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/
Kubernetes cluster is running on a node (Virtual Machine) which gets its network configuration from a DHCP server (this includes DNS settings). From within a pod I need to be able to reach external resources identified by FQDN. The DNS on the K8 node (VM) is able to solve the FQDN to an IP address but how could I make it accessible to the pod? Also I need to support the scenario when the DNS settings change in the VM where K8 cluster is running. The FQDN resources are not known upfront.
I solved the same issue; the problem was a different MLRun version on the client side and the server side. I had MLRun version 1.2.1rc2 on the client and version 1.2.1 on the server (these versions have different interfaces, which causes the freeze). Please sync the MLRun versions between client and server and it will work. BTW: your part of the code looks like this original sample: https://docs.mlrun.org/en/stable/feature-store/end-to-end-demo/02-create-training-model.html
###Instruction: I got freez of Sklearn-classifier in MLRun (the job is still running after 5, 10, 20, ... minutes), see log output: 2023-02-21 13:50:15,853 [info] starting run training uid=e8e66defd91043dda62ae8b6795c74ea DB=http://mlrun-api:8080 2023-02-21 13:50:16,136 [info] Job is running in the background, pod: training-tgplm see freez/pending issue on Web UI: I used this source code and classifier_fn.run(train_task, local=False) generates freez: # Import the Sklearn classifier function from the function hub classifier_fn = mlrun.import_function('hub://sklearn-classifier') # Prepare the parameters list for the training function training_params = {"model_name": ['risk_xgboost'], "model_pkg_class": ['sklearn.ensemble.GradientBoostingClassifier']} # Define the training task, including the feature vector, label and hyperparams definitions train_task = mlrun.new_task('training', inputs={'dataset': transactions_fv.uri}, params={'label_column': 'n4_pd30'} ) train_task.with_hyper_params(training_params, strategy='list', selector='max.accuracy') # Specify the cluster image classifier_fn.spec.image = 'mlrun/mlrun' # Run training classifier_fn.run(train_task, local=False) Did you have and solve the same issue? ### Response: I solved the same issue and the problem was with different MLRun version between client side and server side. I had MLRun on client in version 1.2.1rc2 and server side in version 1.2.1 (these versions have different interfaces and it generates freez issue). Please, synch MLRun versions between client and server and it will works. BTW: Your part of code seems as this original sample here https://docs.mlrun.org/en/stable/feature-store/end-to-end-demo/02-create-training-model.html
I got freez of Sklearn-classifier in MLRun (the job is still running after 5, 10, 20, ... minutes), see log output: 2023-02-21 13:50:15,853 [info] starting run training uid=e8e66defd91043dda62ae8b6795c74ea DB=http://mlrun-api:8080 2023-02-21 13:50:16,136 [info] Job is running in the background, pod: training-tgplm see freez/pending issue on Web UI: I used this source code and classifier_fn.run(train_task, local=False) generates freez: # Import the Sklearn classifier function from the function hub classifier_fn = mlrun.import_function('hub://sklearn-classifier') # Prepare the parameters list for the training function training_params = {"model_name": ['risk_xgboost'], "model_pkg_class": ['sklearn.ensemble.GradientBoostingClassifier']} # Define the training task, including the feature vector, label and hyperparams definitions train_task = mlrun.new_task('training', inputs={'dataset': transactions_fv.uri}, params={'label_column': 'n4_pd30'} ) train_task.with_hyper_params(training_params, strategy='list', selector='max.accuracy') # Specify the cluster image classifier_fn.spec.image = 'mlrun/mlrun' # Run training classifier_fn.run(train_task, local=False) Did you have and solve the same issue?
Try this one-liner: while [ true ]; do echo $(date) $(date +%s) $(kubectl top -n your-namespace pod $(kubectl get pods -n your-namespace -l your-label-name=your-label-value -o jsonpath='{..metadata.name}') | tail -n 1) | tee -a /path/to/save/your/logs.txt; done Add sleep 300 inside the loop (just before done) to sample every 5 minutes instead of continuously. It will find a pod in namespace your-namespace with label your-label-name that has value your-label-value, take its name, and will take only the last such pod if you have multiple pods with the same label (that's what | tail -n 1 is for). This way you won't have to determine the name of a pod manually. Then it'll print something like this: Sun, Mar 12, 2023 4:59:05 PM 1678640345 your-pod-name-5c64678fc6-rsldm 47m 657Mi Where 1678640345 is the Unix timestamp in seconds written by $(date +%s). The output will be printed to the console (stdout) and mirrored to the /path/to/save/your/logs.txt file.
###Instruction: I am running a load test over a kubernetes pod and i want to sample every 5 minutes the CPU and memory usage of it. I was currently manually using the linux top command over the kubernetes pod. Is there any way given a kubernetes pod to fetch the CPU/Memory usage every X minutes and append it to a file ? ### Response: Try this one-liner: while [ true ]; do echo $(date) $(date +%s) $(kubectl top -n your-namespace pod $(kubectl get pods -n your-namespace -l your-label-name=your-label-value -o jsonpath='{..metadata.name}') | tail -n 1) | tee -a /path/to/save/your/logs.txt; done Add sleep 300 inside the loop (just before done) to sample every 5 minutes instead of continuously. It will find a pod in namespace your-namespace with label your-label-name that has value your-label-value, take its name, and will take only the last such pod if you have multiple pods with the same label (that's what | tail -n 1 is for). This way you won't have to determine the name of a pod manually. Then it'll print something like this: Sun, Mar 12, 2023 4:59:05 PM 1678640345 your-pod-name-5c64678fc6-rsldm 47m 657Mi Where 1678640345 is the Unix timestamp in seconds written by $(date +%s). The output will be printed to the console (stdout) and mirrored to the /path/to/save/your/logs.txt file.
I am running a load test over a kubernetes pod and i want to sample every 5 minutes the CPU and memory usage of it. I was currently manually using the linux top command over the kubernetes pod. Is there any way given a kubernetes pod to fetch the CPU/Memory usage every X minutes and append it to a file ?
Some generic thoughts on Helm charts vs. operators: A Helm chart directly includes Kubernetes YAML files using the Go text/template language. An operator has actual code, frequently written in Go using the Kubernetes SDK, that creates the same objects. If you're going to install some application, with a Helm chart it's possible to inspect the chart or use a tool like helm template to see what it's going to do. You do not have that level of control over an operator: you can assign it some very broad permissions to create and edit StatefulSets and Secrets and it will do...something. A Helm chart will visibly fail quickly if some configuration is wrong, but an operator can only report its state via the status: in its custom resources, so you can have limited visibility into what's going wrong if an operator isn't working. As an implementer, if you're familiar with the Kubernetes YAML syntax already, it's a straightforward transition to turn it into a Helm chart. The template language is Turing-complete, though, and it's possible to write arbitrarily complex logic. Testing the templated logic becomes tricky. You also need to carefully manage whitespace and YAML layout concerns in the output of your templates. Once you've gotten up to this level of complexity, the Go native testing package with the support tools in packages like Kubebuilder makes testing an operator much easier. Operators and controllers do have some additional capabilities. They run arbitrary code, can edit objects in the cluster (given the right RBAC permissions), can inspect external state, and keep running after the initial installation. It is straightforward to layer operators by having one operator create the resource that triggers another (as in standard Kubernetes where a Deployment creates ReplicaSets which create Pods). Helm's dependency system is a little more robustly defined, but runs into trouble when you do try to have nested dependencies. If most of your environment is in Helm anyway, it might make sense to prefer Helm charts for everything. Tools like Helmfile can make installing multiple Helm charts more straightforward. If you're not already invested in Helm and are using other tools, and you don't mind not being able to see what the operator is doing, then a controller will likely be simpler to use. (In my day job, I maintain both Helm charts and custom operators. My application uses Kafka, but I do not maintain the Kafka installation. Our Helmfile-oriented developer setup installs Kafka using a Helm chart.)
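To make the operator model concrete, the sketch below shows the general shape of a custom resource a Kafka operator might consume. The API group, kind and fields here are entirely made up for illustration (they are not Strimzi's actual schema), but the idea is the same: you declare a small, high-level object and the operator's controller code turns it into StatefulSets, Services, Secrets and so on.

apiVersion: example.com/v1alpha1   # hypothetical API group, not a real CRD
kind: KafkaCluster
metadata:
  name: my-kafka
spec:
  version: "3.4.0"                 # the controller maps this to concrete images and configs
  replicas: 3                      # ...and decides how it becomes a StatefulSet
  storage:
    size: 100Gi
  listeners:
    - name: plain
      port: 9092

With a Helm chart you would instead template the StatefulSet/Service YAML yourself and drive it from values.yaml.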
###Instruction: Can anyone explain in simple terms what is the difference between deploying Kafka through Kubernetes operator (for example Strimzi ) and Kafka helm chart or manifest file? Previously in my project we used helm chart for kafka but for now we've received requirement to move to kafka strimzi-operator. I can't access people who invented it but my colleguess also don't know the reason. So please explain why kafka strimzi operator better (or maybe worse) than kafka helm chart ? ### Response: Some generic thoughts on Helm charts vs. operators: A Helm chart directly includes Kubernetes YAML files using the Go text/template language. An operator has actual code, frequently written in Go using the Kubernetes SDK, that creates the same objects. If you're going to install some application, using a Helm chart it's possible to inspect the chart or use a tool like helm template to see what it's going to do. You do not have that level of control over an operator: you can assign it some very broad permissions to create and edit StatesulSets and Secrets and it will do...something. A Helm chart will visibly fail quickly if some configuration is wrong, but an operator can only report its state via the status: in its custom resources, so you can have limited visibility into what's going wrong if an operator isn't working. As an implementer, if you're familiar with the Kubernetes YAML syntax already, it's a straightforward transition to turn it into a Helm chart. The template language is Turing-complete, though, and it's possible to write arbitrarily complex logic. Testing the templated logic becomes tricky. You also need to carefully manage whitespace and YAML layout concerns in the output of your templates. Once you've gotten up to this level of complexity, the Go native testing package with the support tools in packages like Kubebuilder make testing an operator much easier. Operators and controllers do have some additional capabilities. They run arbitrary code, can edit objects in the cluster (given the right RBAC permissions), can inspect external state, and keep running after the initial installation. It is straightforward to layer operators by having one operator create the resource that triggers another (as in standard Kubernetes where a Deployment creates ReplicaSets which create Pods). Helm's dependency system is a little more robustly defined, but runs into trouble when you do try to have nested dependencies. If most of your environment is in Helm anyways, it might make sense to prefer Helm charts for everything. Tools like Helmfile can make installing multiple Helm charts more straightfoward. If you're not already invested in Helm and are using other tools, and you don't mind not being able to see what the operator is doing, then a controller will be likely be simpler to use. (In my day job, I maintain both Helm charts and custom operators. My application uses Kafka, but I do not maintain the Kafka installation. Our Helmfile-oriented developer setup installs Kafka using a Helm chart.)
Can anyone explain in simple terms what is the difference between deploying Kafka through Kubernetes operator (for example Strimzi ) and Kafka helm chart or manifest file? Previously in my project we used helm chart for kafka but for now we've received requirement to move to kafka strimzi-operator. I can't access people who invented it but my colleguess also don't know the reason. So please explain why kafka strimzi operator better (or maybe worse) than kafka helm chart ?
After some testing, here is a working nginx.conf - http{ server { listen 81; set $target "http://127.0.0.1:80"; location / { proxy_pass $target; proxy_set_header Host $host; } } } New directive - proxy_set_header Host $host; Trailing slash removed from the target variable used by the proxy_pass directive. (Specific to my application) Other endpoints on the server are better reachable using $host:$server_port in place of $host. This is caused by the app server redirecting incoming requests to different URIs, losing the proxy's port (81) in the process.
###Instruction: I have a strange result from using nginx and IIS server together in single Kubernetes pod. It seems to be an issue with nginx.conf. If I bypass nginx and go directly to IIS, I see the standard landing page - However when I try to go through the reverse proxy I see this partial result - Here are the files: nginx.conf: events { worker_connections 4096; ## Default: 1024 } http{ server { listen 81; #Using variable to prevent nginx from checking hostname at startup, which leads to a container failure / restart loop, due to nginx starting faster than IIS server. set $target "http://127.0.0.1:80/"; location / { proxy_pass $target; } } } deployment.yaml: apiVersion: apps/v1 kind: Deployment metadata: labels: ... name: ... spec: replicas: 1 selector: matchLabels: pod: ... template: metadata: labels: pod: ... name: ... spec: containers: - image: claudiubelu/nginx:1.15-1-windows-amd64-1809 name: nginx-reverse-proxy volumeMounts: - mountPath: "C:/usr/share/nginx/conf" name: nginx-conf imagePullPolicy: Always - image: some-repo/proprietary-server-including-iis name: ... imagePullPolicy: Always nodeSelector: kubernetes.io/os: windows imagePullSecrets: - name: secret1 volumes: - name: nginx-conf persistentVolumeClaim: claimName: pvc-nginx Mapping the nginx.conf file from a volume is just a convenient way to rapidly test different configs. New configs can be swapped in using kubectl cp ./nginx/conf nginx-busybox-pod:/mnt/nginx/. Busybox pod (used to access the PVC): apiVersion: v1 kind: Pod metadata: name: nginx-busybox-pod namespace: default spec: containers: - image: busybox command: - sleep - "360000" imagePullPolicy: Always name: busybox volumeMounts: - name: nginx-conf mountPath: "/mnt/nginx/conf" restartPolicy: Always volumes: - name: nginx-conf persistentVolumeClaim: claimName: pvc-nginx nodeSelector: kubernetes.io/os: linux And lastly the PVC: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-nginx spec: accessModes: - ReadWriteMany resources: requests: storage: 100Mi storageClassName: azurefile Any ideas why? ### Response: After some testing, here is a working nginx.conf - http{ server { listen 81; set $target "http://127.0.0.1:80"; location / { proxy_pass $target; proxy_set_header Host $host; } } } New directive - proxy_set_header Host $host; Trailing slash removed from the target variable used by the proxy_pass directive. (Specific to my application) Other endpoints on the server are better reachable using $host:$server_port in place of $host. This is caused by the app server redirecting incoming requests to different URIs, losing the proxy's port (81) in the process.
I have a strange result from using nginx and IIS server together in single Kubernetes pod. It seems to be an issue with nginx.conf. If I bypass nginx and go directly to IIS, I see the standard landing page - However when I try to go through the reverse proxy I see this partial result - Here are the files: nginx.conf: events { worker_connections 4096; ## Default: 1024 } http{ server { listen 81; #Using variable to prevent nginx from checking hostname at startup, which leads to a container failure / restart loop, due to nginx starting faster than IIS server. set $target "http://127.0.0.1:80/"; location / { proxy_pass $target; } } } deployment.yaml: apiVersion: apps/v1 kind: Deployment metadata: labels: ... name: ... spec: replicas: 1 selector: matchLabels: pod: ... template: metadata: labels: pod: ... name: ... spec: containers: - image: claudiubelu/nginx:1.15-1-windows-amd64-1809 name: nginx-reverse-proxy volumeMounts: - mountPath: "C:/usr/share/nginx/conf" name: nginx-conf imagePullPolicy: Always - image: some-repo/proprietary-server-including-iis name: ... imagePullPolicy: Always nodeSelector: kubernetes.io/os: windows imagePullSecrets: - name: secret1 volumes: - name: nginx-conf persistentVolumeClaim: claimName: pvc-nginx Mapping the nginx.conf file from a volume is just a convenient way to rapidly test different configs. New configs can be swapped in using kubectl cp ./nginx/conf nginx-busybox-pod:/mnt/nginx/. Busybox pod (used to access the PVC): apiVersion: v1 kind: Pod metadata: name: nginx-busybox-pod namespace: default spec: containers: - image: busybox command: - sleep - "360000" imagePullPolicy: Always name: busybox volumeMounts: - name: nginx-conf mountPath: "/mnt/nginx/conf" restartPolicy: Always volumes: - name: nginx-conf persistentVolumeClaim: claimName: pvc-nginx nodeSelector: kubernetes.io/os: linux And lastly the PVC: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: pvc-nginx spec: accessModes: - ReadWriteMany resources: requests: storage: 100Mi storageClassName: azurefile Any ideas why?
But the client rep said this might cause problems in a Kubernetes pod, relating to access rights and process monitoring among other things. There is nothing special about kubernetes with respect to child processes and access rights. It's just Unix processes: a child runs with the same credentials as the parent and can access the same files and other resources as the parent. The process monitoring question is worth exploring in a little more detail. Typically, we say that containers in general -- not just in Kubernetes, but in docker/podman/etc as well -- should have a single entrypoint. In other words, you don't want to create a single container running multiple services, like a webserver and a database. This is because in a multi-entrypoint container, the failure of a service is hidden from the container management tools, so the container runtime can't destroy and re-create the container in response to the service failure. As long as your application is able to respond properly to the child process dying unexpectedly -- both by calling wait() on it to clean up the process entry and properly respawning it when necessary -- you're in good shape.
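One practical consequence in Kubernetes: since the kubelet only watches the container's main process, you usually give it extra visibility with probes. The sketch below is only an illustration and assumes a hypothetical /healthz endpoint that the parent process serves and that also reflects the health of its forked child; if the check fails repeatedly, the kubelet restarts the whole container.

apiVersion: v1
kind: Pod
metadata:
  name: parent-with-child           # hypothetical
spec:
  containers:
    - name: app
      image: registry.example.com/encryption-app:1.0   # placeholder image
      ports:
        - containerPort: 8080
      livenessProbe:
        httpGet:
          path: /healthz            # assumed endpoint exposed by the parent process
          port: 8080
        periodSeconds: 10
        failureThreshold: 3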
###Instruction: As an IT contractor, I was tasked with fixing an issue in a client's software which made simple use of a third-party library to encrypt or decrypt byte strings. For reasons relating to AWS temporary access tokens, the library required occasional reinitialisation for a fresh access token to be acquired (via AWS EKS) and used. I came up with a simple solution in which initialization and use of this library was relegated to a child process forked for the purpose, with input and output strings passed each way in hex via a couple of pipes. Then to reinitialize the library the parent could simply kill the child process and fork a new one. Seems pretty standard stuff, used everywhere in Unix. But the client rep said this might cause problems in a Kubernetes pod, relating to access rights and process monitoring among other things. Is he correct, or is he (as I suspect) being an over-cautious pearl clutcher? If he is right then what kind of problems can arise, and how can these be avoided? ### Response: But the client rep said this might cause problems in a Kubernetes pod, relating to access rights and process monitoring among other things. There is nothing special about kubernetes with respect to child processes and access rights. It's just Unix processes: a child runs with the same credentials as the parent and can access the same files and other resources as the parent. The process monitoring question is worth exploring in a little more detail. Typically, we say that containers in general -- not just in Kubernetes, but in docker/podman/etc as well -- should have a single entrypoint. In other words, you don't want to create a single container running multiple services, like a webserver and a database. This is because in a multi-entrypoint container, the failure of a service is hidden from the container management tools, so the container runtime can't destroy and re-create the container in response to the service failure. As long as your application is able to respond properly to the child process dying unexpectedly -- both by calling wait() on it to clean up the process entry and properly respawning it when necessary -- you're in good shape.
As an IT contractor, I was tasked with fixing an issue in a client's software which made simple use of a third-party library to encrypt or decrypt byte strings. For reasons relating to AWS temporary access tokens, the library required occasional reinitialisation for a fresh access token to be acquired (via AWS EKS) and used. I came up with a simple solution in which initialization and use of this library was relegated to a child process forked for the purpose, with input and output strings passed each way in hex via a couple of pipes. Then to reinitialize the library the parent could simply kill the child process and fork a new one. Seems pretty standard stuff, used everywhere in Unix. But the client rep said this might cause problems in a Kubernetes pod, relating to access rights and process monitoring among other things. Is he correct, or is he (as I suspect) being an over-cautious pearl clutcher? If he is right then what kind of problems can arise, and how can these be avoided?
If the encoded data is a string and not binary data, you can decode it to a UTF-8 string using the @base64d filter. This should be available in jq 1.6. $ ./etcdctl ... | jq '.[][] | (.key, .value) |= @base64d' { "key": "by-dev/meta/snapshots/root-coord/partitions/440047424618365170/440047424618365171_ts440049489891819524", "create_revision": 44536, "mod_revision": 44536, "version": 1, "value": "\b���љ׍\u0006\u0012\b_default\u0018������׍\u0006 ���љ׍\u0006" } It appears the value is not a UTF-8 string in your example so beware. Unfortunately, it doesn't return a byte array so it may not be very useful for these cases. jqplay
###Instruction: Is there any easy command line option to export my entire ETCD database to json file but also decode the keys and values automatically from base64? What I succeeded to the moment is this(example show 1x key/value): ./etcdctl get "" --prefix -w json | jq -r ".[] | .[] " { "key": "YnktZGV2L21ldGEvc25hcHNob3RzL3Jvb3QtY29vcmQvcGFydGl0aW9ucy80NDAwNDc0MjQ2MTgzNjUxNzAvNDQwMDQ3NDI0NjE4MzY1MTcxX3RzNDQwMDQ5NDg5ODkxODE5NTI0", "create_revision": 44536, "mod_revision": 44536, "version": 1, "value": "CPOB0OXRmdeNBhIIX2RlZmF1bHQYhIDgxN/V140GIPKB0OXRmdeNBg==" } But I need to decode the entire database keys and values to human readable format? Thanks P.S. Final solution after @Jeff Mercado help: 1. /etcdctl get "" --prefix -w json | jq '.[]' > etcd_filter.txt 2. Clear output to form array of objects [{},{} ...{}] 3. cat etcd_filter.txt | jq '.[] | (.key, .value) |= @base64d' jq playground ### Response: If the encoded data is a string and not binary data, you can decode it to a UTF-8 string using the @base64d filter. This should be available in jq 1.6. $ ./etcdctl ... | jq '.[][] | (.key, .value) |= @base64d' { "key": "by-dev/meta/snapshots/root-coord/partitions/440047424618365170/440047424618365171_ts440049489891819524", "create_revision": 44536, "mod_revision": 44536, "version": 1, "value": "\b���љ׍\u0006\u0012\b_default\u0018������׍\u0006 ���љ׍\u0006" } It appears the value is not a UTF-8 string in your example so beware. Unfortunately, it doesn't return a byte array so it may not be very useful for these cases. jqplay
Is there any easy command line option to export my entire ETCD database to json file but also decode the keys and values automatically from base64? What I succeeded to the moment is this(example show 1x key/value): ./etcdctl get "" --prefix -w json | jq -r ".[] | .[] " { "key": "YnktZGV2L21ldGEvc25hcHNob3RzL3Jvb3QtY29vcmQvcGFydGl0aW9ucy80NDAwNDc0MjQ2MTgzNjUxNzAvNDQwMDQ3NDI0NjE4MzY1MTcxX3RzNDQwMDQ5NDg5ODkxODE5NTI0", "create_revision": 44536, "mod_revision": 44536, "version": 1, "value": "CPOB0OXRmdeNBhIIX2RlZmF1bHQYhIDgxN/V140GIPKB0OXRmdeNBg==" } But I need to decode the entire database keys and values to human readable format? Thanks P.S. Final solution after @Jeff Mercado help: 1. /etcdctl get "" --prefix -w json | jq '.[]' > etcd_filter.txt 2. Clear output to form array of objects [{},{} ...{}] 3. cat etcd_filter.txt | jq '.[] | (.key, .value) |= @base64d' jq playground
I had to set output format to json. >kubectl get nodes Unable to connect to the server: getting credentials: decoding stdout: couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } >aws configure AWS Access Key ID [None]: AWS Secret Access Key [None]: Default region name [eu-north-1]: Default output format [text]: json >kubectl get nodes NAME STATUS ROLES AGE VERSION ip-192-168-20-202.eu-north-1.compute.internal Ready <none> 370d v1.21.5-eks-9017834
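For context, the likely reason the output format matters: kubectl authenticates to EKS through an exec credential plugin in the kubeconfig, and that plugin's stdout must be an ExecCredential JSON document. A typical EKS kubeconfig user entry looks roughly like the sketch below (user name and cluster name are placeholders); with the AWS CLI's default output set to text, that stdout is no longer valid JSON, which matches the json parse error above.

apiVersion: v1
kind: Config
users:
  - name: eks-user                                      # placeholder
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1   # may be v1alpha1 in older setups
        command: aws
        args:
          - eks
          - get-token
          - --cluster-name
          - my-cluster                                  # placeholder
          - --region
          - eu-north-1
        # kubectl runs this command and parses its stdout as JSON;
        # an `aws configure` output format of "text" breaks that parsing.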
###Instruction: error: Couldn't get available api versions from server: couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } ### Response: I had to set output format to json. >kubectl get nodes Unable to connect to the server: getting credentials: decoding stdout: couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } >aws configure AWS Access Key ID [None]: AWS Secret Access Key [None]: Default region name [eu-north-1]: Default output format [text]: json >kubectl get nodes NAME STATUS ROLES AGE VERSION ip-192-168-20-202.eu-north-1.compute.internal Ready <none> 370d v1.21.5-eks-9017834
error: Couldn't get available api versions from server: couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" }
You are right that topology spread constraints are scoped to one deployment. There could be many reasons behind that behavior of Kubernetes. One could be that you have set resource requests & limits low enough that K8s thinks it is fine to run both pods on a single node, so it schedules them on the same node. Or you have not set any requests at all, which could be another reason. Try increasing the requests & limits of the Deployments and you will see the scheduling change. Meanwhile, you can also use affinity (node/pod affinity), and taints & tolerations are also a good option to separate the pods onto different available nodes. Affinity also works across deployments. Ref : https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/ apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: topology.kubernetes.io/zone operator: In values: - zone-east1 containers: - name: with-node-affinity image: registry.k8s.io/pause:2.0
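If the goal is specifically to keep the memory-hungry pods of different deployments apart, here is a hedged sketch using preferred pod anti-affinity on a shared label. The label memory-profile: heavy is an assumption you would add to each deployment's pod template, and the image and resource numbers are placeholders.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-a                                  # placeholder; repeat the pattern for each deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-a
  template:
    metadata:
      labels:
        app: app-a
        memory-profile: heavy                  # shared label across the deployments to spread
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    memory-profile: heavy
                topologyKey: kubernetes.io/hostname   # try to place heavy pods on different nodes
      containers:
        - name: app
          image: nginx:1.25                    # placeholder
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"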
###Instruction: I have set of pods belonging to different deployments. All are configured to run as single replicas. Further, i have 2 nodes in my cluster. Now, when i try to schedule my pods, all pods get deployed to same node. its very rare that i see my pods are going to another node. Due to this, my one node is always under memory pressure with utilization near 90% and other node with utilization near 30%. Due to this, if my pods try to consume more than 80% of their limits, they are killed by k8s by saying node does not have enough resources. How can i spread my pods equally across the nodes? or what could be possibly wrong with my cluster? I have read through topology spread constraints but they only talk about spreading pods belonging to one deployment. ### Response: You are right topology spread constraints is good for one deployment. There could be many reasons behind that behavior of Kubernetes. One could be like you have set the Resource request & limit which K8s think is fine to Run both on Single Node so it's scheduling both pods on the same Node. Or you have not at all set anything which could be another reason too. Try increasing the request & limit of Deployment you will the scheduling diff. Meanwhile, you can also use Affinity (Node/POD affinity) or Taints & Toleration is also a good option to separate out the PODs on different available nodes. Affinity will work across deployment also. Ref : https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/ apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: topology.kubernetes.io/zone operator: In values: - zone-east1 containers: - name: with-node-affinity image: registry.k8s.io/pause:2.0
I have set of pods belonging to different deployments. All are configured to run as single replicas. Further, i have 2 nodes in my cluster. Now, when i try to schedule my pods, all pods get deployed to same node. its very rare that i see my pods are going to another node. Due to this, my one node is always under memory pressure with utilization near 90% and other node with utilization near 30%. Due to this, if my pods try to consume more than 80% of their limits, they are killed by k8s by saying node does not have enough resources. How can i spread my pods equally across the nodes? or what could be possibly wrong with my cluster? I have read through topology spread constraints but they only talk about spreading pods belonging to one deployment.
@Jun's answer worked mostly for me, but the secretName values have to be different. Otherwise, you'll get this error: Warning BadConfig 12m cert-manager-ingress-shim spec.tls[0].secretName: Invalid value: "api-ingress-cert": this secret name must only appear in a single TLS entry but is also used in spec.tls[1].secretName After fixing the secretName values, cert-manager generated everything as expected.
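For illustration, the tls section then ends up looking like the sketch below, using the hostnames and issuer from the question (example.com stands in for <myDomain>.com) and two distinct secret names; the rules are copied from the question.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    cert-manager.io/issuer: letsencrypt-staging
spec:
  tls:
    - hosts:
        - www.example.com
      secretName: web-ssl          # one secret per TLS entry
    - hosts:
        - api.example.com
      secretName: api-ssl          # must not reuse web-ssl
  rules:
    - host: www.example.com
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: angular-service
                port:
                  number: 80
    - host: api.example.com
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: spring-boot-service
                port:
                  number: 8080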
###Instruction: I have setup a backend and frontend service running on Kubernetes. Frontend would be www.<myDomain>.com and backend would be api.<myDomain>.com I need to expose and secure both services. I wish to use one ingress. I want to use free certificates from let's encrypt + cert manager. I guess a certificate for <myDomain>.com should cover both www. and api.. Pretty normal use case, right? But when these normal stuff comes together, I couldn't figure out the combined yaml. I was able to get single service, the www.<myDomain>.com working with https. Things doesn't work when I tried to add the api.<myDomain>.com I'm using GKE, but this doesn't seem to be a platform related question. Now creating ingress takes forever. This following events has been tried again and again Error syncing to GCP: error running load balancer syncing routine: loadbalancer <some id here> does not exist: googleapi: Error 404: The resource 'projects/<project>/global/sslCertificates/<some id here>' was not found, notFound apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: web-ingress annotations: kubernetes.io/ingress.class: gce kubernetes.io/ingress.allow-http: "true" cert-manager.io/issuer: letsencrypt-staging spec: tls: - secretName: web-ssl hosts: - <myDomain>.com rules: - host: "www.<myDomain>.com" http: paths: - pathType: Prefix path: "/" backend: service: name: angular-service port: number: 80 - host: "api.<myDomain>.com" http: paths: - pathType: Prefix path: "/" backend: service: name: spring-boot-service port: number: 8080 ### Response: @Jun's answer worked mostly for me, but the secretName values have to be different. Otherwise, you'll get this error: Warning BadConfig 12m cert-manager-ingress-shim spec.tls[0].secretName: Invalid value: "api-ingress-cert": this secret name must only appear in a single TLS entry but is also used in spec.tls[1].secretName After fixing the secretName values, cert-manager generated everything as expected.
I have setup a backend and frontend service running on Kubernetes. Frontend would be www.<myDomain>.com and backend would be api.<myDomain>.com I need to expose and secure both services. I wish to use one ingress. I want to use free certificates from let's encrypt + cert manager. I guess a certificate for <myDomain>.com should cover both www. and api.. Pretty normal use case, right? But when these normal stuff comes together, I couldn't figure out the combined yaml. I was able to get single service, the www.<myDomain>.com working with https. Things doesn't work when I tried to add the api.<myDomain>.com I'm using GKE, but this doesn't seem to be a platform related question. Now creating ingress takes forever. This following events has been tried again and again Error syncing to GCP: error running load balancer syncing routine: loadbalancer <some id here> does not exist: googleapi: Error 404: The resource 'projects/<project>/global/sslCertificates/<some id here>' was not found, notFound apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: web-ingress annotations: kubernetes.io/ingress.class: gce kubernetes.io/ingress.allow-http: "true" cert-manager.io/issuer: letsencrypt-staging spec: tls: - secretName: web-ssl hosts: - <myDomain>.com rules: - host: "www.<myDomain>.com" http: paths: - pathType: Prefix path: "/" backend: service: name: angular-service port: number: 80 - host: "api.<myDomain>.com" http: paths: - pathType: Prefix path: "/" backend: service: name: spring-boot-service port: number: 8080
I believe Knative 1.9 requires Kubernetes 1.23 or newer. One of the changes that happened in the last year or so was the removal of the v2beta2 autoscaling API and replacement with v2 API. I believe the new API is present in Kubernetes 1.22, but Knative 1.9 was tested with the community supported versions as of January, which includes 1.26, 1.25, 1.24 and possibly 1.23.
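For reference, the serving-core.yaml manifest contains HorizontalPodAutoscaler objects using the autoscaling/v2 API; a minimal example of that API version (names and targets are placeholders) is sketched below. A 1.21 cluster does not serve autoscaling/v2, which is what the "no matches for kind" error is saying.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-hpa                 # placeholder
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-deployment        # placeholder
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80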
###Instruction: I followed the official instruction to installing knative serving on a self-built k8s cluster. But when running the second line kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.9.2/serving-core.yaml I got unable to recognize "https://github.com/knative/serving/releases/download/knative-v1.9.2/serving-core.yaml": no matches for kind "HorizontalPodAutoscaler" in version "autoscaling/v2" unable to recognize "https://github.com/knative/serving/releases/download/knative-v1.9.2/serving-core.yaml": no matches for kind "HorizontalPodAutoscaler" in version "autoscaling/v2" I searched for similar errors, but found few things helpful. My k8s cluster is built on two virtualbox VMs, one as the master node and one as the worker node. Both with: ubuntu 22.04 docker version 20.10 k8s version 1.21.14 Here are the custom resources I got after running the first line. $ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.9.2/serving-crds.yaml customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev created customresourcedefinition.apiextensions.k8s.io/clusterdomainclaims.networking.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/domainmappings.serving.knative.dev created customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev created customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev created customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev created customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev created $ kubectl get crd NAME CREATED AT certificates.networking.internal.knative.dev 2023-03-03T01:09:49Z clusterdomainclaims.networking.internal.knative.dev 2023-03-03T01:09:49Z configurations.serving.knative.dev 2023-03-03T01:09:49Z domainmappings.serving.knative.dev 2023-03-03T01:09:49Z images.caching.internal.knative.dev 2023-03-03T01:09:50Z ingresses.networking.internal.knative.dev 2023-03-03T01:09:49Z metrics.autoscaling.internal.knative.dev 2023-03-03T01:09:50Z podautoscalers.autoscaling.internal.knative.dev 2023-03-03T01:09:50Z revisions.serving.knative.dev 2023-03-03T01:09:50Z routes.serving.knative.dev 2023-03-03T01:09:50Z serverlessservices.networking.internal.knative.dev 2023-03-03T01:09:50Z services.serving.knative.dev 2023-03-03T01:09:50Z All I have done are these two lines of commands $ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.9.2/serving-crds.yaml $ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.9.2/serving-core.yaml I am new to k8s custom resources and knative. Thanks in advance for any help. ### Response: I believe Knative 1.9 requires Kubernetes 1.23 or newer. One of the changes that happened in the last year or so was the removal of the v2beta2 autoscaling API and replacement with v2 API. 
I believe the new API is present in Kubernetes 1.22, but Knative 1.9 was tested with the community supported versions as of January, which includes 1.26, 1.25, 1.24 and possibly 1.23.
I followed the official instruction to installing knative serving on a self-built k8s cluster. But when running the second line kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.9.2/serving-core.yaml I got unable to recognize "https://github.com/knative/serving/releases/download/knative-v1.9.2/serving-core.yaml": no matches for kind "HorizontalPodAutoscaler" in version "autoscaling/v2" unable to recognize "https://github.com/knative/serving/releases/download/knative-v1.9.2/serving-core.yaml": no matches for kind "HorizontalPodAutoscaler" in version "autoscaling/v2" I searched for similar errors, but found few things helpful. My k8s cluster is built on two virtualbox VMs, one as the master node and one as the worker node. Both with: ubuntu 22.04 docker version 20.10 k8s version 1.21.14 Here are the custom resources I got after running the first line. $ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.9.2/serving-crds.yaml customresourcedefinition.apiextensions.k8s.io/certificates.networking.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/configurations.serving.knative.dev created customresourcedefinition.apiextensions.k8s.io/clusterdomainclaims.networking.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/domainmappings.serving.knative.dev created customresourcedefinition.apiextensions.k8s.io/ingresses.networking.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/metrics.autoscaling.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/podautoscalers.autoscaling.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/revisions.serving.knative.dev created customresourcedefinition.apiextensions.k8s.io/routes.serving.knative.dev created customresourcedefinition.apiextensions.k8s.io/serverlessservices.networking.internal.knative.dev created customresourcedefinition.apiextensions.k8s.io/services.serving.knative.dev created customresourcedefinition.apiextensions.k8s.io/images.caching.internal.knative.dev created $ kubectl get crd NAME CREATED AT certificates.networking.internal.knative.dev 2023-03-03T01:09:49Z clusterdomainclaims.networking.internal.knative.dev 2023-03-03T01:09:49Z configurations.serving.knative.dev 2023-03-03T01:09:49Z domainmappings.serving.knative.dev 2023-03-03T01:09:49Z images.caching.internal.knative.dev 2023-03-03T01:09:50Z ingresses.networking.internal.knative.dev 2023-03-03T01:09:49Z metrics.autoscaling.internal.knative.dev 2023-03-03T01:09:50Z podautoscalers.autoscaling.internal.knative.dev 2023-03-03T01:09:50Z revisions.serving.knative.dev 2023-03-03T01:09:50Z routes.serving.knative.dev 2023-03-03T01:09:50Z serverlessservices.networking.internal.knative.dev 2023-03-03T01:09:50Z services.serving.knative.dev 2023-03-03T01:09:50Z All I have done are these two lines of commands $ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.9.2/serving-crds.yaml $ kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.9.2/serving-core.yaml I am new to k8s custom resources and knative. Thanks in advance for any help.
I solved the issue by escaping quotes and double quotes of the SecRule in the configmap as follows: SecRule REQUEST_URI|ARGS|QUERY_STRING \"@contains attack\" \"id:100001,phase:1,t:lowercase,deny,status:403,msg:\'Attack Detected\'\"
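In context, the modsecurity-snippet from the question's ConfigMap then reads as below; this is just the original ConfigMap with the rule line replaced by the escaped version, nothing else changed.

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-ingress-nginx-controller
  namespace: ingress-basic
data:
  allow-snippet-annotations: "true"
  enable-modsecurity: "true"
  enable-owasp-modsecurity-crs: "true"
  modsecurity-snippet: |-
    SecRuleEngine On
    SecRequestBodyAccess On
    SecAuditLog /dev/stdout
    SecAuditLogFormat JSON
    SecAuditEngine RelevantOnly
    SecRule REQUEST_URI|ARGS|QUERY_STRING \"@contains attack\" \"id:100001,phase:1,t:lowercase,deny,status:403,msg:\'Attack Detected\'\"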
###Instruction: I created the following configMap for my NGINX ingress controller: apiVersion: v1 data: allow-snippet-annotations: "true" enable-modsecurity: "true" enable-owasp-modsecurity-crs: "true" modsecurity-snippet: |- SecRuleEngine On SecRequestBodyAccess On SecAuditLog /dev/stdout SecAuditLogFormat JSON SecAuditEngine RelevantOnly SecRule REQUEST_URI|ARGS|QUERY_STRING "@contains attack" "id:100001,phase:1,t:lowercase,deny,status:403,msg:'Attack Detected'" kind: ConfigMap metadata: annotations: meta.helm.sh/release-name: nginx-ingress meta.helm.sh/release-namespace: ingress-basic creationTimestamp: "2023-01-20T11:31:53Z" labels: app.kubernetes.io/component: controller app.kubernetes.io/instance: nginx-ingress app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx app.kubernetes.io/version: 1.5.1 helm.sh/chart: ingress-nginx-4.4.2 name: nginx-ingress-ingress-nginx-controller namespace: ingress-basic resourceVersion: "200257665" uid: e6ab9121-9a73-47e3-83ec-6c1fa19072ee I would expect that following SecRule SecRule REQUEST_URI|ARGS|QUERY_STRING "@contains attack" "id:100001,phase:1,t:lowercase,deny,status:403,msg:'Attack Detected'" would block any request containing the word attack in the URI or in the querystring, for example in: https://secrule.sample.com/api?task=attack But it doesn't. There is clearly something missing in the definition of the configMap of my NGINX ingress controller, but I don't understand what. Any clue? Thanks! I'd like to use ModSecurity with an NGINX Ingress Controller to block incoming calls that contain a given word in the querystring. ### Response: I solved the issue by escaping quotes and double quotes of the SecRule in the configmap as follows: SecRule REQUEST_URI|ARGS|QUERY_STRING \"@contains attack\" \"id:100001,phase:1,t:lowercase,deny,status:403,msg:\'Attack Detected\'\"
I created the following configMap for my NGINX ingress controller: apiVersion: v1 data: allow-snippet-annotations: "true" enable-modsecurity: "true" enable-owasp-modsecurity-crs: "true" modsecurity-snippet: |- SecRuleEngine On SecRequestBodyAccess On SecAuditLog /dev/stdout SecAuditLogFormat JSON SecAuditEngine RelevantOnly SecRule REQUEST_URI|ARGS|QUERY_STRING "@contains attack" "id:100001,phase:1,t:lowercase,deny,status:403,msg:'Attack Detected'" kind: ConfigMap metadata: annotations: meta.helm.sh/release-name: nginx-ingress meta.helm.sh/release-namespace: ingress-basic creationTimestamp: "2023-01-20T11:31:53Z" labels: app.kubernetes.io/component: controller app.kubernetes.io/instance: nginx-ingress app.kubernetes.io/managed-by: Helm app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx app.kubernetes.io/version: 1.5.1 helm.sh/chart: ingress-nginx-4.4.2 name: nginx-ingress-ingress-nginx-controller namespace: ingress-basic resourceVersion: "200257665" uid: e6ab9121-9a73-47e3-83ec-6c1fa19072ee I would expect that following SecRule SecRule REQUEST_URI|ARGS|QUERY_STRING "@contains attack" "id:100001,phase:1,t:lowercase,deny,status:403,msg:'Attack Detected'" would block any request containing the word attack in the URI or in the querystring, for example in: https://secrule.sample.com/api?task=attack But it doesn't. There is clearly something missing in the definition of the configMap of my NGINX ingress controller, but I don't understand what. Any clue? Thanks! I'd like to use ModSecurity with an NGINX Ingress Controller to block incoming calls that contain a given word in the querystring.
Executing minikube delete followed by minikube start solved the problem for me.
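In other words, a quick sketch of the two commands (note that minikube delete wipes the existing cluster state, so any workloads in it will need to be redeployed):

minikube delete
minikube start

Recreating the cluster generates fresh certificates, which is why it gets past the expired-certificate error from the question.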
###Instruction: I'm getting below error when I try to run the minikube after downloading its binary:- ❌ Exiting due to GUEST_START: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.22.3 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk stderr: [WARNING Swap]: running with swap on is not supported. Please disable swap [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' W0304 05:40:42.096000 3744 certs.go:489] WARNING: could not validate bounds for certificate apiserver-kubelet-client: the certificate has expired: NotBefore: 2020-06-29 07:35:45 +0000 UTC, NotAfter: 2022-12-10 12:46:24 +0000 UTC error execution phase certs/apiserver-kubelet-client: [certs] certificate apiserver-kubelet-client not signed by CA certificate ca: x509: certificate has expired or is not yet valid: current time 2023-03-04T05:40:42Z is after 2022-12-10T12:46:24Z To see the stack trace of this error execute with --v=5 or higher I'm referring its official documentation here It gives a warning that kubelet service is not enabled and suggested to use 'systemctl enable kubelet.service'. I tried below commands but no idea how run the minikube on MacOS:- (base) ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ~ » systemctl enable kubelet.service 80 ↵ vinod827@Vinods-MacBook-Pro zsh: command not found: systemctl (base) ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ~ » launchctl enable kubelet.service 127 ↵ vinod827@Vinods-MacBook-Pro Unrecognized target specifier. Usage: launchctl enable <service-target> <service-target> takes a form of <domain-target>/<service-id>. Please refer to `man launchctl` for explanation of the <domain-target> specifiers. (base) Any idea what could be the problem here? ### Response: Executing both minikube delete and minikube start has solved the problem for me
I'm getting below error when I try to run the minikube after downloading its binary:- ❌ Exiting due to GUEST_START: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1 stdout: [init] Using Kubernetes version: v1.22.3 [preflight] Running pre-flight checks [preflight] Pulling images required for setting up a Kubernetes cluster [preflight] This might take a minute or two, depending on the speed of your internet connection [preflight] You can also perform this action in beforehand using 'kubeadm config images pull' [certs] Using certificateDir folder "/var/lib/minikube/certs" [certs] Using existing ca certificate authority [certs] Using existing apiserver certificate and key on disk stderr: [WARNING Swap]: running with swap on is not supported. Please disable swap [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service' W0304 05:40:42.096000 3744 certs.go:489] WARNING: could not validate bounds for certificate apiserver-kubelet-client: the certificate has expired: NotBefore: 2020-06-29 07:35:45 +0000 UTC, NotAfter: 2022-12-10 12:46:24 +0000 UTC error execution phase certs/apiserver-kubelet-client: [certs] certificate apiserver-kubelet-client not signed by CA certificate ca: x509: certificate has expired or is not yet valid: current time 2023-03-04T05:40:42Z is after 2022-12-10T12:46:24Z To see the stack trace of this error execute with --v=5 or higher I'm referring its official documentation here It gives a warning that kubelet service is not enabled and suggested to use 'systemctl enable kubelet.service'. I tried below commands but no idea how run the minikube on MacOS:- (base) ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ~ » systemctl enable kubelet.service 80 ↵ vinod827@Vinods-MacBook-Pro zsh: command not found: systemctl (base) ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ~ » launchctl enable kubelet.service 127 ↵ vinod827@Vinods-MacBook-Pro Unrecognized target specifier. Usage: launchctl enable <service-target> <service-target> takes a form of <domain-target>/<service-id>. Please refer to `man launchctl` for explanation of the <domain-target> specifiers. (base) Any idea what could be the problem here?
Is it necessary to enable Vertical or Horizontal Pod Autoscaling in GKE Autopilot clusters? Not necessary, but the point of Autopilot is that nodes scale without you having to worry about them, so you can just focus on HPA & VPA. For a GKE cluster configured with Autopilot, does it make sense to also enable autoscaling? I think it would be beneficial to enable autoscaling like HPA & VPA when using GKE Autopilot mode. You can also go with HPA and a max Pod limit to scale. VPA is also good for creating smaller Pods when there is less traffic or resource consumption, which is a good factor in reducing cost as well. In the document Compare GKE Autopilot and Standard, it says the autoscalers are optional. Yes, they are optional, but it is good to start with HPA scaling so that any sudden traffic spike can be handled. Also node auto-provisioning says: Yes, with an Autopilot cluster you don't have to worry about the infra part like node pool setup, size, node pool scaling etc. With Autopilot, you just worry about your application and its scaling with HPA & VPA. HPA scales the Pod replicas as per your settings, while nodes are auto-scaled by Google and attached to your GKE cluster when required, without you setting up the cluster (node) autoscaler. With GKE Standard, on the other hand, you have to worry about node pool sizing and scaling.
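As an illustration of the VPA side mentioned above, a hedged sketch of a VerticalPodAutoscaler object (the Deployment name is a placeholder; on GKE the vertical Pod autoscaling feature has to be available on the cluster):

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # placeholder Deployment name
  updatePolicy:
    updateMode: "Auto"    # let VPA apply recommended requests automatically

With updateMode set to "Off" it only produces recommendations, which is a safer first step while you are still observing the workload.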
###Instruction: For a GKE cluster configured with Autopilot, does it make sense to also enable autoscaling? In the document Compare GKE Autopilot and Standard, it says the auto scaler are optional. Also node auto-provisioning says: With Autopilot clusters, you don't need to manually provision nodes or manage node pools because node pools are automatically provisioned through node auto-provisioning. With node auto-provisioning, nodes are automatically scaled to meet the requirements of your workloads. EDIT: I am confused between the concepts of autoscaling and auto node provisioning. ### Response: Is it necessary to enable Vertical or Horizontal Pod Autoscaling in GKE Autopilot Clusters? Not necessary, but if using autopilot it's about leveraging the option to scale nodes without worry and just you focus on HPA & VPA. For a GKE cluster is configured with Autopilot, does it make sense to also enable autoscaling? I think would be beneficial to you to enable Autoscaling like HPA & VPA with GKE autopilot mode if using. You can also go with HPA and Max POD limit to scale. VPA would be also good to create small-size POD when there is less traffic or resource consumption there, which will be a good factor in reducing cost also. In the document Compare GKE Autopilot and Standard, it says the autoscalers are optional. Yes, it's optional but would be good to go initially with HPA scaling so if there is any sudden traffic spike will be able to handle it. Also node auto-provisioning says: Yes with Autopilot cluster you don't have to worry about the Ifra part like Node pool setup, size, Node pool's scaling etc. With autopilot, you just worry about your application and its scaling with HPA & VPA. HPA scales the replicas of POD as per your setting while if required Node auto-scaled by Google and get attached to your GKE cluster without setting up cluster(node) autoscaler by you. While with GKE standard you have to worry about the size of Node pool scaling etc.
For a GKE cluster configured with Autopilot, does it make sense to also enable autoscaling? In the document Compare GKE Autopilot and Standard, it says the auto scaler are optional. Also node auto-provisioning says: With Autopilot clusters, you don't need to manually provision nodes or manage node pools because node pools are automatically provisioned through node auto-provisioning. With node auto-provisioning, nodes are automatically scaled to meet the requirements of your workloads. EDIT: I am confused between the concepts of autoscaling and auto node provisioning.
Damn, Solved :) --- apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: lb-pool namespace: metallb-system spec: ipAddressPools: - lb-pool Seems that L2Advertisement was missing according to : https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/baremetal.md
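If it helps, a quick way to double-check that both objects exist (assuming the MetalLB 0.13+ CRDs used above):

kubectl -n metallb-system get ipaddresspools.metallb.io,l2advertisements.metallb.io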
###Instruction: So, here is my current setup My experience is mostly on openshift, but I'm trying to get familiar with kubernetes... and I'm a bit noob in KS8 :) kubernets + callico + external storage(nfs) + metallb + ingress-nginx kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master01 Ready control-plane 3d14h v1.26.2 192.168.50.15 <none> Ubuntu 22.04.2 LTS 5.15.0-67-generic cri-o://1.24.4 master02 Ready control-plane 2d15h v1.26.2 192.168.50.16 <none> Ubuntu 22.04.2 LTS 5.15.0-67-generic cri-o://1.24.4 worker-01 Ready worker 2d14h v1.26.2 192.168.50.105 <none> Ubuntu 22.04.2 LTS 5.15.0-67-generic cri-o://1.24.4 worker-02 Ready worker 2d13h v1.26.2 192.168.50.106 <none> Ubuntu 22.04.2 LTS 5.15.0-67-generic cri-o://1.24.4 kubectl get pods -n metallb-system -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES controller-79d5899cb-hg4lv 1/1 Running 0 23m 10.30.0.27 worker-02 <none> <none> speaker-lvpbn 1/1 Running 0 21m 192.168.50.106 worker-02 <none> <none> speaker-rxcvb 1/1 Running 0 21m 192.168.50.105 worker-01 <none> <none> metallb has been config with this ippool apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: lb-pool spec: addresses: - 192.168.50.115-192.168.50.118 kubectl get all -n ingress-nginx NAME READY STATUS RESTARTS AGE pod/ingress-nginx-controller-c69664497-z84b8 1/1 Running 0 12h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/ingress-nginx-controller LoadBalancer 10.108.69.42 192.168.50.115 80:32481/TCP,443:32137/TCP,8443:30940/TCP 83m service/ingress-nginx-controller-admission ClusterIP 10.97.240.138 <none> 443/TCP 12h NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/ingress-nginx-controller 1/1 1 1 12h NAME DESIRED CURRENT READY AGE replicaset.apps/ingress-nginx-controller-c69664497 1 1 1 12h kubectl create deployment httpd24 --image=docker.io/library/httpd:2.4.55 kubectl expose deployment/httpd24 --port 80 create ingress:: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: httpd24-ingress namespace: default spec: ingressClassName: nginx rules: - host: http24-kube.docker-containers.local http: paths: - path: / pathType: Prefix backend: service: name: httpd24 port: number: 80 kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE httpd24-ingress nginx http24-kube.docker-containers.local 192.168.50.115 80 58m So, from inside cluster I can execute w/o errors curl -s http://http24-kube.docker-containers.local However, from outside cluster ping to 192.158.50.115 or ping to http24-kube.docker-containers.local got timeout. Computer from where I try to connect to http24-kube.docker-containers.local has access to subnet 192.168.50.0/25 I think it may be related to nginx-ingress...but have zero experience with .... ======== C:\Users\AZ_fjonnas>nslookup http24-kube.docker-containers.local Name: http24-kube.docker-containers.local Address: 192.168.50.115 Now, from the one of the masternodes root@master01:~# curl -s 'http://http24-kube.docker-containers.local' <html><body><h1>It works!</h1></body></html> root@master01:~# So, cluster master nodes resolve the name, moreover can access the httpd24 pod with ingress IP But, windows machine can't access at all ingress IP(192.168.50.111) port 80 That's why I think is something related to how nginx-ingress works... 
:( ALL nodes belong to same subnet: 192.168.50.0/25 ### Response: Damn, Solved :) --- apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: lb-pool namespace: metallb-system spec: ipAddressPools: - lb-pool Seems that L2Advertisement was missing according to : https://github.com/kubernetes/ingress-nginx/blob/main/docs/deploy/baremetal.md
So, here is my current setup My experience is mostly on openshift, but I'm trying to get familiar with kubernetes... and I'm a bit noob in KS8 :) kubernets + callico + external storage(nfs) + metallb + ingress-nginx kubectl get nodes -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master01 Ready control-plane 3d14h v1.26.2 192.168.50.15 <none> Ubuntu 22.04.2 LTS 5.15.0-67-generic cri-o://1.24.4 master02 Ready control-plane 2d15h v1.26.2 192.168.50.16 <none> Ubuntu 22.04.2 LTS 5.15.0-67-generic cri-o://1.24.4 worker-01 Ready worker 2d14h v1.26.2 192.168.50.105 <none> Ubuntu 22.04.2 LTS 5.15.0-67-generic cri-o://1.24.4 worker-02 Ready worker 2d13h v1.26.2 192.168.50.106 <none> Ubuntu 22.04.2 LTS 5.15.0-67-generic cri-o://1.24.4 kubectl get pods -n metallb-system -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES controller-79d5899cb-hg4lv 1/1 Running 0 23m 10.30.0.27 worker-02 <none> <none> speaker-lvpbn 1/1 Running 0 21m 192.168.50.106 worker-02 <none> <none> speaker-rxcvb 1/1 Running 0 21m 192.168.50.105 worker-01 <none> <none> metallb has been config with this ippool apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: lb-pool spec: addresses: - 192.168.50.115-192.168.50.118 kubectl get all -n ingress-nginx NAME READY STATUS RESTARTS AGE pod/ingress-nginx-controller-c69664497-z84b8 1/1 Running 0 12h NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/ingress-nginx-controller LoadBalancer 10.108.69.42 192.168.50.115 80:32481/TCP,443:32137/TCP,8443:30940/TCP 83m service/ingress-nginx-controller-admission ClusterIP 10.97.240.138 <none> 443/TCP 12h NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/ingress-nginx-controller 1/1 1 1 12h NAME DESIRED CURRENT READY AGE replicaset.apps/ingress-nginx-controller-c69664497 1 1 1 12h kubectl create deployment httpd24 --image=docker.io/library/httpd:2.4.55 kubectl expose deployment/httpd24 --port 80 create ingress:: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: httpd24-ingress namespace: default spec: ingressClassName: nginx rules: - host: http24-kube.docker-containers.local http: paths: - path: / pathType: Prefix backend: service: name: httpd24 port: number: 80 kubectl get ingress NAME CLASS HOSTS ADDRESS PORTS AGE httpd24-ingress nginx http24-kube.docker-containers.local 192.168.50.115 80 58m So, from inside cluster I can execute w/o errors curl -s http://http24-kube.docker-containers.local However, from outside cluster ping to 192.158.50.115 or ping to http24-kube.docker-containers.local got timeout. Computer from where I try to connect to http24-kube.docker-containers.local has access to subnet 192.168.50.0/25 I think it may be related to nginx-ingress...but have zero experience with .... ======== C:\Users\AZ_fjonnas>nslookup http24-kube.docker-containers.local Name: http24-kube.docker-containers.local Address: 192.168.50.115 Now, from the one of the masternodes root@master01:~# curl -s 'http://http24-kube.docker-containers.local' <html><body><h1>It works!</h1></body></html> root@master01:~# So, cluster master nodes resolve the name, moreover can access the httpd24 pod with ingress IP But, windows machine can't access at all ingress IP(192.168.50.111) port 80 That's why I think is something related to how nginx-ingress works... :( ALL nodes belong to same subnet: 192.168.50.0/25
It's a type assertion, as @peter suggested in the comment.
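In plain Go terms, a small self-contained sketch of what the .(SomeType) syntax does; the Animal/Dog types here are made up purely for illustration:

package main

import "fmt"

type Animal interface {
	Sound() string
}

type Dog struct{}

func (Dog) Sound() string { return "woof" }

func main() {
	var a Animal = Dog{}

	// a.(Dog) asserts that the interface value a holds a concrete Dog;
	// this is the same thing Discovery().(*discoveryfake.FakeDiscovery)
	// does with the DiscoveryInterface return value.
	d := a.(Dog)
	fmt.Println(d.Sound())

	// The comma-ok form reports false instead of panicking when the
	// dynamic type does not match.
	if _, ok := a.(Dog); ok {
		fmt.Println("assertion succeeded")
	}
}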
###Instruction: While I'm reading how to test on k8s with the fake client on this link, I noticed this function, which, IIUC, assigns a faked server version to the faked k8s cluster. k8s.clientset.Discovery().(*discoveryfake.FakeDiscovery).FakedServerVersion = &version.Info{ Major: expectedMajor, Minor: expectedMinor, } I'm trying to understand what this *discoveryfake.FakeDiscovery code means. k8s.clientset.Discovery() returns DiscoveryInterface (defined here), but I cannot find *discoveryfake field. What does Discovery().(*discoveryfake.FakeDiscovery) mean? ### Response: It's a type assertion, as @peter suggested in the comment.
While I'm reading how to test on k8s with the fake client on this link, I noticed this function, which, IIUC, assigns a faked server version to the faked k8s cluster. k8s.clientset.Discovery().(*discoveryfake.FakeDiscovery).FakedServerVersion = &version.Info{ Major: expectedMajor, Minor: expectedMinor, } I'm trying to understand what this *discoveryfake.FakeDiscovery code means. k8s.clientset.Discovery() returns DiscoveryInterface (defined here), but I cannot find *discoveryfake field. What does Discovery().(*discoveryfake.FakeDiscovery) mean?
You can use a scaling policy with the HPA. Example: scaleUp: stabilizationWindowSeconds: 0 policies: - type: Pods value: 3 periodSeconds: 5 selectPolicy: Max It will add 3 Pods every 5 seconds until the HPA metric becomes steady. Ref doc: https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/853-configurable-hpa-scale-velocity/README.md
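To put that snippet in context, a hedged sketch of a complete autoscaling/v2 HPA showing where the scaleUp behaviour lives, with min/max matching the 3-to-9 replica range from the question (the Deployment name and CPU target are placeholders):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa            # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # placeholder Deployment
  minReplicas: 3
  maxReplicas: 9
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
      - type: Pods
        value: 3
        periodSeconds: 5
      selectPolicy: Max
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

Note that the policy only caps how fast scaling happens; it does not force the replica count to always land on a multiple of 3.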
###Instruction: We want our deployment to have 3/6/9/etc replicas at all times to have an even AZ spread. Is there a way to achieve this via HPA config ? ### Response: You can use the Scaling policy with the HPA Example scaleUp: stabilizationWindowSeconds: 0 policies: - type: Pods value: 3 periodSeconds: 5 selectPolicy: Max it will add the 3 PODs every 5 second until HPA metrics become steady. Ref doc : https://github.com/kubernetes/enhancements/blob/master/keps/sig-autoscaling/853-configurable-hpa-scale-velocity/README.md
We want our deployment to have 3/6/9/etc replicas at all times to have an even AZ spread. Is there a way to achieve this via HPA config ?
Try this: kubectl create secret generic airflow-postgresql -n airflow --from-literal=password='postgres' --dry-run=client -o yaml | kubectl apply -f - I think the error is due to the helm create the airflow-postgresql's secrets using "postgres-password" as the key while the helm update expects "password" as the key. Or: https://airflow.apache.org/docs/helm-chart/stable/release_notes.html#airflow-helm-chart-1-8-0-2023-02-06 Airflow Helm Chart 1.8.0 (2023-02-06) Significant Changes bitnami/postgresql subchart updated to 12.1.9 (#29071) The version of postgresql installed is still version 11. If you are upgrading an existing helm release with the built-in postgres database, you will either need to delete your release and reinstall fresh, or manually delete these 2 objects: kubectl delete secret {RELEASE_NAME}-postgresql kubectl delete statefulset {RELEASE_NAME}-postgresql As a reminder, it is recommended to set up an external database in production. This version of the chart uses different variable names for setting usernames and passwords in the postgres database. postgresql.auth.enablePostgresUser is used to determine if the “postgres” admin account will be created. postgresql.auth.postgresPassword sets the password for the “postgres” user. postgresql.auth.username and postrgesql.auth.password are used to set credentials for a non-admin account if desired. postgresql.postgresqlUsername and postgresql.postresqlPassword, which were used in the previous version of the chart, are no longer used. Users will need to make those changes in their values files if they are changing the Postgres configuration. Previously the subchart version was 10.5.3.
###Instruction: I am getting the following error on installing a custom docker image as a revision. I am not able to figure out the reason. So far I have tried adding secrets with the password and it didn't work helm upgrade --install airflow apache-airflow/airflow -n airflow -f values.yaml --debug history.go:56: [debug] getting history for release airflow upgrade.go:144: [debug] preparing upgrade for airflow Error: UPGRADE FAILED: execution error at (airflow/charts/postgresql/templates/secrets.yaml:20:15): PASSWORDS ERROR: The secret "airflow-postgresql" does not contain the key "password" ### Response: Try this: kubectl create secret generic airflow-postgresql -n airflow --from-literal=password='postgres' --dry-run=client -o yaml | kubectl apply -f - I think the error is due to the helm create the airflow-postgresql's secrets using "postgres-password" as the key while the helm update expects "password" as the key. Or: https://airflow.apache.org/docs/helm-chart/stable/release_notes.html#airflow-helm-chart-1-8-0-2023-02-06 Airflow Helm Chart 1.8.0 (2023-02-06) Significant Changes bitnami/postgresql subchart updated to 12.1.9 (#29071) The version of postgresql installed is still version 11. If you are upgrading an existing helm release with the built-in postgres database, you will either need to delete your release and reinstall fresh, or manually delete these 2 objects: kubectl delete secret {RELEASE_NAME}-postgresql kubectl delete statefulset {RELEASE_NAME}-postgresql As a reminder, it is recommended to set up an external database in production. This version of the chart uses different variable names for setting usernames and passwords in the postgres database. postgresql.auth.enablePostgresUser is used to determine if the “postgres” admin account will be created. postgresql.auth.postgresPassword sets the password for the “postgres” user. postgresql.auth.username and postrgesql.auth.password are used to set credentials for a non-admin account if desired. postgresql.postgresqlUsername and postgresql.postresqlPassword, which were used in the previous version of the chart, are no longer used. Users will need to make those changes in their values files if they are changing the Postgres configuration. Previously the subchart version was 10.5.3.
I am getting the following error on installing a custom docker image as a revision. I am not able to figure out the reason. So far I have tried adding secrets with the password and it didn't work helm upgrade --install airflow apache-airflow/airflow -n airflow -f values.yaml --debug history.go:56: [debug] getting history for release airflow upgrade.go:144: [debug] preparing upgrade for airflow Error: UPGRADE FAILED: execution error at (airflow/charts/postgresql/templates/secrets.yaml:20:15): PASSWORDS ERROR: The secret "airflow-postgresql" does not contain the key "password"
You can mount your local kubeconfig file into the Docker container you are trying to run; I am not sure the KUBECONFIG environment variable approach you followed is supported here. Try something like: docker run --publish-all --name kubeview -v ./config:/.kube/config ghcr.io/benc-uk/kubeview:0.1.31 ./config - the path of your ~/.kube/config on the local system /.kube/config - the path where your local file will be mounted inside the container, so when the container runs your local file will be available at that path I tried with fake values, which it was not able to parse, but it did pick up the file: 2023/03/12 19:28:04 ### Kubeview v0.0.0 starting... 2023/03/12 19:28:04 ### Connecting to Kubernetes... 2023/03/12 19:28:04 ### Creating client with config file: /.kube/config panic: error loading config file "/.kube/config": couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } goroutine 1 [running]: main.main() /build/cmd/server/main.go:60 +0x6a5
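On a Mac, a variant of that command with an absolute host path (since the question's kubeconfig lives under the home directory) would look roughly like this; the image tag is the one from the question:

docker run --publish-all --name kubeview \
  -v "$HOME/.kube/config:/.kube/config:ro" \
  ghcr.io/benc-uk/kubeview:0.1.31

Using an absolute path avoids any ambiguity about how the Docker CLI resolves relative bind-mount sources.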
###Instruction: Based on the KubeView README, I tried to run KubeView using the container provided here. I run: $ docker run --publish-all --name kubeview ghcr.io/benc-uk/kubeview:0.1.31 I get the following output: 2023/03/12 18:06:45 ### Kubeview v0.1.31 starting... 2023/03/12 18:06:45 ### Connecting to Kubernetes... 2023/03/12 18:06:45 ### Creating client with config file: /.kube/config panic: stat /.kube/config: no such file or directory goroutine 1 [running]: main.main() /build/cmd/server/main.go:60 +0x6a5 I can see that the problem is that the tool is looking for the kubeconfig file in /.kube/config. It can't find it because mine is in my home directory, ~/.kube/config/ I tried to pass an environment variable like this: $ docker run --publish-all --name kubeview -e KUBECONFIG=/Users/<MY_USERNAME>/.kube/config ghcr.io/benc-uk/kubeview:latest It didn't work. Has anyone been able to run KubeView as a container? I'm on a Mac. ### Response: You can mount your local kubeconfig file to the Docker image that you are trying, not sure if the way you follow Env variable option is suggested anywhere or not Try something like : docker run --publish-all --name kubeview -v ./config:/.kube/config ghcr.io/benc-uk/kubeview:0.1.31 ./config - will the path of your ~/.kube/config/ local system /.kube/config - path where your local file will get set in the container, so when container will run your local file will be available at that path Tried with fake values, which not able to parse but it got the file 2023/03/12 19:28:04 ### Kubeview v0.0.0 starting... 2023/03/12 19:28:04 ### Connecting to Kubernetes... 2023/03/12 19:28:04 ### Creating client with config file: /.kube/config panic: error loading config file "/.kube/config": couldn't get version/kind; json parse error: json: cannot unmarshal string into Go value of type struct { APIVersion string "json:\"apiVersion,omitempty\""; Kind string "json:\"kind,omitempty\"" } goroutine 1 [running]: main.main() /build/cmd/server/main.go:60 +0x6a5
Based on the KubeView README, I tried to run KubeView using the container provided here. I run: $ docker run --publish-all --name kubeview ghcr.io/benc-uk/kubeview:0.1.31 I get the following output: 2023/03/12 18:06:45 ### Kubeview v0.1.31 starting... 2023/03/12 18:06:45 ### Connecting to Kubernetes... 2023/03/12 18:06:45 ### Creating client with config file: /.kube/config panic: stat /.kube/config: no such file or directory goroutine 1 [running]: main.main() /build/cmd/server/main.go:60 +0x6a5 I can see that the problem is that the tool is looking for the kubeconfig file in /.kube/config. It can't find it because mine is in my home directory, ~/.kube/config/ I tried to pass an environment variable like this: $ docker run --publish-all --name kubeview -e KUBECONFIG=/Users/<MY_USERNAME>/.kube/config ghcr.io/benc-uk/kubeview:latest It didn't work. Has anyone been able to run KubeView as a container? I'm on a Mac.
It might be perfectly possible to mount your own secrets into the /var/run/secrets or /run/secrets path, even in the scenario where Kubernetes is concurrently using that path for system-related tasks. This commonly happens when, for instance, you are running a cluster with 2+ nodes and Kubernetes starts projecting its system objects like secrets, tokens, certificates, etc. into /var/run/secrets/kubernetes.io/serviceaccount/. I don't know the exact specifics of your case, but you might want to look into subPath. When your application expects secrets to be available in /run/secrets, you can do something like: ... containers: - name: myContainer image: "someImage:1.0.0" volumeMounts: - name: mySecretVolumeName mountPath: "/run/secrets/webhook_secret" readOnly: true subPath: webhook-secret volumes: - name: mySecretVolumeName secret: secretName: secrets items: - key: webhook-secret path: webhook-secret ... People started bumping into the issue you're experiencing (if I understand you correctly) a few years ago: https://github.com/kubernetes/kubernetes/issues/65835 Hope this helps you out!
###Instruction: I do have an existing application that used docker and docker compose so far. I want to operate this app in a Kubernetes cluster. Shouldn't be a big deal, right? But so far I failed because of the used secrets. The application expects a secret to be present in a file at: /run/secrets/webhook_secret, where webhook_secret is the file containing the secret. I created a secret with kubectl like this: kubectl create secret generic webhook-secret \ --from-literal=webhook_secret=123 \ --namespace my-app I tried to mount the secret in the manifest with... ... volumeMounts: - name: secrets mountPath: "/run/secrets" readOnly: true volumes: - name: secrets secret: secretName: webhook-secret ... But then the pod is not able to start, as Kubernetes also tries to mount the same directory and so issues with run/secrets/kubernetes.io occur... Warning Failed 3m1s (x4 over 3m37s) kubelet Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/var/lib/kubelet/pods/e79f634a-2abe-4c47-ae50-c4beb5b66ae6/volumes/kubernetes.io~projected/kube-api-access-grl42" to rootfs at "/var/run/secrets/kubernetes.io/serviceaccount": mkdir /run/containerd/io.containerd.runtime.v2.task/k8s.io/my-app-container/rootfs/run/secrets/kubernetes.io: read-only file system: unknown Any idea how I can provide the secret as required by the application? Regards and thanks, Thomas ### Response: It might be perfectly possible to mount your own secrets into the /var/run/secrets or /run/secrets path. Even in the scenario where Kuberbetes is concurrently using that path for system-related tasks. This commonly happens when, for instance, you are running a cluster with 2+ nodes and Kubernetes starts projecting it's system objects like secrets, tokens, certifates, etc. into /var/run/secrets/kubernetes.io/serviceaccount/. I don't know the exact specifics of your case, but you might want to look into subPath. When your application expects secrets to be available in /run/secrets, you can do so something like: ... containers: - name: myContainer image: "someImage:1.0.0" volumeMounts: - name: mySecretVolumeName mountPath: "/run/secrets/webhook_secret" readOnly: true subPath: webhook-secret volumes: - name: mySecretVolumeName secret: secretName: secrets items: - key: webhook-secret path: webhook-secret ... People started bumping into the issue you're experiencing (if I understand you correctly) a few years ago: https://github.com/kubernetes/kubernetes/issues/65835 Hope this helps you out!
I do have an existing application that used docker and docker compose so far. I want to operate this app in a Kubernetes cluster. Shouldn't be a big deal, right? But so far I failed because of the used secrets. The application expects a secret to be present in a file at: /run/secrets/webhook_secret, where webhook_secret is the file containing the secret. I created a secret with kubectl like this: kubectl create secret generic webhook-secret \ --from-literal=webhook_secret=123 \ --namespace my-app I tried to mount the secret in the manifest with... ... volumeMounts: - name: secrets mountPath: "/run/secrets" readOnly: true volumes: - name: secrets secret: secretName: webhook-secret ... But then the pod is not able to start, as Kubernetes also tries to mount the same directory and so issues with run/secrets/kubernetes.io occur... Warning Failed 3m1s (x4 over 3m37s) kubelet Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/var/lib/kubelet/pods/e79f634a-2abe-4c47-ae50-c4beb5b66ae6/volumes/kubernetes.io~projected/kube-api-access-grl42" to rootfs at "/var/run/secrets/kubernetes.io/serviceaccount": mkdir /run/containerd/io.containerd.runtime.v2.task/k8s.io/my-app-container/rootfs/run/secrets/kubernetes.io: read-only file system: unknown Any idea how I can provide the secret as required by the application? Regards and thanks, Thomas
You can use K8s network policies to block the traffic across namespaces or services. With a network policy you will be able to restrict traffic across namespaces and Pods based on labels, selectors etc. With a network policy the service will still resolve its IP in DNS, but the network policy won't allow the connection to reach the end service. Example: allow traffic from some Pods in another namespace kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-all-ns-db spec: podSelector: matchLabels: app: web ingress: - from: - namespaceSelector: #All pods in namespaces with label team=operations matchLabels: team: operations podSelector: #Chooses pods with app=db matchLabels: app: db Here is a list of policies you can refer to: https://github.com/ahmetb/kubernetes-network-policy-recipes
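One thing to keep in mind: Pods that are not selected by any policy stay wide open, so a default-deny ingress policy for the namespace is a common companion to targeted allow rules. A minimal sketch, assuming the namespace is default:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress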
###Instruction: I can resolve a DNS name on the internal kubernetes DNS with a value like: http://serviceNameExampleAbc.default.svc.cluster.local where serviceExampleAbc relates to the IP of the service by name serviceExampleAbc... However, how can I instead only resolve this to the service IP only if it has a specific label to it? For example http://serviceNameExampleAbc.servicesWithXyzLabelOnly.default.svc.cluster.local would only resolve to serviceNameExampleAbc's IP if it has a label defined for it like below: apiVersion: v1 kind: Service metadata: name: serviceNameExampleAbc labels: xyz: abcxyzexmaple ... ### Response: You can use the K8s network polcies to block the traffic across the namespace or services. So with network policy, you will be able to restrict the traffic across the namespaces, PODs based on the labels, selectors etc. With network policy service will be able to resolve the IP in DNS but the Network policy won't allow further to connect to end service. Example Allow traffic from some PODs in another namespace kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: web-allow-all-ns-db spec: podSelector: matchLabels: app: web ingress: - from: - namespaceSelector: #All pods in namespace with label app=db-ns matchLabels: team: operations podSelector: #Chooses pods with app=db matchLabels: app: db here is the list of policies you can refer : https://github.com/ahmetb/kubernetes-network-policy-recipes
I can resolve a DNS name on the internal kubernetes DNS with a value like: http://serviceNameExampleAbc.default.svc.cluster.local where serviceExampleAbc relates to the IP of the service by name serviceExampleAbc... However, how can I instead only resolve this to the service IP only if it has a specific label to it? For example http://serviceNameExampleAbc.servicesWithXyzLabelOnly.default.svc.cluster.local would only resolve to serviceNameExampleAbc's IP if it has a label defined for it like below: apiVersion: v1 kind: Service metadata: name: serviceNameExampleAbc labels: xyz: abcxyzexmaple ...
Nginx Ingress does not support TCP or UDP services directly. You can follow this guide to set up & expose the TCP service: https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/ Here is the YAML config file I have used for RabbitMQ: --- apiVersion: v1 kind: ConfigMap metadata: name: tcp-services namespace: ingress-nginx data: 61616: "default/artemis-service:61616" Create the tcp-services ConfigMap in the ingress-nginx namespace, or wherever your controller is installed. Add the port to the Nginx service: - name: proxied-tcp-61616 port: 61616 targetPort: 61616 protocol: TCP You will then be able to connect to your broker from outside with NginxIP:61616. Note if you are very new to K8s: if you are running your application inside the same Kubernetes cluster you don't need to expose the port; you can use the service name for internal communication & connection, i.e. artemis-service:61616, and K8s will manage DNS resolution etc.
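One assumption worth checking in your install: the stock ingress-nginx controller only reads that ConfigMap when it is started with the --tcp-services-configmap flag. A rough sketch of how that looks in the controller Deployment args:

args:
  - /nginx-ingress-controller
  - --tcp-services-configmap=ingress-nginx/tcp-services

If the controller was installed via the Helm chart, the same thing may be exposed through the chart's tcp: values section instead of editing the Deployment by hand.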
###Instruction: I have a k8 deployment (using minikube) for ActiveMQ-Artemis. In this I'm exposing ports for broker(61616) and console(8161) both. Pointing the ports to service. It is working fine. and also I configured an Ingress Controller for the console. Service.yaml kind: Service metadata: name: artemis-service spec: type: ClusterIP ports: - port: 8161 name: http-console protocol: TCP targetPort: 8161 - port: 61616 name: netty-connector protocol: TCP targetPort: 61616 selector: app: artemis Ingress.yaml kind: Ingress metadata: name: broker-ingress labels: name: broker-ingress spec: ingressClassName: nginx rules: - host: artemis.broker.com http: paths: - pathType: Prefix path: "/" backend: service: name: artemis-service port: number: 8161 I can access the activemq console when I hit http://artemis.broker.com And now I want to expose the TCP port (61616) using NGINX Controller through which I can publish/consume messages to ActiveMQ queue using a TCP URL. I found TransportServer in Nginx Controller to expose TCP. https://docs.nginx.com/nginx-ingress-controller/configuration/transportserver-resource/ Can someone help me how to configure this TransportServer. PS - I'm new to kubernetes. ### Response: Nginx Ingress does not support TCP or UDP services You can follow this guide the setup & expose the TCP service : https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/ Here is my YAML config file for RabbitMQ i have used --- apiVersion: v1 kind: ConfigMap metadata: name: tcp-services namespace: ingress-nginx data: 61616: "default/artemis-service:61616" Make tcp-service config into ingress-nginx namespace or where your controller is installed. Add the Port to Nginx service - name: proxied-tcp-61616 port: 61616 targetPort: 61616 protocol: TCP You will be able to connect your Broker with NginxIP:61616 now from outside. Note if you are very new to K8s: If you are running your application inside of the same Kubernetes cluster you don't need to expose the Port, you can use the service name for internal communication & connection. artemis-service:61616 rest will K8s will manage DNS resolution etc.
I have a k8 deployment (using minikube) for ActiveMQ-Artemis. In this I'm exposing ports for broker(61616) and console(8161) both. Pointing the ports to service. It is working fine. and also I configured an Ingress Controller for the console. Service.yaml kind: Service metadata: name: artemis-service spec: type: ClusterIP ports: - port: 8161 name: http-console protocol: TCP targetPort: 8161 - port: 61616 name: netty-connector protocol: TCP targetPort: 61616 selector: app: artemis Ingress.yaml kind: Ingress metadata: name: broker-ingress labels: name: broker-ingress spec: ingressClassName: nginx rules: - host: artemis.broker.com http: paths: - pathType: Prefix path: "/" backend: service: name: artemis-service port: number: 8161 I can access the activemq console when I hit http://artemis.broker.com And now I want to expose the TCP port (61616) using NGINX Controller through which I can publish/consume messages to ActiveMQ queue using a TCP URL. I found TransportServer in Nginx Controller to expose TCP. https://docs.nginx.com/nginx-ingress-controller/configuration/transportserver-resource/ Can someone help me how to configure this TransportServer. PS - I'm new to kubernetes.
Yes, I have encountered the issue multiple times. You definitely want to keep the memory limit for the k8 to avoid the noisy neighbour problems. The possible solutions you have mentioned are right. Monitoring and load testing are a must to arrive at the number. Along with these, I used the profiling of Java processes to see how GC is getting triggered and whether the memory usage should remain the same or increase with the increase of load. Profiling is a very powerful tool to provide some insights into suboptimal usage of data structures as well. What to profile While doing the Java profiling, you need to check What's the Eden and old-gen usage How often full GC is running, the memory utilisation will increase and decrease after the full GC. See the GC pattern How many objects are getting created CPU usage, (will increase during the full GC) How to profile Java application Here are a few good resources https://www.baeldung.com/java-profilers#:~:text=A%20Java%20Profiler%20is%20a,thread%20executions%2C%20and%20garbage%20collections. https://medium.com/platform-engineer/guide-to-java-profilers-e344ce0339e0 How to Profile Kubernetes Application with Java https://medium.com/swlh/introducing-kubectl-flame-effortless-profiling-on-kubernetes-4b80fc181852 https://www.youtube.com/watch?v=vHTWdkCUAoI
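While gathering that data, one pattern I have found useful is to drive the heap from the container limit instead of hard-coding -Xmx, so the two never drift apart. A hedged sketch (the service name, image and numbers are placeholders, not recommendations):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-java-service                 # placeholder
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-java-service
  template:
    metadata:
      labels:
        app: my-java-service
    spec:
      containers:
      - name: app
        image: registry.example.com/my-java-service:1.0.0   # placeholder image
        resources:
          requests:
            memory: "768Mi"
            cpu: "500m"
          limits:
            memory: "1Gi"
        env:
        - name: JAVA_TOOL_OPTIONS
          # Size the heap as a fraction of the container limit, leaving
          # headroom for metaspace, thread stacks and off-heap buffers.
          value: "-XX:MaxRAMPercentage=75.0"

The profiling and load-test numbers then tell you whether the limit and the percentage are sensible for the workload.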
###Instruction: I would like to know, if there is an optimal approach for setting memory limits for Kubernetes containers, especially for applications running java. For Java applications, we have to set a heap value in conjunction with Kubernetes resources. it's like we're kinda forced to guess at a memory limit for the Kubernetes microservice. To be more clear, in java the heap memory can be limited to memory limits defined in the container, but how to arrive at a specific limit value ?? If we don't set up limits for the container, then the java heap considers the underlying node memory limits rather than at container limits, so it can extend the pod limit to max memory, which can stress the other pods running in that node. If we don't set up enough memory limits at the container, then we can see containers getting killed with OOM errors. The possible solutions, I can think of is Monitoring the microservice for some period of time and based on the utilization, choosing the limits Implementing some load testing mechanism and based on the observation setting the limits Other than the above, I would like to get some comments if there is any other approach followed by anyone in setting memory limits for Kubernetes containers. Has anyone encountered this earlier!! ### Response: Yes, I have encountered the issue multiple times. You definitely want to keep the memory limit for the k8 to avoid the noisy neighbour problems. The possible solutions you have mentioned are right. Monitoring and load testing are a must to arrive at the number. Along with these, I used the profiling of Java processes to see how GC is getting triggered and whether the memory usage should remain the same or increase with the increase of load. Profiling is a very powerful tool to provide some insights into suboptimal usage of data structures as well. What to profile While doing the Java profiling, you need to check What's the Eden and old-gen usage How often full GC is running, the memory utilisation will increase and decrease after the full GC. See the GC pattern How many objects are getting created CPU usage, (will increase during the full GC) How to profile Java application Here are a few good resources https://www.baeldung.com/java-profilers#:~:text=A%20Java%20Profiler%20is%20a,thread%20executions%2C%20and%20garbage%20collections. https://medium.com/platform-engineer/guide-to-java-profilers-e344ce0339e0 How to Profile Kubernetes Application with Java https://medium.com/swlh/introducing-kubectl-flame-effortless-profiling-on-kubernetes-4b80fc181852 https://www.youtube.com/watch?v=vHTWdkCUAoI
I would like to know, if there is an optimal approach for setting memory limits for Kubernetes containers, especially for applications running java. For Java applications, we have to set a heap value in conjunction with Kubernetes resources. it's like we're kinda forced to guess at a memory limit for the Kubernetes microservice. To be more clear, in java the heap memory can be limited to memory limits defined in the container, but how to arrive at a specific limit value ?? If we don't set up limits for the container, then the java heap considers the underlying node memory limits rather than at container limits, so it can extend the pod limit to max memory, which can stress the other pods running in that node. If we don't set up enough memory limits at the container, then we can see containers getting killed with OOM errors. The possible solutions, I can think of is Monitoring the microservice for some period of time and based on the utilization, choosing the limits Implementing some load testing mechanism and based on the observation setting the limits Other than the above, I would like to get some comments if there is any other approach followed by anyone in setting memory limits for Kubernetes containers. Has anyone encountered this earlier!!
Looks like there is no auto-provisioning in the k8s cluster you are running. Persistent Volumes & Storage Classes need to be created manually on local clusters by the cluster admin. Create a provisioner and pass that to your storage class, then give that storage class to the persistent volume claim template so the PVC and PV are created automatically. Refer to the official Kubernetes documentation on Configure a Pod to Use a PersistentVolume for Storage, which may help to resolve your issue. To configure a Pod to use a PersistentVolumeClaim for storage, here is a summary of the process: You, as a cluster administrator, create a PersistentVolume backed by physical storage. You do not associate the volume with any Pod. You, now taking the role of a developer / cluster user, create a PersistentVolumeClaim that is automatically bound to a suitable PersistentVolume. You create a Pod that uses the above PersistentVolumeClaim for storage. Also, try changing accessModes to ReadWriteMany if the volume should be accessed by all your pods. A subPath needs to be used if each pod wants to have its own directory. Refer to the official Kubernetes document on Using subPath, like below: volumeMounts: - name: data mountPath: /var/lib/influxdb2 subPath: $(POD_NAME)
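For the manual-provisioning route described above, a hedged sketch of a no-provisioner StorageClass plus a statically created PV large enough for the 10G claim (the hostPath is an assumed location on the node; the class name mirrors the local-storage class already visible in your PVC listing):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: influxdb-data-pv
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data/influxdb      # assumed directory on the node

The volumeClaimTemplates section of the StatefulSet would then also need storageClassName: local-storage so the claim targets this class instead of waiting for a default class that does not exist.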
###Instruction: While installing influxdb2 using k8s manifest from the link influxdb2 installation on k8s I get below "pod has unbound immediate PersistentVolumeClaims" error. The instruction is given for minikube but I am installing it as a normal k8s cluster. Any idea about the issue and how to fix. /home/ravi#kubectl describe pod influxdb-0 -n influxdb Name: influxdb-0 Namespace: influxdb Priority: 0 Node: <none> Labels: app=influxdb controller-revision-hash=influxdb-78bc684b99 statefulset.kubernetes.io/pod-name=influxdb-0 Annotations: <none> Status: Pending IP: IPs: <none> Controlled By: StatefulSet/influxdb Containers: influxdb: Image: influxdb:2.0.6 Port: 8086/TCP Host Port: 0/TCP Environment: <none> Mounts: /var/lib/influxdb2 from data (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-k9d8t (ro) Conditions: Type Status PodScheduled False Volumes: data: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: data-influxdb-0 ReadOnly: false default-token-k9d8t: Type: Secret (a volume populated by a Secret) SecretName: default-token-k9d8t Optional: false QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims /home/ravi# influx db2 yaml file --- apiVersion: v1 kind: Namespace metadata: name: influxdb --- apiVersion: apps/v1 kind: StatefulSet metadata: labels: app: influxdb name: influxdb namespace: influxdb spec: replicas: 1 selector: matchLabels: app: influxdb serviceName: influxdb template: metadata: labels: app: influxdb spec: containers: - image: influxdb:2.0.6 name: influxdb ports: - containerPort: 8086 name: influxdb volumeMounts: - mountPath: /var/lib/influxdb2 name: data volumeClaimTemplates: - metadata: name: data namespace: influxdb spec: accessModes: - ReadWriteOnce resources: requests: storage: 10G --- apiVersion: v1 kind: Service metadata: name: influxdb namespace: influxdb spec: ports: - name: influxdb port: 8086 targetPort: 8086 selector: app: influxdb type: ClusterIP k8s version /home/ravi#kubectl version Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} /home/ravi>sudo kubectl get pvc -A NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE influxdb data-influxdb-0 Pending 4h41m ricplt pvc-ricplt-alarmmanager Bound pv-ricplt-alarmmanager 100Mi RWO local-storage 5h17m ricplt pvc-ricplt-e2term-alpha Bound pv-ricplt-e2term-alpha 100Mi RWO local-storage 5h18m ricplt r4-influxdb-influxdb2 Pending 32m /home/ravi> /home/ravi> /home/ravi> /home/ravi>sudo kubectl describe pvc data-influxdb-0 -n influxdb Name: data-influxdb-0 Namespace: influxdb StorageClass: Status: Pending Volume: Labels: app=influxdb Annotations: <none> Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: 
Filesystem Mounted By: influxdb-0 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal FailedBinding 2m12s (x1021 over 4h17m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set /home/ravi> ### Response: Looks like there is no auto-provisioning of k8s cluster you are running. You need to create Persistent Volume & Storage classes on local clusters manually by cluster admin. Create one provisioner and pass that to your storage class and give that storage class to a persistence volume template to create automatic PVC and PV. Refer to official kubernetes documentation on Configure a Pod to Use a PersistentVolume for Storage, which may help to resolve your issue. To configure a Pod to use a PersistentVolumeClaim for storage. Here is a summary of the process: You, as a cluster administrator, create a PersistentVolume backed by physical storage. You do not associate the volume with any Pod. You, now taking the role of a developer / cluster user, create a PersistentVolumeClaim that is automatically bound to a suitable PersistentVolume. You create a Pod that uses the above PersistentVolumeClaim for storage. Also try changing accessModes: ReadWriteMany to volume accessed by all your pods. A subPath needs to be used, If each pod wants to have its own directory. Refer to official Kubernetes document on Using subPath, like below : volumeMounts: - name: data mountPath: /var/lib/influxdb2 subPath: $(POD_NAME)
While installing influxdb2 using k8s manifest from the link influxdb2 installation on k8s I get below "pod has unbound immediate PersistentVolumeClaims" error. The instruction is given for minikube but I am installing it as a normal k8s cluster. Any idea about the issue and how to fix. /home/ravi#kubectl describe pod influxdb-0 -n influxdb Name: influxdb-0 Namespace: influxdb Priority: 0 Node: <none> Labels: app=influxdb controller-revision-hash=influxdb-78bc684b99 statefulset.kubernetes.io/pod-name=influxdb-0 Annotations: <none> Status: Pending IP: IPs: <none> Controlled By: StatefulSet/influxdb Containers: influxdb: Image: influxdb:2.0.6 Port: 8086/TCP Host Port: 0/TCP Environment: <none> Mounts: /var/lib/influxdb2 from data (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-k9d8t (ro) Conditions: Type Status PodScheduled False Volumes: data: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: data-influxdb-0 ReadOnly: false default-token-k9d8t: Type: Secret (a volume populated by a Secret) SecretName: default-token-k9d8t Optional: false QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims /home/ravi# influx db2 yaml file --- apiVersion: v1 kind: Namespace metadata: name: influxdb --- apiVersion: apps/v1 kind: StatefulSet metadata: labels: app: influxdb name: influxdb namespace: influxdb spec: replicas: 1 selector: matchLabels: app: influxdb serviceName: influxdb template: metadata: labels: app: influxdb spec: containers: - image: influxdb:2.0.6 name: influxdb ports: - containerPort: 8086 name: influxdb volumeMounts: - mountPath: /var/lib/influxdb2 name: data volumeClaimTemplates: - metadata: name: data namespace: influxdb spec: accessModes: - ReadWriteOnce resources: requests: storage: 10G --- apiVersion: v1 kind: Service metadata: name: influxdb namespace: influxdb spec: ports: - name: influxdb port: 8086 targetPort: 8086 selector: app: influxdb type: ClusterIP k8s version /home/ravi#kubectl version Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} /home/ravi>sudo kubectl get pvc -A NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE influxdb data-influxdb-0 Pending 4h41m ricplt pvc-ricplt-alarmmanager Bound pv-ricplt-alarmmanager 100Mi RWO local-storage 5h17m ricplt pvc-ricplt-e2term-alpha Bound pv-ricplt-e2term-alpha 100Mi RWO local-storage 5h18m ricplt r4-influxdb-influxdb2 Pending 32m /home/ravi> /home/ravi> /home/ravi> /home/ravi>sudo kubectl describe pvc data-influxdb-0 -n influxdb Name: data-influxdb-0 Namespace: influxdb StorageClass: Status: Pending Volume: Labels: app=influxdb Annotations: <none> Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Mounted 
By: influxdb-0 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal FailedBinding 2m12s (x1021 over 4h17m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set /home/ravi>
This might help you: I was getting the same error in the injected istio-init container. My system is: k8s 1.26 and Istio 1.17.1, installed on Rocky Linux 8.5 machines. This solved my problem: 1. cat <<EOT >> /etc/modules-load.d/k8s.conf overlay br_netfilter nf_nat xt_REDIRECT xt_owner iptable_nat iptable_mangle iptable_filter EOT 2. modprobe br_netfilter ; modprobe nf_nat ; modprobe xt_REDIRECT ; modprobe xt_owner; modprobe iptable_nat; modprobe iptable_mangle; modprobe iptable_filter I got this solution from: enter link description here
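As a quick sanity check (not part of the original answer), you can verify on each node that the modules are actually loaded; the modules-load.d entry then re-loads them automatically after a reboot:

```bash
# Check that the required kernel modules are loaded on the node
lsmod | grep -E 'br_netfilter|nf_nat|xt_REDIRECT|xt_owner|iptable_nat|iptable_mangle|iptable_filter'

# Entries in /etc/modules-load.d/k8s.conf are loaded at boot,
# so the fix survives node restarts.
```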
###Instruction: I am new to Kubernetes, Istio and so on, so please be gentle :) I have minikube running, I can deploy services and they run fine. I have installed istio following this guide: https://istio.io/latest/docs/setup/install/istioctl/ If I tag the default namespace with kubectl label namespace default istio-injection=enabled the deployment fails. The service is green on the minikube dashboard, but the pod doesn't start up. Ready: false Started: false Reason: PodInitializing Here are a couple of print screens from the dashboard: This is clearly related to istio. If I remove the istio tag from the namespace, the deployment works and the pod starts. Any help would be greatly appreciated. EDIT Running kubectl logs mypod-bd48d6bcc-6wcq2 -c istio-init prints out 2022-08-24T14:07:15.227238Z info Istio iptables environment: ENVOY_PORT= INBOUND_CAPTURE_PORT= ISTIO_INBOUND_INTERCEPTION_MODE= ISTIO_INBOUND_TPROXY_ROUTE_TABLE= ISTIO_INBOUND_PORTS= ISTIO_OUTBOUND_PORTS= ISTIO_LOCAL_EXCLUDE_PORTS= ISTIO_EXCLUDE_INTERFACES= ISTIO_SERVICE_CIDR= ISTIO_SERVICE_EXCLUDE_CIDR= ISTIO_META_DNS_CAPTURE= INVALID_DROP= 2022-08-24T14:07:15.229791Z info Istio iptables variables: PROXY_PORT=15001 PROXY_INBOUND_CAPTURE_PORT=15006 PROXY_TUNNEL_PORT=15008 PROXY_UID=1337 PROXY_GID=1337 INBOUND_INTERCEPTION_MODE=REDIRECT INBOUND_TPROXY_MARK=1337 INBOUND_TPROXY_ROUTE_TABLE=133 INBOUND_PORTS_INCLUDE=* INBOUND_PORTS_EXCLUDE=15090,15021,15020 OUTBOUND_OWNER_GROUPS_INCLUDE=* OUTBOUND_OWNER_GROUPS_EXCLUDE= OUTBOUND_IP_RANGES_INCLUDE=* OUTBOUND_IP_RANGES_EXCLUDE= OUTBOUND_PORTS_INCLUDE= OUTBOUND_PORTS_EXCLUDE= KUBE_VIRT_INTERFACES= ENABLE_INBOUND_IPV6=false DNS_CAPTURE=false DROP_INVALID=false CAPTURE_ALL_DNS=false DNS_SERVERS=[],[] OUTPUT_PATH= NETWORK_NAMESPACE= CNI_MODE=false EXCLUDE_INTERFACES= 2022-08-24T14:07:15.232249Z info Writing following contents to rules file: /tmp/iptables-rules-1661350035231776045.txt1561657352 * nat -N ISTIO_INBOUND -N ISTIO_REDIRECT -N ISTIO_IN_REDIRECT -N ISTIO_OUTPUT -A ISTIO_INBOUND -p tcp --dport 15008 -j RETURN -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001 -A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006 -A PREROUTING -p tcp -j ISTIO_INBOUND -A ISTIO_INBOUND -p tcp --dport 15090 -j RETURN -A ISTIO_INBOUND -p tcp --dport 15021 -j RETURN -A ISTIO_INBOUND -p tcp --dport 15020 -j RETURN -A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT -A OUTPUT -p tcp -j ISTIO_OUTPUT -A ISTIO_OUTPUT -o lo -s 127.0.0.6/32 -j RETURN -A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --uid-owner 1337 -j ISTIO_IN_REDIRECT -A ISTIO_OUTPUT -o lo -m owner ! --uid-owner 1337 -j RETURN -A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN -A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --gid-owner 1337 -j ISTIO_IN_REDIRECT -A ISTIO_OUTPUT -o lo -m owner ! --gid-owner 1337 -j RETURN -A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN -A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN -A ISTIO_OUTPUT -j ISTIO_REDIRECT COMMIT 2022-08-24T14:07:15.232504Z info Running command: iptables-restore --noflush /tmp/iptables-rules-1661350035231776045.txt1561657352 2022-08-24T14:07:15.256253Z error Command error output: xtables parameter problem: iptables-restore: unable to initialize table 'nat' Error occurred at line: 1 Try `iptables-restore -h' or 'iptables-restore --help' for more information. 
2022-08-24T14:07:15.256845Z error Failed to execute: iptables-restore --noflush /tmp/iptables-rules-1661350035231776045.txt1561657352, exit status 2 ### Response: This might help you: I am having same error in injected container istio-init. My system is : k8s 1.26 and istio 1.17.1, installed on rocky 8.5 stations. This solved my problem: 1. cat <<EOT >> /etc/modules-load.d/k8s.conf overlay br_netfilter nf_nat xt_REDIRECT xt_owner iptable_nat iptable_mangle iptable_filter EOT modprobe br_netfilter ; modprobe nf_nat ; modprobe xt_REDIRECT ; modprobe xt_owner; modprobe iptable_nat; modprobe iptable_mangle; modprobe iptable_filter I got this solution from: enter link description here
I am new to Kubernetes, Istio and so on, so please be gentle :) I have minikube running, I can deploy services and they run fine. I have installed istio following this guide: https://istio.io/latest/docs/setup/install/istioctl/ If I tag the default namespace with kubectl label namespace default istio-injection=enabled the deployment fails. The service is green on the minikube dashboard, but the pod doesn't start up. Ready: false Started: false Reason: PodInitializing Here are a couple of print screens from the dashboard: This is clearly related to istio. If I remove the istio tag from the namespace, the deployment works and the pod starts. Any help would be greatly appreciated. EDIT Running kubectl logs mypod-bd48d6bcc-6wcq2 -c istio-init prints out 2022-08-24T14:07:15.227238Z info Istio iptables environment: ENVOY_PORT= INBOUND_CAPTURE_PORT= ISTIO_INBOUND_INTERCEPTION_MODE= ISTIO_INBOUND_TPROXY_ROUTE_TABLE= ISTIO_INBOUND_PORTS= ISTIO_OUTBOUND_PORTS= ISTIO_LOCAL_EXCLUDE_PORTS= ISTIO_EXCLUDE_INTERFACES= ISTIO_SERVICE_CIDR= ISTIO_SERVICE_EXCLUDE_CIDR= ISTIO_META_DNS_CAPTURE= INVALID_DROP= 2022-08-24T14:07:15.229791Z info Istio iptables variables: PROXY_PORT=15001 PROXY_INBOUND_CAPTURE_PORT=15006 PROXY_TUNNEL_PORT=15008 PROXY_UID=1337 PROXY_GID=1337 INBOUND_INTERCEPTION_MODE=REDIRECT INBOUND_TPROXY_MARK=1337 INBOUND_TPROXY_ROUTE_TABLE=133 INBOUND_PORTS_INCLUDE=* INBOUND_PORTS_EXCLUDE=15090,15021,15020 OUTBOUND_OWNER_GROUPS_INCLUDE=* OUTBOUND_OWNER_GROUPS_EXCLUDE= OUTBOUND_IP_RANGES_INCLUDE=* OUTBOUND_IP_RANGES_EXCLUDE= OUTBOUND_PORTS_INCLUDE= OUTBOUND_PORTS_EXCLUDE= KUBE_VIRT_INTERFACES= ENABLE_INBOUND_IPV6=false DNS_CAPTURE=false DROP_INVALID=false CAPTURE_ALL_DNS=false DNS_SERVERS=[],[] OUTPUT_PATH= NETWORK_NAMESPACE= CNI_MODE=false EXCLUDE_INTERFACES= 2022-08-24T14:07:15.232249Z info Writing following contents to rules file: /tmp/iptables-rules-1661350035231776045.txt1561657352 * nat -N ISTIO_INBOUND -N ISTIO_REDIRECT -N ISTIO_IN_REDIRECT -N ISTIO_OUTPUT -A ISTIO_INBOUND -p tcp --dport 15008 -j RETURN -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001 -A ISTIO_IN_REDIRECT -p tcp -j REDIRECT --to-ports 15006 -A PREROUTING -p tcp -j ISTIO_INBOUND -A ISTIO_INBOUND -p tcp --dport 15090 -j RETURN -A ISTIO_INBOUND -p tcp --dport 15021 -j RETURN -A ISTIO_INBOUND -p tcp --dport 15020 -j RETURN -A ISTIO_INBOUND -p tcp -j ISTIO_IN_REDIRECT -A OUTPUT -p tcp -j ISTIO_OUTPUT -A ISTIO_OUTPUT -o lo -s 127.0.0.6/32 -j RETURN -A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --uid-owner 1337 -j ISTIO_IN_REDIRECT -A ISTIO_OUTPUT -o lo -m owner ! --uid-owner 1337 -j RETURN -A ISTIO_OUTPUT -m owner --uid-owner 1337 -j RETURN -A ISTIO_OUTPUT -o lo ! -d 127.0.0.1/32 -m owner --gid-owner 1337 -j ISTIO_IN_REDIRECT -A ISTIO_OUTPUT -o lo -m owner ! --gid-owner 1337 -j RETURN -A ISTIO_OUTPUT -m owner --gid-owner 1337 -j RETURN -A ISTIO_OUTPUT -d 127.0.0.1/32 -j RETURN -A ISTIO_OUTPUT -j ISTIO_REDIRECT COMMIT 2022-08-24T14:07:15.232504Z info Running command: iptables-restore --noflush /tmp/iptables-rules-1661350035231776045.txt1561657352 2022-08-24T14:07:15.256253Z error Command error output: xtables parameter problem: iptables-restore: unable to initialize table 'nat' Error occurred at line: 1 Try `iptables-restore -h' or 'iptables-restore --help' for more information. 2022-08-24T14:07:15.256845Z error Failed to execute: iptables-restore --noflush /tmp/iptables-rules-1661350035231776045.txt1561657352, exit status 2
This is an issue with indentation. Here is the proper deployment definition: apiVersion: apps/v1 kind: Deployment metadata: name: django-k8-web-deployment labels: app: django-k8-web-deployment spec: replicas: 3 selector: matchLabels: app: django-k8-web-deployment template: metadata: labels: app: django-k8-web-deployment spec: containers: - name: django-k8s-web image: registry.digitalocean.com/chrisocean/django-k8s-web:latest envFrom: - secretRef: name: django-k8s-web-prod-env env: - name: PORT value: "8001" ports: - containerPort: 8001 imagePullSecrets: - name: oceandev
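To catch this kind of strict decoding error before (or without) creating anything, a dry run is handy; these are standard kubectl flags, nothing project-specific:

```bash
# Client-side dry run: checks the manifest locally, without contacting the cluster
kubectl apply --dry-run=client -f k8s/apps/django-k8s-web.yaml

# Server-side dry run: the API server validates the object (and reports unknown
# fields such as the misplaced envFrom) without persisting anything
kubectl apply --dry-run=server -f k8s/apps/django-k8s-web.yaml
```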
###Instruction: apiVersion: apps/v1 kind: Deployment metadata: name: django-k8-web-deployment labels: app: django-k8-web-deployment spec: replicas: 3 selector: matchLabels: app: django-k8-web-deployment template: metadata: labels: app: django-k8-web-deployment spec: containers: - name: django-k8s-web image: registry.digitalocean.com/chrisocean/django-k8s-web:latest envFrom: - secretRef: name: django-k8s-web-prod-env env: - name: PORT value: "8001" ports: - containerPort: 8001 imagePullSecrets: - name: oceandev the above yaml file above is what I want to apply in kubernetes. I ran the folowing command on my terminal kubectl apply -f k8s/apps/django-k8s-web.yaml then I go the following error on the terminal kubectl apply -f k8s/apps/django-k8s-web.yaml service/django-k8-web-service unchanged Error from server (BadRequest): error when creating "k8s/apps/django-k8s-web.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.template.spec.containers[0].envFrom[0].name" who knows how to resolve the issue? I wanted it to apply the changes in the yaml file but it is not working.When I ran the following command kubectl get pods the STATUS of the pod is pending ### Response: This is issue with indentation. Here is proper deployment definition apiVersion: apps/v1 kind: Deployment metadata: name: django-k8-web-deployment labels: app: django-k8-web-deployment spec: replicas: 3 selector: matchLabels: app: django-k8-web-deployment template: metadata: labels: app: django-k8-web-deployment spec: containers: - name: django-k8s-web image: registry.digitalocean.com/chrisocean/django-k8s-web:latest envFrom: - secretRef: name: django-k8s-web-prod-env env: - name: PORT value: "8001" ports: - containerPort: 8001 imagePullSecrets: - name: oceandev
apiVersion: apps/v1 kind: Deployment metadata: name: django-k8-web-deployment labels: app: django-k8-web-deployment spec: replicas: 3 selector: matchLabels: app: django-k8-web-deployment template: metadata: labels: app: django-k8-web-deployment spec: containers: - name: django-k8s-web image: registry.digitalocean.com/chrisocean/django-k8s-web:latest envFrom: - secretRef: name: django-k8s-web-prod-env env: - name: PORT value: "8001" ports: - containerPort: 8001 imagePullSecrets: - name: oceandev the above yaml file above is what I want to apply in kubernetes. I ran the folowing command on my terminal kubectl apply -f k8s/apps/django-k8s-web.yaml then I go the following error on the terminal kubectl apply -f k8s/apps/django-k8s-web.yaml service/django-k8-web-service unchanged Error from server (BadRequest): error when creating "k8s/apps/django-k8s-web.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.template.spec.containers[0].envFrom[0].name" who knows how to resolve the issue? I wanted it to apply the changes in the yaml file but it is not working.When I ran the following command kubectl get pods the STATUS of the pod is pending
You are on the right path, If you don't want to SSH into the node and manual job, what you can do is like create one Daemonset and automate SSH process. Daemonset runs on all available nodes in Cluster. You can mount your shell script to Node path and execute it by doing SSH from Image. apiVersion: v1 kind: ServiceAccount metadata: name: daemonset-access --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: cron-namespace-admin-rbac subjects: - kind: ServiceAccount name: daemonset-access roleRef: kind: ClusterRole name: cluster-admin apiGroup: rbac.authorization.k8s.io --- apiVersion: apps/v1 kind: DaemonSet metadata: name: umalog labels: k8s-app: umalog spec: selector: matchLabels: name: umalog template: metadata: labels: name: umalog spec: serviceAccountName: daemonset-access securityContext: fsGroup: 1000 containers: - command: - bash - -c - | mkdir -p /work-dir/filter_config while true; do echo "inside while"; /root/reload-uma.sh; sleep 1800; done image: oraclelinux:8 imagePullPolicy: IfNotPresent name: daemon-ssh volumeMounts: - mountPath: /root/reload-uma.sh name: uma-agent-reload subPath: reload-uma.sh - mountPath: /work-dir-ssh name: do-node-ssh volumes: - configMap: defaultMode: 511 name: uma-agent-reload name: uma-agent-reload - hostPath: path: /home/opc/.ssh type: DirectoryOrCreate name: do-node-ssh Configmap --- apiVersion: v1 kind: ConfigMap metadata: name: uma-agent-reload data: reload-uma.sh: | #!/bin/bash -x cat ~/.ssh/id_rsa.pub >> /work-dir-ssh/authorized_keys ssh -o StrictHostKeyChecking=no <User-name-ssh>@`kubectl -n default get pod $HOSTNAME -o yaml | grep hostIP | awk -d: {'print $2'} | tail -1` sudo systemctl reload <your-service-agent>.service
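If SSH-ing from the pod feels too heavy, a commonly used alternative sketch is a privileged DaemonSet that enters the host's namespaces with nsenter and runs systemctl directly; the image and the ntp unit name below are assumptions, so adjust them to your nodes:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ntp-restart
spec:
  selector:
    matchLabels:
      name: ntp-restart
  template:
    metadata:
      labels:
        name: ntp-restart
    spec:
      hostPID: true                       # lets the container see the host's PID 1
      containers:
      - name: restart-ntp
        image: ubuntu:22.04               # assumed; any image shipping nsenter (util-linux) works
        securityContext:
          privileged: true
        command:
        - nsenter
        - --target=1                      # enter the namespaces of the host's init process
        - --mount
        - --uts
        - --ipc
        - --net
        - --pid
        - --
        - sh
        - -c
        - "systemctl restart ntp; sleep infinity"   # 'ntp' unit name is an assumption (may be ntpd/chronyd)
```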
###Instruction: I want to execute command on GCP GKE node but I need to do it without manually sshing to the machine. What I need: adding ntp.keys configuring ntp.conf restart ntp daemon (here I encountered a problem) Point 1 and 2 can be easly achived with DaemonSet. I mounted /etc/ntp/ directory to init container and created right files. Unofrtunately point 3 seems to be impossible. Is there any way I can run "systemctl restart ntp" on each node of the cluster? My problem is mainly due to GCP API does not provide nodes endpoint so I can't get IPs of nodes and then run ansible to perform NTP configuration. The goal is to **automatically ** provision cluster (with terraform) configure NTP client on each node ### Response: You are on the right path, If you don't want to SSH into the node and manual job, what you can do is like create one Daemonset and automate SSH process. Daemonset runs on all available nodes in Cluster. You can mount your shell script to Node path and execute it by doing SSH from Image. apiVersion: v1 kind: ServiceAccount metadata: name: daemonset-access --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: cron-namespace-admin-rbac subjects: - kind: ServiceAccount name: daemonset-access roleRef: kind: ClusterRole name: cluster-admin apiGroup: rbac.authorization.k8s.io --- apiVersion: apps/v1 kind: DaemonSet metadata: name: umalog labels: k8s-app: umalog spec: selector: matchLabels: name: umalog template: metadata: labels: name: umalog spec: serviceAccountName: daemonset-access securityContext: fsGroup: 1000 containers: - command: - bash - -c - | mkdir -p /work-dir/filter_config while true; do echo "inside while"; /root/reload-uma.sh; sleep 1800; done image: oraclelinux:8 imagePullPolicy: IfNotPresent name: daemon-ssh volumeMounts: - mountPath: /root/reload-uma.sh name: uma-agent-reload subPath: reload-uma.sh - mountPath: /work-dir-ssh name: do-node-ssh volumes: - configMap: defaultMode: 511 name: uma-agent-reload name: uma-agent-reload - hostPath: path: /home/opc/.ssh type: DirectoryOrCreate name: do-node-ssh Configmap --- apiVersion: v1 kind: ConfigMap metadata: name: uma-agent-reload data: reload-uma.sh: | #!/bin/bash -x cat ~/.ssh/id_rsa.pub >> /work-dir-ssh/authorized_keys ssh -o StrictHostKeyChecking=no <User-name-ssh>@`kubectl -n default get pod $HOSTNAME -o yaml | grep hostIP | awk -d: {'print $2'} | tail -1` sudo systemctl reload <your-service-agent>.service
I want to execute command on GCP GKE node but I need to do it without manually sshing to the machine. What I need: adding ntp.keys configuring ntp.conf restart ntp daemon (here I encountered a problem) Point 1 and 2 can be easly achived with DaemonSet. I mounted /etc/ntp/ directory to init container and created right files. Unofrtunately point 3 seems to be impossible. Is there any way I can run "systemctl restart ntp" on each node of the cluster? My problem is mainly due to GCP API does not provide nodes endpoint so I can't get IPs of nodes and then run ansible to perform NTP configuration. The goal is to **automatically ** provision cluster (with terraform) configure NTP client on each node
Yes, a core is equal to 1e9 (1,000,000,000) nanocores. You can test this by seeing that, for example in Datadog, a metric like (kubernetes.cpu.usage.total / 1000000000) / kubernetes_state.node.cpu_allocatable * 100 gives you a percentage that should be capped at 100%. The 1st metric is measured in nanocores, and the 2nd is measured in cores (see definition here). Here's another answer that aligns with this: link
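A quick worked example of that formula, with a made-up reading of 2,500,000,000 nanocores (2.5 cores) on a 10-core machine:

```bash
usage_nanocores=2500000000    # example value, e.g. from the kubelet stats summary or a metrics agent
cores=10

# nanocores -> cores -> share of the machine -> percent
echo "scale=2; $usage_nanocores / 1000000000 / $cores * 100" | bc   # prints 25.00
```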
###Instruction: im confuse to convert nanocore to core of CPU my formula is: nanocore / 1000000000 = CORE (1000000000 is 1 billion) if my linux os have 10 core. I want to calculator percentage cpu usage. This formula is: (nanocore / 1000000000) / 10 * 100 = percentage cpu usage ? is it right ? ### Response: Yes, a core is equal to 1e9 (1,000,000,000) nanocores. You can test this by seeing that, for example in Datadog, a metric like (kubernetes.cpu.usage.total / 1000000000) / kubernetes_state.node.cpu_allocatable * 100 gives you a percentage that should be capped at 100%. The 1st metric is measured in nanocores, and the 2nd is measured in cores (see definition here). Here's another answer that aligns with this: link
im confuse to convert nanocore to core of CPU my formula is: nanocore / 1000000000 = CORE (1000000000 is 1 billion) if my linux os have 10 core. I want to calculator percentage cpu usage. This formula is: (nanocore / 1000000000) / 10 * 100 = percentage cpu usage ? is it right ?
Let me give you a suggestion; let me know if it works. If you have an up-and-running Jenkins instance (with the Kubernetes plugin installed), you can go to “Manage Jenkins”/“Configure Clouds” and prepare your pod templates as you see fit. There you will also find the fields for nodeSelector and tolerations. Once you have saved the setup you prefer, go to “Manage Jenkins”/“Configuration as Code” and export the resulting JCasC configuration of your Jenkins (click “Download Configuration”), then copy the generated pod template block back into your own values. You can repeat this workflow for any new configuration you want to add to your Jenkins.
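If you would rather write it by hand than export it from the UI, the Kubernetes plugin's pod template also exposes nodeSelector, a raw yaml field and a yamlMergeStrategy. The sketch below is based on my understanding of the plugin's JCasC schema; field names can differ between plugin and chart versions, so compare it against the downloaded configuration before relying on it:

```yaml
jenkins:
  clouds:
  - kubernetes:
      name: "kubernetes"
      templates:
      - name: "jenkins-slave-pod"
        label: "global-slave"
        serviceAccount: "jenkins"
        idleMinutes: 15
        nodeSelector: "karpenter.sh/provisioner-name=jenkins-provisioner"
        yamlMergeStrategy: "merge"        # merge the raw yaml below into the generated pod spec
        yaml: |
          spec:
            tolerations:
            - key: "jenkins"
              operator: "Exists"
              effect: "NoSchedule"
        containers:
        - name: "main"
          image: "xxxxxx.dkr.ecr.us-west-2.amazonaws.com/jenkins-slave:ecs-global"
          command: "sleep"
          args: "30d"
          privileged: true
```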
###Instruction: I'm using Jenkins configuration as code (JCASC). I'm having a pod template and I want to add NodeSelector + Tolerations. podTemplate doesn't support key of tolerations and NodeSelector so I need to add pod YAML spec... agent: enabled: true podTemplates: podTemplates: jenkins-slave-pod: | - name: jenkins-slave-pod label: global-slave serviceAccount: jenkins idleMinutes: "15" containers: - name: main image: 'xxxxxx.dkr.ecr.us-west-2.amazonaws.com/jenkins-slave:ecs-global' command: "sleep" args: "30d" privileged: true I was thinking of adding yaml: and just configuring the spec of the pod... But when I'm adding yaml: and adding yamlStrategy: merge/overrid it ignores the YAML it and only uses my podTemplate instead. How can I merge/override my podTemplate and add pod with tolerations/nodeSelecotr? Thats the YAML I want to have inside my podTemplate: apiVersion: v1 kind: Pod serviceAccount: jenkins-non-prod idleMinutes: "15" containers: - name: main image: 'xxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/jenkins-slave:ecs-global' command: "sleep" args: "30d" privileged: true spec: nodeSelector: karpenter.sh/provisioner-name: jenkins-provisioner tolerations: - key: "jenkins" operator: "Exists" effect: "NoSchedule" ### Response: I try to give you a little suggestion, let me know if it works. If you have an up&running Jenkins instance (with the Kubernetes plugin installed), you can go to “Manage Jenkins”/“Configure Clouds” and prepare your Pod Templates as you see fit. There you will also find the definition of nodeSelector and Toleration. Once you have saved the setup you prefer, go to “Manage Jenkins”/“Configuration as Code” and save the JCASC as Code configuration of your Jenkins (click “Download Configuration”). You can replicate this working mode for any new configuration you want to add to your Jenkins.
I'm using Jenkins configuration as code (JCASC). I'm having a pod template and I want to add NodeSelector + Tolerations. podTemplate doesn't support key of tolerations and NodeSelector so I need to add pod YAML spec... agent: enabled: true podTemplates: podTemplates: jenkins-slave-pod: | - name: jenkins-slave-pod label: global-slave serviceAccount: jenkins idleMinutes: "15" containers: - name: main image: 'xxxxxx.dkr.ecr.us-west-2.amazonaws.com/jenkins-slave:ecs-global' command: "sleep" args: "30d" privileged: true I was thinking of adding yaml: and just configuring the spec of the pod... But when I'm adding yaml: and adding yamlStrategy: merge/overrid it ignores the YAML it and only uses my podTemplate instead. How can I merge/override my podTemplate and add pod with tolerations/nodeSelecotr? Thats the YAML I want to have inside my podTemplate: apiVersion: v1 kind: Pod serviceAccount: jenkins-non-prod idleMinutes: "15" containers: - name: main image: 'xxxxxxxx.dkr.ecr.us-west-2.amazonaws.com/jenkins-slave:ecs-global' command: "sleep" args: "30d" privileged: true spec: nodeSelector: karpenter.sh/provisioner-name: jenkins-provisioner tolerations: - key: "jenkins" operator: "Exists" effect: "NoSchedule"
I tried to reproduce the same setup in my environment. I created a virtual network gateway, a VNet and a local network gateway as shown below. In the virtual network I added a gateway subnet, then created the local network gateway. On-premises, configure the Routing and Remote Access role: in Tools select custom configuration -> VPN access, LAN routing -> Finish; under network interfaces select New demand-dial interface, choose IKEv2 as the VPN type, and on the destination address screen provide the public IP of the virtual network gateway. Then create the connection, as shown below. Next, I created an AKS cluster with a pod. To reach the pods directly, make sure the cluster uses the Azure Container Networking Interface (CNI): every pod gets a routable IP address from the VNet subnet, can be accessed directly, and can communicate directly with other pods and services. Plan your AKS node size around the maximum number of pods each node can support. Advanced network features and scenarios such as Virtual Nodes or Network Policies (either Azure or Calico) are supported with Azure CNI. Because every pod is assigned a VNet-routable private IP from the subnet, the gateway should be able to reach the pods directly. You can also use AKS features such as virtual nodes or Azure Network Policy, or Calico network policies; a network policy controls which traffic is allowed between pods within the cluster, for example: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: backend-policy spec: podSelector: matchLabels: app: backend ingress: - from: - podSelector: matchLabels: app: frontend For more detail refer to these links: Azure configure-kubenet - GitHub; Network connectivity and security in Azure Kubernetes Service | Microsoft
###Instruction: We have a requirement to connect a K8s POD to an Azure VPN Gateway in a secure manner. This is what our network topology is: Firstly is this possible to achieve and secondly how would we go about creating this peering? If peering isn't the best option then what would you recommend to solve this problem? TIA We have created the VPN gateway, VNET, and a local network and confirmed that they can communicate in both directions. The problem is how we bring this into K8s. ### Response: I tried to reproduce the same in my environment I have created a virtual network gateway vnet local network gateway like below: In virtual network added gateway subnet like below: created local network gateway : On-premise try to configure Routing and remote access role in tools -> select custom configuration ->Vpn access, Lan routing ->finish in network interface select -> New demand-dial interface -> in vpn type select IPEv2 and in the destination address screen provide public IP of virtual network gateway Now, try to create a connection like below: Now, I have created an aks cluster with pod like below: To communicate with pod make sure to use Azure Container Networking Interface (CNI) every pod gets an IP address from the subnet and can be accessed directly each pod receives an IP address and can directly communicate with other pods and services. you can AKS nodes based on the maximum number of pod can support. Advanced network features and scenarios such as Virtual Nodes or Network Policies (either Azure or Calico) are supported with Azure CNI. When using Azure CNI, Every pod is assigned a VNET route-able private IP from the subnet. So, Gateway should be able reach the pods directly. Refer You can use AKS's advanced features such as virtual nodes or Azure Network Policy. Use Calico network policies. network policy allows an traffic between pods within a cluster and communicated kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: backend-policy spec: podSelector: matchLabels: app: backend ingress: - from: - podSelector: matchLabels: app: frontend To more in detail refer this link: Azure configure-kubenet - GitHub Network connectivity and secure in Azure Kubernetes Service | Microsoft
We have a requirement to connect a K8s POD to an Azure VPN Gateway in a secure manner. This is what our network topology is: Firstly is this possible to achieve and secondly how would we go about creating this peering? If peering isn't the best option then what would you recommend to solve this problem? TIA We have created the VPN gateway, VNET, and a local network and confirmed that they can communicate in both directions. The problem is how we bring this into K8s.
The fastest solution is surely to use kustomize. Following your data, first organize the repository by creating a folder called "base" and one called "dev". Then move the "my-cronjob" manifest into the "base" folder and add a kustomization.yaml file that invokes the CronJob. Finally, create a file called kustomization.yaml inside the "dev" folder, calling the files from the "base" folder plus the patch. Example: base/kustomization.yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - ./my-cronjob.yaml dev/kustomization.yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization bases: - ../base patches: - target: kind: CronJob name: my-cronjob patch: |- - op: replace path: /spec/jobTemplate/spec/template/spec/containers/0/env/0/valueFrom/fieldRef/fieldPath value: metadata.labels['DEV'] To replicate to other environments, just copy the "dev" folder and paste it into a "prod" folder (for example) and edit the patch with the correct parameter.
###Instruction: I want to build the the secretName dynamically base on the value of the my-label key (trough a ENV). Is this possible? I used the a similar approach to use label values as ARGs which worked. apiVersion: batch/v1beta1 kind: CronJob metadata: name: my-cronjob spec: schedule: "*/1 * * * *" jobTemplate: spec: template: metadata: labels: my-label: "my-value" spec: containers: - name: my-container image: my-image env: - name: MY_ENV_VAR valueFrom: fieldRef: fieldPath: metadata.labels['my-label'] volumeMounts: - name: my-secret mountPath: /path/to/my-secret volumes: - name: my-secret secret: secretName: my-secret-$(MY_ENV_VAR) ### Response: The fastest solution is surely to use kustomize. Following your data, first organize the repository by creating a folder called "base" and one called "dev". Then move the "my-cronjob" manifest into the "base" folder and add a kustomization.yaml file that invokes the CronJob. Finally, create a file called kustomization.yaml inside the "dev" folder, calling the files from the "base" folder plus the patch. Example: base/kustomization.yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - ./my-cronjob.yaml dev/kustomization.yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization bases: - ../base patches: - target: kind: CronJob name: my-cronjob patch: |- - op: replace path: /spec/jobTemplate/spec/template/spec/containers/0/env/0/valueFrom/fieldRef/fieldPath value: metadata.labels['DEV'] To replicate to other environments, just copy the "dev" folder and paste it into a "prod" folder (for example) and edit the patch with the correct parameter.
I want to build the the secretName dynamically base on the value of the my-label key (trough a ENV). Is this possible? I used the a similar approach to use label values as ARGs which worked. apiVersion: batch/v1beta1 kind: CronJob metadata: name: my-cronjob spec: schedule: "*/1 * * * *" jobTemplate: spec: template: metadata: labels: my-label: "my-value" spec: containers: - name: my-container image: my-image env: - name: MY_ENV_VAR valueFrom: fieldRef: fieldPath: metadata.labels['my-label'] volumeMounts: - name: my-secret mountPath: /path/to/my-secret volumes: - name: my-secret secret: secretName: my-secret-$(MY_ENV_VAR)
If you have an on-prem setup and can expose your cluster, you can go with that: create the K8s cluster yourself and manage it. If you were instead planning to set it up manually on cloud VMs, it is usually better to use a managed offering from DigitalOcean or Oracle OKE, which may cost around $20-30 a month for a small project. You do have the option to use Kubespray or kubeadm to set up and manage the cluster by hand, and that is reasonably easy, but setting it up on VMs and managing it yourself is a lot of work; instead, go with a provider that gives you a free control plane so you only pay for worker nodes. You can also go for serverless containers, such as Cloud Run on GCP or ECS on AWS: you only worry about your container workload, pay for the resources your containers actually use, and still have the option to scale.
###Instruction: I am getting into CI/CD and Microservices. I want to try Kubernetes for a project that I am starting. I spent the last two weeks learning about those topics. But I don't have the knowledge to make the decision. I would like to know if for a small project is better to setup Kubernetes in my server or pay a Cloud service. What difficulties I would have to setup Kubernetes manually? My impression is that setting up a Kubernetes by hand is more like toward setting up an email server, is much more viable to pay to have a email domain than set up an email server manually. I would like only to have kubernetes to get my services up automatically and other basic things. ### Response: If you have an on-prem set up and you can expose your cluster you can go with that creating the K8s cluster and managing it. Else are you planning to set up manually on cloud VMs? instead better to use managed one from Digital Ocean or Oracle OKE which may cost around $20-30 months if a small project. Now you have the option to use Kubespray, and Kubeadm to setup the manual cluster and manage it so would be pretty easy to do it but again setting on VMs & managing it too much of work, instead go with the provider who gives free control plane and has only pay for Worker Nodes. You can also go for the Serverless containers, Cloud RUN from GCP & ECS from AWS worry about your container workload and pay for resources actually your container using with option to scaling.
I am getting into CI/CD and Microservices. I want to try Kubernetes for a project that I am starting. I spent the last two weeks learning about those topics. But I don't have the knowledge to make the decision. I would like to know if for a small project is better to setup Kubernetes in my server or pay a Cloud service. What difficulties I would have to setup Kubernetes manually? My impression is that setting up a Kubernetes by hand is more like toward setting up an email server, is much more viable to pay to have a email domain than set up an email server manually. I would like only to have kubernetes to get my services up automatically and other basic things.
If your changes are very specific to your project and not something that can be merged into the original repo, or you have legal hoops to jump through, you can fork the original code and periodically sync with the upstream. The process is outlined here: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork Obviously you still have to resolve conflicts, but you can set up a CI pipeline to periodically sync and update your Helm registry.
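The sync itself is plain git; a minimal manual version looks like this (the upstream URL and branch names are placeholders):

```bash
# One-time: register the original chart repository as "upstream"
git remote add upstream https://github.com/ORIGINAL_OWNER/ORIGINAL_CHART_REPO.git

# Periodically (or from CI): pull upstream changes into your fork
git fetch upstream
git checkout main                # your default branch
git merge upstream/main          # conflicts will point exactly at your customizations
git push origin main
```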
###Instruction: so I am creating a system composed of different components that are installed via helm charts. Since I needed different customizations for each of the charts, I created my own separated repositories, copied the charts there and added customizations. The question is: how do I conveniently upgrade the charts versions, while keeping the customizations? For some of the components it was an easy task, since custom templates were moved to different folder and when upgrade is needed I simply replace old files with the new one. The issue is that in some charts I needed to alter the original files directly, for example in the middle of a template adding necessary stuff. So, during the upgrade of such component I have to go through all files, compare them, remember what changes are mine and then update the whole repository. So yes, you guessed it - it's manual, inconvenient and error prune task. I tried looking for good practices how to maintain such repositories, but I couldn't find any answers, so I'd be more than happy to take any advice. ### Response: If your changes are very specific to your project and not something that can be merged into the original repo, or you have legal loopholes to jump through, you can fork the original code and periodically sync up with the upstream. The process is outlined here: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork Obviously you still have to resolve conflicts, but you can set up a CI pipeline to periodically sync and update your helm registry.
so I am creating a system composed of different components that are installed via helm charts. Since I needed different customizations for each of the charts, I created my own separated repositories, copied the charts there and added customizations. The question is: how do I conveniently upgrade the charts versions, while keeping the customizations? For some of the components it was an easy task, since custom templates were moved to different folder and when upgrade is needed I simply replace old files with the new one. The issue is that in some charts I needed to alter the original files directly, for example in the middle of a template adding necessary stuff. So, during the upgrade of such component I have to go through all files, compare them, remember what changes are mine and then update the whole repository. So yes, you guessed it - it's manual, inconvenient and error prune task. I tried looking for good practices how to maintain such repositories, but I couldn't find any answers, so I'd be more than happy to take any advice.
There is a Kubernetes blog post where the various aspects of these warnings are explained: https://kubernetes.io/blog/2020/09/03/warnings/#deprecation-warnings In summary, the warnings were introduced in version 1.19, are returned by the API server itself, and you can’t remove them easily (unless you write a client with the k8s.io/client-go library and customize how warnings are handled): https://kubernetes.io/blog/2020/09/03/warnings/#customize-client-handling The last resort might be to "throw away" the output: kubectl get psp -A 2>&1 | grep -vi "warn" | grep -vi "deprecat"
###Instruction: I am trying to understand where is this warning coming from ? I have disabled the PSP support in my cluster and indeed using k8s version lower than 1.25. But I want to understand and disable this warning. Is it possible ? Which controller is responsible to handle this WARNING? kubectl get psp -A Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ ### Response: There is a blog K8s where various aspects of the topic "Warnings" are explained. https://kubernetes.io/blog/2020/09/03/warnings/#deprecation-warnings In summary, these have been introduced since version 1.19 and you can’t remove them easily (unless you use the k8s.io/client-go library to modify the K8s API). https://kubernetes.io/blog/2020/09/03/warnings/#customize-client-handling The last resort might be to "throw away" the output: kubectl get psp -A 2>&1 | grep -vi "warn" | grep -vi "deprecat"
I am trying to understand where is this warning coming from ? I have disabled the PSP support in my cluster and indeed using k8s version lower than 1.25. But I want to understand and disable this warning. Is it possible ? Which controller is responsible to handle this WARNING? kubectl get psp -A Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
This was fixed by deploying the CRDs separately. There seem to be more issues with a few objects and CRDs within the kube-prometheus-stack and loki charts, however, I can close this one.
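For anyone hitting the same thing, a hedged sketch of the "deploy the CRDs separately" approach: tell Argo CD to skip the chart's CRDs and apply them yourself with server-side apply (the large Prometheus CRDs otherwise exceed the last-applied-configuration annotation size limit). The CRD directory below is a placeholder - take the actual files from the chart version you deploy.

```yaml
# In the Application manifest: render the chart without its CRDs
spec:
  source:
    helm:
      releaseName: kube-prometheus-stack
      skipCrds: true
```

Then apply the CRDs out of band, for example kubectl apply --server-side -f ./crds/, where ./crds/ contains the CRD manifests shipped with that chart version.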
###Instruction: I am using the non HA version of ArgoCD (v2.6.5) installed in a single node k3s cluster. The goal is to deploy a sample application together with kube-prometheus-stack, loki, tempo & minIO via Helm. However, when I create an "Application" in Github and reference it in Argocd, all of them are in "Out of sync" state. Once it tries to re-sync, they change the status to "Unknown". The installation of ArgoCD was done with the next command. (Basic install) kubectl create namespace argocd kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml And, as example, the kube-prometheus-stack Application I create in Github looks this way: apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: kube-prometheus-stack namespace: argocd spec: project: default source: chart: kube-prometheus-stack repoURL: https://prometheus-community.github.io/helm-charts targetRevision: 44.4.1 helm: releaseName: kube-prometheus-stack destination: server: "https://kubernetes.default.svc" namespace: observability Any idea what I could be missing? Thanks! ### Response: This was fixed by deploying the CRDs separately. There seem to be more issues with a few objects and CRDs within the kube-prometheus-stack and loki charts, however, I can close this one.
I am using the non HA version of ArgoCD (v2.6.5) installed in a single node k3s cluster. The goal is to deploy a sample application together with kube-prometheus-stack, loki, tempo & minIO via Helm. However, when I create an "Application" in Github and reference it in Argocd, all of them are in "Out of sync" state. Once it tries to re-sync, they change the status to "Unknown". The installation of ArgoCD was done with the next command. (Basic install) kubectl create namespace argocd kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml And, as example, the kube-prometheus-stack Application I create in Github looks this way: apiVersion: argoproj.io/v1alpha1 kind: Application metadata: name: kube-prometheus-stack namespace: argocd spec: project: default source: chart: kube-prometheus-stack repoURL: https://prometheus-community.github.io/helm-charts targetRevision: 44.4.1 helm: releaseName: kube-prometheus-stack destination: server: "https://kubernetes.default.svc" namespace: observability Any idea what I could be missing? Thanks!
There is no way to mount a volume onto a folder without hiding (shadowing) its existing contents. In my opinion the best approaches are: 1. The first one you reported (for large content): a. Create the PVC. b. Add an initContainer to your Deployment that mounts the volume at a DIFFERENT path from the directory containing the data to move/copy. c. Give the initContainer a "command" that copies/moves the content from the "source" directory to the mounted volume (the target). d. Mount the same PVC in the "main" container at the "source" directory path. 2. Create a K8s CronJob (or a Job that runs once, if the files are never modified) that syncs from one folder to the other (similar to point 1, but it avoids a long wait before the application Pod starts, since the initContainer is no longer needed). CronJob example: pay attention to file owners; you may need to run the job under the same serviceAccount that produced those files. 3. If they are static files, build the Docker image with all the contents of the folder already inside (Dockerfile COPY). https://docs.docker.com/engine/reference/builder/ I strongly recommend not using hostPath in PRODUCTION environments. https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
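A minimal sketch of option 1 above; the image name, PVC name and the /xyz path are placeholders taken from the question or assumed:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xyz-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xyz-app
  template:
    metadata:
      labels:
        app: xyz-app
    spec:
      volumes:
      - name: xyz-data
        persistentVolumeClaim:
          claimName: xyz-pvc                  # assumed PVC, created separately
      initContainers:
      - name: seed-xyz
        image: my-image:latest                # the image that already contains /xyz
        # Copy the baked-in contents into the volume only once, so later edits
        # made through the main container are not overwritten on restarts.
        command: ["sh", "-c", "[ -e /seed/.seeded ] || (cp -a /xyz/. /seed/ && touch /seed/.seeded)"]
        volumeMounts:
        - name: xyz-data
          mountPath: /seed                    # mounted at a DIFFERENT path, so /xyz is still visible
      containers:
      - name: app
        image: my-image:latest
        volumeMounts:
        - name: xyz-data
          mountPath: /xyz                     # the main container now sees the persistent, pre-populated /xyz
```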
###Instruction: I have a question about Kubernetes containers and persistent volumes. How can I make some of the preexisting folders of a Kubernetes container persistent? I know the usage of PVCs in Kubernetes but the problem about mounting a PVC to a container is that this operation -naturally- deletes everything in the mount path. eg. Say that we have an image which has a non-empty directory /xyz and we want to make this directory persistent. If we create a PVC and mount it to /xyz, we would lose everything inside /xyz as well (we don't want this to happen). So we want that directory to be persistent from the start with the files inside of it. I'm not so sure if Docker or any other container technology responds such a feature, so it may not be suitable for Kubernetes too. Would be glad if anyone can enlighten me about this. Thanks! My approaches so far: Copying: Creating a PVC for the directory contents and mounting it to an init container or job that copies /xyz to the /mounted/xyz, then mounting PVC to the main container's /xyz. This approach has some drawbacks if the directory is too fat or has some OS/runtime-specific configurations. Hostpath: Populating a directory with the contents of /xyz (eg. /in/host/xyz) before starting the container. Then mounting this path from host to the container. Not a good approach since it's hard to automate. ### Response: there is no way to mount a Volume in a certain folder without overwriting its contents. In my opinion the best approaches could be: The first one reported by you (for large content): a. Create PVC b. Add an initContainer to your Deployment that mount the Volume in a DIFFERENT path from the directory containing the data to move/copy c. Add to the initContainer a "command" field with the commands to move/copy the content from the "source" directory to the mounted volume (target) d. Mount to the "main" container the PVC used in the initContainer at the "source" directory path Create a K8s cronjob (or job that works once if the files are never modified) that syncs from one folder to another (similar to point 1, but avoid waiting a long time before the application Pod starts, since the initContainer is no longer needed). Cronjob example (Pay attention to file owners; you may need to run the job under the same serviceAccount that produced those files) If they are static files, build the Docker image with all the contents of the folder already inside (Dockerfile —> copy). https://docs.docker.com/engine/reference/builder/ I strongly recommend not using hostPath in PRODUCTION environments. https://kubernetes.io/docs/concepts/storage/volumes/#hostpath
I have a question about Kubernetes containers and persistent volumes. How can I make some of the preexisting folders of a Kubernetes container persistent? I know the usage of PVCs in Kubernetes but the problem about mounting a PVC to a container is that this operation -naturally- deletes everything in the mount path. eg. Say that we have an image which has a non-empty directory /xyz and we want to make this directory persistent. If we create a PVC and mount it to /xyz, we would lose everything inside /xyz as well (we don't want this to happen). So we want that directory to be persistent from the start with the files inside of it. I'm not so sure if Docker or any other container technology responds such a feature, so it may not be suitable for Kubernetes too. Would be glad if anyone can enlighten me about this. Thanks! My approaches so far: Copying: Creating a PVC for the directory contents and mounting it to an init container or job that copies /xyz to the /mounted/xyz, then mounting PVC to the main container's /xyz. This approach has some drawbacks if the directory is too fat or has some OS/runtime-specific configurations. Hostpath: Populating a directory with the contents of /xyz (eg. /in/host/xyz) before starting the container. Then mounting this path from host to the container. Not a good approach since it's hard to automate.
PREMISE: Every node in the cluster configures itself to listen on that assigned port and to forward traffic to one of the ready endpoints associated with that Service. You'll be able to contact the type: NodePort Service, from outside the cluster, by connecting to any node using the appropriate protocol (for example: TCP), and the appropriate port (as assigned to that Service). https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport First you should figure out what node the "posts" pod is running on. kubectl -n NAMESPACE get pods -owide | grep -i posts Once you figure out which node it's running on, retrieve its INTERNAL-IP: kubectl get nodes -owide After that you will be able to reach the node via Browser (NODE-INTERNAL-IP:NODEPORT-PORT). If you absolutely want to reach the service via "localhost" name, add the entry in the /etc/hosts file. For example: echo "127.0.0.1 NODE-INTERNAL-IP" >> /etc/hosts
###Instruction: I'm pretty new to Kubernetes, I have docker for mac, not minikube. I have the deployment file running with one pod (containing an express app) in the cluster, then i created a NodePort service. Inside the express app i have a get request to '/posts' and I want to access it from the localhost. i have this service file here: However, when i'm trying to access my localhost in the port that is given to the NodePort service, for example localhost:30134/posts, i get page isn't working on Chrome. Anyone has an idea why? ### Response: PREMISE: Every node in the cluster configures itself to listen on that assigned port and to forward traffic to one of the ready endpoints associated with that Service. You'll be able to contact the type: NodePort Service, from outside the cluster, by connecting to any node using the appropriate protocol (for example: TCP), and the appropriate port (as assigned to that Service). https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport First you should figure out what node the "posts" pod is running on. kubectl -n NAMESPACE get pods -owide | grep -i posts Once you figure out which node it's running on, retrieve its INTERNAL-IP: kubectl get nodes -owide After that you will be able to reach the node via Browser (NODE-INTERNAL-IP:NODEPORT-PORT). If you absolutely want to reach the service via "localhost" name, add the entry in the /etc/hosts file. For example: echo "127.0.0.1 NODE-INTERNAL-IP" >> /etc/hosts
I'm pretty new to Kubernetes, I have docker for mac, not minikube. I have the deployment file running with one pod (containing an express app) in the cluster, then i created a NodePort service. Inside the express app i have a get request to '/posts' and I want to access it from the localhost. i have this service file here: However, when i'm trying to access my localhost in the port that is given to the NodePort service, for example localhost:30134/posts, i get page isn't working on Chrome. Anyone has an idea why?
Since some time ago, ingress-nginx has allowed multiple origins. See the issue https://github.com/kubernetes/ingress-nginx/issues/5496 Example of usage: nginx.ingress.kubernetes.io/cors-allow-origin: "https://example.com, https://another.com, http://localhost:8000"
###Instruction: It's currently possible to allow a single domain or subdomain but I would like to allow multiple origins. I have tried many things like adding headers with snipets but had no success. This is my current ingress configuration: kind: Ingress apiVersion: extensions/v1beta1 metadata: name: nginx-ingress namespace: default selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/nginx-ingress uid: adcd75ab-b44b-420c-874e-abcfd1059592 resourceVersion: '259992616' generation: 7 creationTimestamp: '2020-06-10T12:15:18Z' annotations: cert-manager.io/cluster-issuer: letsencrypt-prod ingress.kubernetes.io/enable-cors: 'true' ingress.kubernetes.io/force-ssl-redirect: 'true' kubernetes.io/ingress.class: nginx kubernetes.io/tls-acme: 'true' nginx.ingress.kubernetes.io/cors-allow-credentials: 'true' nginx.ingress.kubernetes.io/cors-allow-headers: 'Authorization, X-Requested-With, Content-Type' nginx.ingress.kubernetes.io/cors-allow-methods: 'GET, PUT, POST, DELETE, HEAD, OPTIONS' nginx.ingress.kubernetes.io/cors-allow-origin: 'https://example.com' nginx.ingress.kubernetes.io/enable-cors: 'true' nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/secure-backends: 'true' I also would like to extend the cors-allow-origin like: nginx.ingress.kubernetes.io/cors-allow-origin: 'https://example.com, https://otherexample.com' Is it possible to allow multiple domains in other ways? ### Response: A some time ago, ingress-nginx allows multiple origins. See issue https://github.com/kubernetes/ingress-nginx/issues/5496 Example of usage: nginx.ingress.kubernetes.io/cors-allow-origin: "https://example.com, https://another.com, http://localhost:8000"
It's currently possible to allow a single domain or subdomain but I would like to allow multiple origins. I have tried many things like adding headers with snipets but had no success. This is my current ingress configuration: kind: Ingress apiVersion: extensions/v1beta1 metadata: name: nginx-ingress namespace: default selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/nginx-ingress uid: adcd75ab-b44b-420c-874e-abcfd1059592 resourceVersion: '259992616' generation: 7 creationTimestamp: '2020-06-10T12:15:18Z' annotations: cert-manager.io/cluster-issuer: letsencrypt-prod ingress.kubernetes.io/enable-cors: 'true' ingress.kubernetes.io/force-ssl-redirect: 'true' kubernetes.io/ingress.class: nginx kubernetes.io/tls-acme: 'true' nginx.ingress.kubernetes.io/cors-allow-credentials: 'true' nginx.ingress.kubernetes.io/cors-allow-headers: 'Authorization, X-Requested-With, Content-Type' nginx.ingress.kubernetes.io/cors-allow-methods: 'GET, PUT, POST, DELETE, HEAD, OPTIONS' nginx.ingress.kubernetes.io/cors-allow-origin: 'https://example.com' nginx.ingress.kubernetes.io/enable-cors: 'true' nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/secure-backends: 'true' I also would like to extend the cors-allow-origin like: nginx.ingress.kubernetes.io/cors-allow-origin: 'https://example.com, https://otherexample.com' Is it possible to allow multiple domains in other ways?
If the second code works, I'd assume there is an embedded struct, and looking at the code (assuming it's this) that appears to be the case: type Node struct { metav1.TypeMeta `json:",inline"` // Standard object's metadata. // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata // +optional metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"` // Spec defines the behavior of a node. // https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status // +optional Spec NodeSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"` // Most recently observed status of the node. // Populated by the system. // Read-only. // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status // +optional Status NodeStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"` } So the name field is likely either part of metav1.TypeMeta, or metav1.ObjectMeta, digging deeper it's on ObjectMeta, which means to define inline, you'll need to do something like: package main import ( corev1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) // ... node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{ Name: pod.Spec.NodeName}}
###Instruction: I'm trying to create a k8s node struct and assign default value to it. The following code complains "unknown field Name in struct literal" node = &corev1.Node{Name: pod.Spec.NodeName} However, this code is fine: node = &corev1.Node{} node.Name = pod.Spec.NodeName why? ### Response: If the second code works, I'd assume there is an embedded struct, and looking at the code (assuming it's this) that appears to be the case: type Node struct { metav1.TypeMeta `json:",inline"` // Standard object's metadata. // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata // +optional metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"` // Spec defines the behavior of a node. // https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status // +optional Spec NodeSpec `json:"spec,omitempty" protobuf:"bytes,2,opt,name=spec"` // Most recently observed status of the node. // Populated by the system. // Read-only. // More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status // +optional Status NodeStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"` } So the name field is likely either part of metav1.TypeMeta, or metav1.ObjectMeta, digging deeper it's on ObjectMeta, which means to define inline, you'll need to do something like: package main import ( corev1 "k8s.io/api/core/v1" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" ) // ... node := &corev1.Node{ObjectMeta: metav1.ObjectMeta{ Name: pod.Spec.NodeName}}
I'm trying to create a k8s node struct and assign default value to it. The following code complains "unknown field Name in struct literal" node = &corev1.Node{Name: pod.Spec.NodeName} However, this code is fine: node = &corev1.Node{} node.Name = pod.Spec.NodeName why?
The given commands do not work for me. I have created a few manifests that you can apply in your cluster to automatically delete the Pods matching the criteria with a Kubernetes CronJob: https://github.com/tyriis/i-see-dead-pods This is working for me: kubectl get pods \ --all-namespaces \ -o go-template \ --template='{{range .items}}{{printf "%s %s %s\n" .metadata.namespace .metadata.name .status.message}}{{end}}' \ | grep "Pod was terminated in response to imminent node shutdown." \ | awk '{print $1, $2}' \ | xargs -r -n2 kubectl delete pod -n
###Instruction: I have an issue with my GKE cluster. I am using two node pools: secondary - with standard set of highmen-n1 nodes, and primary - with preemptible highmem-n1 nodes. Issue is that I have many pods in Error/Completed status which are not cleared by k8s, all ran on preemptible set. THESE PODS ARE NOT JOBS. GKE documentation says that: "Preemptible VMs are Compute Engine VM instances that are priced lower than standard VMs and provide no guarantee of availability. Preemptible VMs offer similar functionality to Spot VMs, but only last up to 24 hours after creation." "When Compute Engine needs to reclaim the resources used by preemptible VMs, a preemption notice is sent to GKE. Preemptible VMs terminate 30 seconds after receiving a termination notice." Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/preemptible-vms And from the kubernetes documentation: "For failed Pods, the API objects remain in the cluster's API until a human or controller process explicitly removes them. The Pod garbage collector (PodGC), which is a controller in the control plane, cleans up terminated Pods (with a phase of Succeeded or Failed), when the number of Pods exceeds the configured threshold (determined by terminated-pod-gc-threshold in the kube-controller-manager). This avoids a resource leak as Pods are created and terminated over time." Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection So, from my understanding every 24 hours this set of nodes is changing, so it kills all the pods running on them and depending on graceful shutdown pods are ending up in Completed or Error state. Nevertheless, kubernetes is not clearing or removing them, so I have tons of pods in mentioned statuses in my cluster, which is not expected at all. I am attaching screenshots for reference. Example kubectl describe pod output: Status: Failed Reason: Terminated Message: Pod was terminated in response to imminent node shutdown. Apart from that, no events, logs, etc. GKE version: 1.24.7-gke.900 Both Node pools versions: 1.24.5-gke.600 Did anyone encounter such issue or knows what's going on there? Is there solution to clear it in a different way than creating some script and running it periodically? I tried digging in into GKE logs, but I couldn't find anything. I also tried to look for the answers in docs, but I've failed. ### Response: The given commands does not work for me. I have created a few manifests that you can apply in your cluster to automatically delete the Pods matching the criteria with a kubernetes CronJob. https://github.com/tyriis/i-see-dead-pods this is working for me kubectl get pods \ --all-namespaces \ -o go-template \ --template='{{range .items}}{{printf "%s %s %s\n" .metadata.namespace .metadata.name .status.message}}{{end}}' \ | grep "Pod was terminated in response to imminent node shutdown." \ | awk '{print $1, $2}' \ | xargs -r -n2 kubectl delete pod -n
I have an issue with my GKE cluster. I am using two node pools: secondary - with a standard set of highmem-n1 nodes, and primary - with preemptible highmem-n1 nodes. The issue is that I have many pods in Error/Completed status which are not cleared by k8s, all ran on the preemptible set. THESE PODS ARE NOT JOBS. GKE documentation says that: "Preemptible VMs are Compute Engine VM instances that are priced lower than standard VMs and provide no guarantee of availability. Preemptible VMs offer similar functionality to Spot VMs, but only last up to 24 hours after creation." "When Compute Engine needs to reclaim the resources used by preemptible VMs, a preemption notice is sent to GKE. Preemptible VMs terminate 30 seconds after receiving a termination notice." Ref: https://cloud.google.com/kubernetes-engine/docs/how-to/preemptible-vms And from the Kubernetes documentation: "For failed Pods, the API objects remain in the cluster's API until a human or controller process explicitly removes them. The Pod garbage collector (PodGC), which is a controller in the control plane, cleans up terminated Pods (with a phase of Succeeded or Failed), when the number of Pods exceeds the configured threshold (determined by terminated-pod-gc-threshold in the kube-controller-manager). This avoids a resource leak as Pods are created and terminated over time." Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-garbage-collection So, from my understanding, every 24 hours this set of nodes changes, which kills all the pods running on them, and depending on graceful shutdown the pods end up in Completed or Error state. Nevertheless, Kubernetes is not clearing or removing them, so I have tons of pods in the mentioned statuses in my cluster, which is not expected at all. I am attaching screenshots for reference. Example kubectl describe pod output: Status: Failed Reason: Terminated Message: Pod was terminated in response to imminent node shutdown. Apart from that, no events, logs, etc. GKE version: 1.24.7-gke.900 Both Node pools versions: 1.24.5-gke.600 Has anyone encountered such an issue or knows what's going on there? Is there a solution to clear them in a different way than creating some script and running it periodically? I tried digging into GKE logs, but I couldn't find anything. I also tried to look for answers in the docs, but failed.
Try an ingress config like the following: apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: certmanager.k8s.io/cluster-issuer: core-prod kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/secure-backends: "true" nginx.ingress.kubernetes.io/ssl-redirect: "true" nginx.ingress.kubernetes.io/websocket-services: ws-service nginx.org/websocket-services: ws-service name: core-ingress spec: rules: - host: test.io http: paths: - backend: serviceName: ws-service servicePort: 80 tls: - hosts: - test.io secretName: core-secret
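The extensions/v1beta1 Ingress API used above has been removed in newer Kubernetes versions, so on a current cluster (like the networking.k8s.io/v1 manifests in the question) roughly the same idea would look like the sketch below; the host, service, and secret names are placeholders carried over from the example, and only the timeout and websocket-related annotations are kept:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: core-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"    # keep long-lived socket connections open
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/websocket-services: ws-service   # carried over from the example above
    nginx.org/websocket-services: ws-service
spec:
  rules:
  - host: test.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: ws-service
            port:
              number: 80
  tls:
  - hosts:
    - test.io
    secretName: core-secret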
###Instruction: I am trying to make a socket connection from a React application to a Node.js server hosted on GKE. I am getting the connection error "Connection closed before it was established". The HTTP connection is working fine. dev.host.com is used for the frontend and dev.host.com/api is used for the backend and included in the ingress.yml file below. Could the issue be due to this setup, or have I missed some configuration? I have configured 3 replicas. GKE Configuration for frontend: frontend-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: frontend-deploy spec: replicas: 3 selector: matchLabels: app: frontend template: metadata: labels: app: frontend spec: containers: - image: image-frontend:latest name: vektor-fe imagePullPolicy: Always livenessProbe: ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: frontend-service spec: selector: app: frontend ports: - port: 3000 targetPort: 80 type: ClusterIP ingress.yml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-ingress annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/ssl-redirect: "true" spec: tls: - hosts: - dev.host.com secretName: my-ingress-tls rules: - host: dev.host.com http: paths: - path: / pathType: ImplementationSpecific backend: service: name: frontend-service port: number: 3000 - path: /api pathType: ImplementationSpecific backend: service: name: backend-service port: number: 4001 Backend Configuration backend-deploy.yaml apiVersion: apps/v1 kind: Deployment metadata: name: backend-deploy spec: replicas: 1 selector: matchLabels: app: app template: metadata: labels: app: app spec: containers: - image: my-app:latest name: vektor-be imagePullPolicy: Always ports: - containerPort: 4000 --- apiVersion: v1 kind: Service metadata: name: backend-service spec: ports: - port: 4001 targetPort: 4000 selector: app: app type: ClusterIP I tried a socket connection to dev.host.com at port 4001 but got the error "Connection closed before it was established". socket.io-client is used in React and socket.io is used in the backend. ### Response: Try an ingress config like the following: apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: certmanager.k8s.io/cluster-issuer: core-prod kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/proxy-read-timeout: "1800" nginx.ingress.kubernetes.io/proxy-send-timeout: "1800" nginx.ingress.kubernetes.io/rewrite-target: / nginx.ingress.kubernetes.io/secure-backends: "true" nginx.ingress.kubernetes.io/ssl-redirect: "true" nginx.ingress.kubernetes.io/websocket-services: ws-service nginx.org/websocket-services: ws-service name: core-ingress spec: rules: - host: test.io http: paths: - backend: serviceName: ws-service servicePort: 80 tls: - hosts: - test.io secretName: core-secret
I am trying to make a socket connection from a React application to a Node.js server hosted on GKE. I am getting the connection error "Connection closed before it was established". The HTTP connection is working fine. dev.host.com is used for the frontend and dev.host.com/api is used for the backend and included in the ingress.yml file below. Could the issue be due to this setup, or have I missed some configuration? I have configured 3 replicas. GKE Configuration for frontend: frontend-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: frontend-deploy spec: replicas: 3 selector: matchLabels: app: frontend template: metadata: labels: app: frontend spec: containers: - image: image-frontend:latest name: vektor-fe imagePullPolicy: Always livenessProbe: ports: - containerPort: 80 --- apiVersion: v1 kind: Service metadata: name: frontend-service spec: selector: app: frontend ports: - port: 3000 targetPort: 80 type: ClusterIP ingress.yml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-ingress annotations: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/ssl-redirect: "true" spec: tls: - hosts: - dev.host.com secretName: my-ingress-tls rules: - host: dev.host.com http: paths: - path: / pathType: ImplementationSpecific backend: service: name: frontend-service port: number: 3000 - path: /api pathType: ImplementationSpecific backend: service: name: backend-service port: number: 4001 Backend Configuration backend-deploy.yaml apiVersion: apps/v1 kind: Deployment metadata: name: backend-deploy spec: replicas: 1 selector: matchLabels: app: app template: metadata: labels: app: app spec: containers: - image: my-app:latest name: vektor-be imagePullPolicy: Always ports: - containerPort: 4000 --- apiVersion: v1 kind: Service metadata: name: backend-service spec: ports: - port: 4001 targetPort: 4000 selector: app: app type: ClusterIP I tried a socket connection to dev.host.com at port 4001 but got the error "Connection closed before it was established". socket.io-client is used in React and socket.io is used in the backend.
It isn't possible to do this in a single step, but you can: Access the K8s cluster where the PVC exists Create a Deployment (or single Pod) that mounts the PVC on a path you prefer (for example /xyz) Run kubectl -n NAMESPACE cp POD_NAME:/xyz /tmp/ to locally copy the contents of the /xyz folder to the /tmp path Log out from that K8s cluster Log in to the K8s cluster where the data will be migrated Create a new PVC Create a Deployment (or single Pod) that mounts the PVC on a path you prefer (for example /new-xyz) Run kubectl -n NAMESPACE cp /tmp/xyz/ POD_NAME:/new-xyz/ to copy the local content to the path /new-xyz
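A minimal sketch of the helper Pod used in those mount-and-copy steps, assuming the PVC from the question (gitlab/gitlab-prometheus-server) on the source side; the pod name, image, and mount path are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-copy-helper                # hypothetical name
  namespace: gitlab                    # assumption: the namespace of the PVC
spec:
  containers:
  - name: helper
    image: busybox:stable              # any small image that can idle works
    command: ["sh", "-c", "tail -f /dev/null"]   # keep the pod running so kubectl cp can reach it
    volumeMounts:
    - name: data
      mountPath: /xyz                  # the path referenced by kubectl cp
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: gitlab-prometheus-server   # assumption: the claim to copy from (or the new claim on the target side)

With this pod running, kubectl -n gitlab cp pvc-copy-helper:/xyz /tmp/xyz pulls the data locally, and an equivalent pod mounting the new PVC at /new-xyz receives it with kubectl -n gitlab cp /tmp/xyz pvc-copy-helper:/new-xyz.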
###Instruction: I have a k8s cluster which uses rancher.io/local-path. There is a PV there: $ kubectl describe pv pvc-979af6ff-3617-4707-8a2c-b6c4ac00043f Name: pvc-979af6ff-3617-4707-8a2c-b6c4ac00043f Labels: <none> Annotations: pv.kubernetes.io/provisioned-by: rancher.io/local-path Finalizers: [kubernetes.io/pv-protection] StorageClass: local-path Status: Bound Claim: gitlab/gitlab-prometheus-server Reclaim Policy: Delete Access Modes: RWO VolumeMode: Filesystem Capacity: 8Gi Node Affinity: Required Terms: Term 0: kubernetes.io/hostname in [agent-1] Message: Source: Type: HostPath (bare host directory volume) Path: /var/lib/rancher/k3s/storage/pvc-979af6ff-3617-4707-8a2c-b6c4ac00043f_gitlab_gitlab-prometheus-server HostPathType: DirectoryOrCreate Events: <none> I would like to move that PV to another node, agent-5. How can I achieve that? An important point: the PV in question is rancher.io/local-path provisioned. ### Response: It isn't possible to do this in a single step, but you can: Access the K8s cluster where the PVC exists Create a Deployment (or single Pod) that mounts the PVC on a path you prefer (for example /xyz) Run kubectl -n NAMESPACE cp POD_NAME:/xyz /tmp/ to locally copy the contents of the /xyz folder to the /tmp path Log out from that K8s cluster Log in to the K8s cluster where the data will be migrated Create a new PVC Create a Deployment (or single Pod) that mounts the PVC on a path you prefer (for example /new-xyz) Run kubectl -n NAMESPACE cp /tmp/xyz/ POD_NAME:/new-xyz/ to copy the local content to the path /new-xyz
I have a k8s cluster which uses rancher.io/local-path. There is a PV there: $ kubectl describe pv pvc-979af6ff-3617-4707-8a2c-b6c4ac00043f Name: pvc-979af6ff-3617-4707-8a2c-b6c4ac00043f Labels: <none> Annotations: pv.kubernetes.io/provisioned-by: rancher.io/local-path Finalizers: [kubernetes.io/pv-protection] StorageClass: local-path Status: Bound Claim: gitlab/gitlab-prometheus-server Reclaim Policy: Delete Access Modes: RWO VolumeMode: Filesystem Capacity: 8Gi Node Affinity: Required Terms: Term 0: kubernetes.io/hostname in [agent-1] Message: Source: Type: HostPath (bare host directory volume) Path: /var/lib/rancher/k3s/storage/pvc-979af6ff-3617-4707-8a2c-b6c4ac00043f_gitlab_gitlab-prometheus-server HostPathType: DirectoryOrCreate Events: <none> I would like to move that PV to another node, agent-5. How can I achieve that? An important point: the PV in question is rancher.io/local-path provisioned.
Debug pointers: Application Level Check whether your service actually has a /test endpoint and what it responds with, because based on your test requests, with the IP you made a request to / and with DNS you made a request to /test. K8s Service Level Port-forward the service and try it from your local machine: kubectl port-forward svc/http-svc 80:80 Ingress Level Is the error thrown by nginx? If yes, the nginx ingress controller is throwing it. Check whether you used the proper ingress class name in your ingress object, based on what you configured while deploying your nginx ingress controller. Your logs show [upstream-default-backend], which suggests that when the controller receives the request it has no server configuration for this host. This happens because the nginx ingress controller only adds server configuration for hosts whose ingress objects carry a matching ingress class.
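For the ingress class pointer, a hedged sketch of what that looks like on the question's manifest; spec.ingressClassName is the networking.k8s.io/v1 field, and the value nginx is an assumption that should match whatever kubectl get ingressclass reports for your controller:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  ingressClassName: nginx        # assumption: the class registered by your ingress-nginx deployment
  rules:
  - host: "test.com"
    http:
      paths:
      - pathType: Prefix
        path: "/test"
        backend:
          service:
            name: http-svc
            port:
              number: 80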
###Instruction: ENV: k8s: v1.20.5 ingress-nginx: v1.6.4 I created the ingress-nginx-controller from the official yaml: https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.6.4/deploy/static/provider/baremetal/deploy.yaml and I changed the network type to host network: hostNetwork: true Then I created a deployment to create a backend server. Below is the yaml file: apiVersion: v1 kind: Service metadata: name: http-svc spec: selector: app: http ports: - protocol: TCP port: 80 targetPort: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: http-deployment labels: app: http spec: replicas: 2 selector: matchLabels: app: http template: metadata: labels: app: http spec: containers: - name: http image: hashicorp/http-echo:alpine args: ["-text", "hello", "-listen=:80"] --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test-ingress spec: rules: - host: "test.com" http: paths: - pathType: Prefix path: "/test" backend: service: name: http-svc port: number: 80 Everything looks to be running fine, but I still got a "500 Internal Server Error" when I access the web server via ingress-nginx. Below is the info for the resources: #kubectl describe ingress Name: test-ingress Namespace: default Address: Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>) Rules: Host Path Backends ---- ---- -------- test.com /test http-svc:80 (192.168.107.203:80,192.168.122.81:80) Annotations: <none> Events: <none> #kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE http-svc ClusterIP 10.100.58.107 <none> 80/TCP 48m kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 283d #kubectl get pods -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES http-deployment-764c4597c5-rdks7 1/1 Running 0 48m 192.168.122.81 k8s-node4 <none> <none> http-deployment-764c4597c5-rf99t 1/1 Running 0 48m 192.168.107.203 k8s-node3 <none> <none> #kubectl get pods -n ingress-nginx -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ingress-nginx-admission-create-kb64z 0/1 Completed 0 44m 192.168.107.204 k8s-node3 <none> <none> ingress-nginx-admission-patch-xmswb 0/1 Completed 1 44m 192.168.122.82 k8s-node4 <none> <none> ingress-nginx-controller-69695968f9-7dtxf 1/1 Running 0 44m 10.1.1.12 k8s-node2 <none> <none> I can directly access my backend server via the service IP, so I think the problem may be that ingress-nginx cannot forward the request to the backend? Below is the log from the ingress-nginx-controller: 2023/03/19 07:31:54 [error] 25#25: *25199 could not find named location "@custom_upstream-default-backend_404", client: 127.0.0.1, server: , request: "GET /test HTTP/1.1", host: "test.com" 2023/03/19 07:31:54 [error] 26#26: *25201 could not find named location "@custom_upstream-default-backend_404", client: 127.0.0.1, server: , request: "GET / HTTP/1.0", host: "test.com" 10.1.1.11 - - [19/Mar/2023:07:31:54 +0000] "GET /test HTTP/1.1" 500 170 "-" "curl/7.29.0" 76 0.000 [upstream-default-backend] [] 127.0.0.1:8181 : 127.0.0.1:8181 0 : 170 0.000 : 0.000 500 : 500 28acaef695f43cac09e7dfc932511c92 It looks like it forwarded the request to localhost, but why? Did I miss something? Below are the different responses from the different ways of accessing it: #curl http://10.100.58.107 hello #curl http://test.com/test <html> <head><title>500 Internal Server Error</title></head> <body> <center><h1>500 Internal Server Error</h1></center> <hr><center>nginx</center> </body> </html> I want to access the backend server via ingress-nginx successfully.
### Response: Debug pointers: Application Level Check whether your service actually has a /test endpoint and what it responds with, because based on your test requests, with the IP you made a request to / and with DNS you made a request to /test. K8s Service Level Port-forward the service and try it from your local machine: kubectl port-forward svc/http-svc 80:80 Ingress Level Is the error thrown by nginx? If yes, the nginx ingress controller is throwing it. Check whether you used the proper ingress class name in your ingress object, based on what you configured while deploying your nginx ingress controller. Your logs show [upstream-default-backend], which suggests that when the controller receives the request it has no server configuration for this host. This happens because the nginx ingress controller only adds server configuration for hosts whose ingress objects carry a matching ingress class.
ENV: k8s: v1.20.5 ingress-nginx: v1.6.4 I created the ingress-nginx-controller from the official yaml: https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.6.4/deploy/static/provider/baremetal/deploy.yaml and I changed the network type to host network: hostNetwork: true Then I created a deployment to create a backend server. Below is the yaml file: apiVersion: v1 kind: Service metadata: name: http-svc spec: selector: app: http ports: - protocol: TCP port: 80 targetPort: 80 --- apiVersion: apps/v1 kind: Deployment metadata: name: http-deployment labels: app: http spec: replicas: 2 selector: matchLabels: app: http template: metadata: labels: app: http spec: containers: - name: http image: hashicorp/http-echo:alpine args: ["-text", "hello", "-listen=:80"] --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: test-ingress spec: rules: - host: "test.com" http: paths: - pathType: Prefix path: "/test" backend: service: name: http-svc port: number: 80 Everything looks to be running fine, but I still got a "500 Internal Server Error" when I access the web server via ingress-nginx. Below is the info for the resources: #kubectl describe ingress Name: test-ingress Namespace: default Address: Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>) Rules: Host Path Backends ---- ---- -------- test.com /test http-svc:80 (192.168.107.203:80,192.168.122.81:80) Annotations: <none> Events: <none> #kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE http-svc ClusterIP 10.100.58.107 <none> 80/TCP 48m kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 283d #kubectl get pods -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES http-deployment-764c4597c5-rdks7 1/1 Running 0 48m 192.168.122.81 k8s-node4 <none> <none> http-deployment-764c4597c5-rf99t 1/1 Running 0 48m 192.168.107.203 k8s-node3 <none> <none> #kubectl get pods -n ingress-nginx -owide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES ingress-nginx-admission-create-kb64z 0/1 Completed 0 44m 192.168.107.204 k8s-node3 <none> <none> ingress-nginx-admission-patch-xmswb 0/1 Completed 1 44m 192.168.122.82 k8s-node4 <none> <none> ingress-nginx-controller-69695968f9-7dtxf 1/1 Running 0 44m 10.1.1.12 k8s-node2 <none> <none> I can directly access my backend server via the service IP, so I think the problem may be that ingress-nginx cannot forward the request to the backend? Below is the log from the ingress-nginx-controller: 2023/03/19 07:31:54 [error] 25#25: *25199 could not find named location "@custom_upstream-default-backend_404", client: 127.0.0.1, server: , request: "GET /test HTTP/1.1", host: "test.com" 2023/03/19 07:31:54 [error] 26#26: *25201 could not find named location "@custom_upstream-default-backend_404", client: 127.0.0.1, server: , request: "GET / HTTP/1.0", host: "test.com" 10.1.1.11 - - [19/Mar/2023:07:31:54 +0000] "GET /test HTTP/1.1" 500 170 "-" "curl/7.29.0" 76 0.000 [upstream-default-backend] [] 127.0.0.1:8181 : 127.0.0.1:8181 0 : 170 0.000 : 0.000 500 : 500 28acaef695f43cac09e7dfc932511c92 It looks like it forwarded the request to localhost, but why? Did I miss something? Below are the different responses from the different ways of accessing it: #curl http://10.100.58.107 hello #curl http://test.com/test <html> <head><title>500 Internal Server Error</title></head> <body> <center><h1>500 Internal Server Error</h1></center> <hr><center>nginx</center> </body> </html> I want to access the backend server via ingress-nginx successfully.
As of Kubernetes 1.22, swap memory is supported (as an alpha feature). Configure the node's host OS with a swap memory device (e.g., with swapon or /etc/fstab); Configure kubelet on that node to: still start despite detecting the presence of swap (disable fail-on-swap), enable the NodeSwap feature gate, and configure MemorySwap.SwapBehavior=UnlimitedSwap to let Kubernetes workloads use swap memory. Note that there is currently no support for setting swap limits individually per workload (although this is planned for the beta). Either no containers are permitted to use any swap, or all containers can use unlimited swap memory. (If workloads are not permitted to use swap then, depending on the Linux kernel cgroups version, they could still get swapped anyway. Prior to cgroups v2, processes were not able to enforce separate limits for swap and physical memory, but only for the combined total.) See the docs, and the Kubernetes enhancement proposal (KEP) cited therein, for more details.
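A minimal sketch of the kubelet side of that setup, assuming the node is configured through a KubeletConfiguration file (for example /var/lib/kubelet/config.yaml) and already has a swap device enabled in the host OS:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false                # let kubelet start even though swap is present on the node
featureGates:
  NodeSwap: true                 # alpha feature gate as of Kubernetes 1.22
memorySwap:
  swapBehavior: UnlimitedSwap    # allow workloads to use swap (no per-workload limits yet)

The kubelet needs to be restarted after changing this file for the new settings to take effect.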
###Instruction: I need to add swap memory in a Kubernetes pod, so that if any pod exceeds the available RAM it can use swap memory from the hard disk. Is that possible in Kubernetes? ### Response: As of Kubernetes 1.22, swap memory is supported (as an alpha feature). Configure the node's host OS with a swap memory device (e.g., with swapon or /etc/fstab); Configure kubelet on that node to: still start despite detecting the presence of swap (disable fail-on-swap), enable the NodeSwap feature gate, and configure MemorySwap.SwapBehavior=UnlimitedSwap to let Kubernetes workloads use swap memory. Note that there is currently no support for setting swap limits individually per workload (although this is planned for the beta). Either no containers are permitted to use any swap, or all containers can use unlimited swap memory. (If workloads are not permitted to use swap then, depending on the Linux kernel cgroups version, they could still get swapped anyway. Prior to cgroups v2, processes were not able to enforce separate limits for swap and physical memory, but only for the combined total.) See the docs, and the Kubernetes enhancement proposal (KEP) cited therein, for more details.
I need to add swap memory in a Kubernetes pod, so that if any pod exceeds the available RAM it can use swap memory from the hard disk. Is that possible in Kubernetes?