Dataset columns: input (stringclasses, 1 value), output (stringlengths, 30 to 28.3k), text (stringlengths, 151 to 31.8k), instruction (stringlengths, 49 to 30.3k).
A NodePort exposes the Service on a static port on each node's IP, so it is reached via a node IP rather than the pod IP. You might also need a firewall rule that allows TCP traffic on your node port, 32004 in this case. On Ubuntu you can do something like: sudo ufw allow 32004/tcp And check port status with: sudo ufw status Once you are sure the port is open you can curl node-ip:node-port, for example: curl http://<node-ip>:32004 For further info check the Kubernetes official documentation.
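As a quick illustration of the distinction, here is a minimal, hedged sketch of commands for this setup, assuming the Service is named my-web and the cluster is minikube (with some macOS drivers the node IP is not directly reachable, in which case minikube's service helper is the easier route):

# From inside the cluster (e.g. the test pod): pod IP + container port, or the Service name + port 80
curl http://10.244.0.10:80
curl http://my-web.default.svc.cluster.local:80

# From the host machine: node IP + NodePort
NODE_IP=$(minikube ip)     # the node's IP, not the pod IP
curl http://${NODE_IP}:32004

# Or let minikube print (and tunnel, if needed) a ready-made URL for the NodePort Service
minikube service my-web --url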
###Instruction: I have created 2 pods within the same cluster. One service is initialized as kubectl create deployment my-web --image=nginx --port=80 kubectl expose deployment my-web --target-port=80 --type=NodePort to my understanding, this creates a deployment with one pod my-web-<string> and exposes a port. With kubectl describe services my-web, I find that the following information: Port: <unset> 80/TCP TargetPort: 80/TCP NodePort: <unset> 32004/TCP Endpoints: 10.244.0.10:80 testing pod: kubectl run test-pod --image=nginx --restart=Never this creates another pod and I try to curl the nginx of my-web pod with the command curl 10.244.0.10:32004. That request times out. But somehow it works when I use curl 10.244.0:80. Why is that? I thought the service was exposed on port 32004 outside the my-web pod? Please also let me know what IP and port to curl from my host machine to access the my-web pod. I am running the cluster from minikube on MacOS. Thanks for the help! I try to curl the nginx of my-web pod with the command curl 10.244.0.10:32004. That request times out. But somehow it works when I use curl 10.244.0:80. ### Response: NodePort is used to access a service within the cluster scope. You might create a firewall rule that allows TCP traffic on your node port. Create a firewall rule that allows TCP traffic on port 32004. On Ubuntu you can do something like: sudo ufw allow 32004/tcp And check port status with: sudo ufw status Once you are sure the port is opened you can curl the ip:port curl http://10.244.0.10:32004 For further info check the Kubernetes official documentation.
I have created 2 pods within the same cluster. One service is initialized as kubectl create deployment my-web --image=nginx --port=80 kubectl expose deployment my-web --target-port=80 --type=NodePort to my understanding, this creates a deployment with one pod my-web-<string> and exposes a port. With kubectl describe services my-web, I find that the following information: Port: <unset> 80/TCP TargetPort: 80/TCP NodePort: <unset> 32004/TCP Endpoints: 10.244.0.10:80 testing pod: kubectl run test-pod --image=nginx --restart=Never this creates another pod and I try to curl the nginx of my-web pod with the command curl 10.244.0.10:32004. That request times out. But somehow it works when I use curl 10.244.0:80. Why is that? I thought the service was exposed on port 32004 outside the my-web pod? Please also let me know what IP and port to curl from my host machine to access the my-web pod. I am running the cluster from minikube on MacOS. Thanks for the help! I try to curl the nginx of my-web pod with the command curl 10.244.0.10:32004. That request times out. But somehow it works when I use curl 10.244.0:80.
Sara's answer above pointed me in the right direction. Copying the files works, but this requires a restart of the tomcat service which reverts the changes. I had to use 'cp -r' in the deployment yaml as per below: spec: containers: - name: tomcat image: tomcat ports: - containerPort: 8080 volumeMounts: - mountPath: /usr/local/tomcat/webapps.dist/manager/META-INF/context.xml name: tomcat-configmap subPath: context1 - mountPath: /usr/local/tomcat/webapps.dist/host-manager/META-INF/context.xml name: tomcat-configmap subPath: context2 - mountPath: /usr/local/tomcat/conf/tomcat-users.xml name: tomcat-configmap subPath: tomcat-users command: ["/bin/bash"] args: [ "-c", "cp -r /usr/local/tomcat/webapps.dist/* /usr/local/tomcat/webapps/ && catalina.sh start; sleep inf" ] volumes: - name: tomcat-configmap configMap: name: tomcat-configmap
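The volumeMounts above assume a ConfigMap named tomcat-configmap whose keys match the subPath values. The real file contents depend on your setup, but a hedged sketch of its shape could look like this (the XML snippets are placeholders, not complete files):

apiVersion: v1
kind: ConfigMap
metadata:
  name: tomcat-configmap
data:
  context1: |
    <Context antiResourceLocking="false" privileged="true" />
  context2: |
    <Context antiResourceLocking="false" privileged="true" />
  tomcat-users: |
    <tomcat-users>
      <role rolename="manager-gui"/>
      <user username="admin" password="change-me" roles="manager-gui"/>
    </tomcat-users>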
###Instruction: Hello I am trying to deploy a simple tomcat service. Below are the details: 1.minikube version: v1.8.1 2.OS: mac 3.The deployment.yaml file (I am in the directory of the yaml file) apiVersion: apps/v1 kind: Deployment metadata: name: tomcat-deployment spec: selector: matchLabels: app: tomcat replicas: 1 template: metadata: labels: app: tomcat spec: containers: - name: tomcat image: tomcat:9.0 ports: - containerPort: 8080 4.Commands used to deploy and expose the service kubectl apply -f deployment.yaml kubectl expose deployment tomcat-deployment --type=NodePort minikube service tomcat-deployment --url curl [URL] I get a 404 when I curl the URL. I am unsure if there's an issue with the deployment.yaml file or some minikube settings. ### Response: Sara's answer above pointed me to the right direction. Copying the files works but this requires a restart of the tomcat service which reverts the changes. I had to use 'cp -r' on the deployment yaml as per below: spec: containers: - name: tomcat image: tomcat ports: - containerPort: 8080 volumeMounts: - mountPath: /usr/local/tomcat/webapps.dist/manager/META-INF/context.xml name: tomcat-configmap subPath: context1 - mountPath: /usr/local/tomcat/webapps.dist/host-manager/META-INF/context.xml name: tomcat-configmap subPath: context2 mountPath: /usr/local/tomcat/conf/tomcat-users.xml name: tomcat-configmap subPath: tomcat-users command: ["/bin/bash"] args: [ "-c", "cp -r /usr/local/tomcat/webapps.dist/* /usr/local/tomcat/webapps/ && catalina.sh start; sleep inf" ] volumes: - name: tomcat-configmap configMap: name: tomcat-configmap
Hello I am trying to deploy a simple tomcat service. Below are the details: 1.minikube version: v1.8.1 2.OS: mac 3.The deployment.yaml file (I am in the directory of the yaml file) apiVersion: apps/v1 kind: Deployment metadata: name: tomcat-deployment spec: selector: matchLabels: app: tomcat replicas: 1 template: metadata: labels: app: tomcat spec: containers: - name: tomcat image: tomcat:9.0 ports: - containerPort: 8080 4.Commands used to deploy and expose the service kubectl apply -f deployment.yaml kubectl expose deployment tomcat-deployment --type=NodePort minikube service tomcat-deployment --url curl [URL] I get a 404 when I curl the URL. I am unsure if there's an issue with the deployment.yaml file or some minikube settings.
In principle, a go get sigs.k8s.io/cluster-api@v1.4.2 (done in the folder where your go.mod is) should be enough to: update your go.mod, add the library in your $GOPATH and enable VSCode auto-import to work. That means, when you start typing the name of a struct, like KubeadmControlPlane, the VSCode Go extension should suggest an auto-import if it can find a matching package in your GOPATH or in your project's vendor directory. If not, the manual process would be: Identify the API Group and Version of the CRD: This information is usually found in the apiVersion field of the CRD YAML file. For example, the KubeadmControlPlane is part of the controlplane.cluster.x-k8s.io/v1beta1 API group and version. Find the Go Package for the API Group: You need to find the corresponding Go package for this API group. In the case of the KubeadmControlPlane, it is part of the sigs.k8s.io/cluster-api project and the specific package path is sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1. A search in pkg.go.dev works too, pending an official API to lookup packages (issue 36785). Identify the Go Struct for the CRD: The Go struct is usually named similarly to the Kind of the CRD. In this case, it is KubeadmControlPlane. Create the Go Import Statement: Once you have the package path and struct name, you can create the Go import statement. For example: import ( kubeadm "sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1" )
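Once the import resolves, a minimal sketch of reading such an object with controller-runtime could look like the following; the scheme registration, kubeconfig handling and the object name/namespace are assumptions for illustration, not something prescribed by cluster-api:

package main

import (
    "context"
    "fmt"

    "k8s.io/apimachinery/pkg/runtime"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"

    controlplanev1 "sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1"
)

func main() {
    // Register the CRD's Go types so the client can decode KubeadmControlPlane objects.
    scheme := runtime.NewScheme()
    _ = controlplanev1.AddToScheme(scheme)

    // Build a client from the usual kubeconfig / in-cluster config lookup.
    c, err := client.New(ctrl.GetConfigOrDie(), client.Options{Scheme: scheme})
    if err != nil {
        panic(err)
    }

    // Fetch one KubeadmControlPlane by namespace/name (placeholder values).
    kcp := &controlplanev1.KubeadmControlPlane{}
    if err := c.Get(context.TODO(), client.ObjectKey{Namespace: "default", Name: "my-control-plane"}, kcp); err != nil {
        panic(err)
    }
    fmt.Println(kcp.Spec.Version)
}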
###Instruction: Imagine you want to get kind/struct called KubeadmControlPlane from the kubernetes API server. This means you need to import the related struct into your code. A matching import statement for KubeadmControlPlane would be: kubeadm "sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1" Up to now I need to much time to find a matching import statement. I use vscode. How do you get from the CRD kind to an import statement? ### Response: In principle, a go get sigs.k8s.io/cluster-api@v1.4.2 (done in the folder where your go.mod is) should be enough to: update your go.mod, add the library in your $GOPATH and enable VSCode auto-import to work. That means, when you start typing the name of a struct, like KubeadmControlPlane, the VSCode Go extension should suggest an auto-import if it can find a matching package in your GOPATH or in your project's vendor directory. If not, the manual process would be: Identify the API Group and Version of the CRD: This information is usually found in the apiVersion field of the CRD YAML file. For example, the KubeadmControlPlane is part of the controlplane.cluster.x-k8s.io/v1beta1 API group and version. Find the Go Package for the API Group: You need to find the corresponding Go package for this API group. In the case of the KubeadmControlPlane, it is part of the sigs.k8s.io/cluster-api project and the specific package path is sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1. A search in pkg.go.dev works too, pending an official API to lookup packages (issue 36785). Identify the Go Struct for the CRD: The Go struct is usually named similarly to the Kind of the CRD. In this case, it is KubeadmControlPlane. Create the Go Import Statement: Once you have the package path and struct name, you can create the Go import statement. For example: import ( kubeadm "sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1" )
Imagine you want to get a kind/struct called KubeadmControlPlane from the kubernetes API server. This means you need to import the related struct into your code. A matching import statement for KubeadmControlPlane would be: kubeadm "sigs.k8s.io/cluster-api/controlplane/kubeadm/api/v1beta1" Up to now I need too much time to find a matching import statement. I use vscode. How do you get from the CRD kind to an import statement?
Answering my own question after having determined the root cause of the issue. Regarding Linkerd, everything was correctly set up. The main precaution that one needs to take is adding the linkerd-await binary to the Flink image and making sure to override the entrypoint for the jobmanager, since otherwise you will run into issues when upgrading your jobs. The jobmanager won't kill the Linkerd proxy, and because of that it will hang around with NotReady status. Again, that is easily solved by wrapping the main command in a linkerd-await call. So, first add the linkerd-await binary to your docker image: # Add linkerd-await and linkerd-entrypoint.sh USER root RUN apt-get update && apt-get install -y wget RUN wget https://github.com/linkerd/linkerd-await/releases/download/release%2Fv0.2.7/linkerd-await-v0.2.7-amd64 -O ./linkerd-await && chmod +x ./linkerd-await COPY scripts/flink/linkerd-entrypoint.sh ./linkerd-entrypoint.sh Then, for the jobmanager only, override the entrypoint like this: spec: containers: - name: flink-jobmanager command: - linkerd-entrypoint.sh # defined above Alternatively, one could use the LINKERD_DISABLED or LINKERD_AWAIT_DISABLED env vars for bypassing the linkerd-await wrapper. For more info on using jobs & Linkerd consult the following resources: https://itnext.io/three-ways-to-use-linkerd-with-kubernetes-jobs-c12ccc6d4c7c (solution #3 is the one explained here) https://github.com/linkerd/linkerd-await Also, regarding the annotation config.linkerd.io/proxy-await: enabled, it only does the waiting but not the shutdown part, so if we are going to manually run linkerd-await --shutdown -- "$@" anyway, that annotation can be safely removed since it's redundant: https://github.com/linkerd/linkerd2/issues/8006 Finally, regarding: java.lang.NoClassDefFoundError: Could not initialize class foo.bar.Job let me clarify that this had nothing to do with Linkerd. This was mostly a config error along the lines of: java.lang.NoClassDefFoundError: Could not initialize class XXX Essentially (the specific details are irrelevant), there were some env vars missing in the taskmanager pods. Note that the exception message says "Could not initialize class foo.bar.Job" which is different from "Could not find class...". Sorry for the confusion!
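For reference, the wrapper script mentioned above (its contents are shown in the question) is only a few lines; a commented version, assuming it is copied into the image as linkerd-entrypoint.sh, looks like:

#!/bin/bash
set -e
# Wait for the linkerd-proxy sidecar to be ready, run the real command,
# then tell the proxy to shut down once that command exits (important for upgrades).
exec linkerd-await --shutdown -- "$@"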
###Instruction: I am deploying some Flink jobs which require access to some services under a service mesh implemented via Linkerd and I'm running into this error: java.lang.NoClassDefFoundError: Could not initialize class foo.bar.Job I can confirm that the jar file contains the class that cannot be found apparently, so it's not a problem with the jar itself, but seems to be related to Linkerd. In particular, I'm using the following pod annotations for both the jobmanager and the taskmanager pods (taken from my Helm Chart values file): podAnnotations: linkerd.io/inject: enabled config.linkerd.io/skip-outbound-ports: 6123,6124 config.linkerd.io/proxy-await: enabled For what it's worth, I'm using the Ververica Platform (Community Edition) for deploying my jobs to Kubernetes, although I don't think the issue is VVP-specific: {{- define "vvp.deployment" }} kind: Deployment apiVersion: v1 metadata: name: my-job spec: template: spec: artifact: kind: jar flinkImageRegistry: {{ .Values.flink.imageRegistry }} flinkVersion: "1.15.1" flinkImageTag: 1.15.1-stream1-scala_2.12-java11-linkerd entryClass: foo.bar.Job kubernetes: jobManagerPodTemplate: metadata: {{- with .Values.flink.podAnnotations }} annotations: {{- toYaml . | nindent 14 }} {{- end }} spec: containers: - name: flink-jobmanager command: - linkerd-entrypoint.sh taskManagerPodTemplate: metadata: {{- with .Values.flink.podAnnotations }} annotations: {{- toYaml . | nindent 14 }} {{- end }} {{- end }} where the contents of linkerd-entrypoint.sh are: #!/bin/bash set -e exec linkerd-await --shutdown -- "$@" For extra context, the VVP and the flink jobs are deployed into different namespaces. Also, for the VVP pods, I'm not using any linkerd annotations whatsoever. Has anyone encountered similar problems? The closest troubleshooting resource/guide that I've found so far is this one, which targets Istio instead of Linkerd. ### Response: Answering to myself after having determined the root cause of the issue. Regarding Linkerd, everything was correctly setup. The main precaution that one needs to take is adding the linkerd-await binary to the Flink image and making sure to override the entrypoint for the jobmanager since otherwise you will run into issues when upgrading your jobs. The jobmanager won't kill the Linkerd proxy, and because of that it will hang around with NotReady status. Again, that is easily solved by wrapping the main cmd in a linkerd-await call. So, first add the linkerd-await binary to your docker image: # Add linkerd-await and linkerd-entrypoint.sh USER root RUN apt-get update && apt-get install -y wget RUN wget https://github.com/linkerd/linkerd-await/releases/download/release%2Fv0.2.7/linkerd-await-v0.2.7-amd64 -O ./linkerd-await && chmod +x ./linkerd-await COPY scripts/flink/linkerd-entrypoint.sh ./linkerd-entrypoint.sh Then, for the jobmanager only, override the entrypoint like this: spec: containers: - name: flink-jobmanager command: - linkerd-entrypoint.sh # defined above Alternatively one could use the LINKERD_DISABLED or LINKERD_AWAIT_DISABLED env vars for bypassing the linkerd-await wrapper. 
For more info on using jobs & Linkerd consult the following resources: https://itnext.io/three-ways-to-use-linkerd-with-kubernetes-jobs-c12ccc6d4c7c (solution #3 is the one explained here) https://github.com/linkerd/linkerd-await Also, regarding the annotation config.linkerd.io/proxy-await: enabled , it does only the waiting but not the shutdown part, so if we are going to manually run linkerd-await --shutdown -- "$@" anyway, that annotation can be safely removed since it's redundant: https://github.com/linkerd/linkerd2/issues/8006 Finally, regarding: java.lang.NoClassDefFoundError: Could not initialize class foo.bar.Job let me clarify that this had nothing to do with Linkerd. This was mostly a config error along the lines of: java.lang.NoClassDefFoundError: Could not initialize class XXX Essentially (the specific details are irrelevant), there were some env vars missing in the taskmanager pods. Note that the exception message says "Could not initialize class foo.bar.Job" which is different from "Could not find class...". Sorry for the confusion!
I am deploying some Flink jobs which require access to some services under a service mesh implemented via Linkerd and I'm running into this error: java.lang.NoClassDefFoundError: Could not initialize class foo.bar.Job I can confirm that the jar file contains the class that cannot be found apparently, so it's not a problem with the jar itself, but seems to be related to Linkerd. In particular, I'm using the following pod annotations for both the jobmanager and the taskmanager pods (taken from my Helm Chart values file): podAnnotations: linkerd.io/inject: enabled config.linkerd.io/skip-outbound-ports: 6123,6124 config.linkerd.io/proxy-await: enabled For what it's worth, I'm using the Ververica Platform (Community Edition) for deploying my jobs to Kubernetes, although I don't think the issue is VVP-specific: {{- define "vvp.deployment" }} kind: Deployment apiVersion: v1 metadata: name: my-job spec: template: spec: artifact: kind: jar flinkImageRegistry: {{ .Values.flink.imageRegistry }} flinkVersion: "1.15.1" flinkImageTag: 1.15.1-stream1-scala_2.12-java11-linkerd entryClass: foo.bar.Job kubernetes: jobManagerPodTemplate: metadata: {{- with .Values.flink.podAnnotations }} annotations: {{- toYaml . | nindent 14 }} {{- end }} spec: containers: - name: flink-jobmanager command: - linkerd-entrypoint.sh taskManagerPodTemplate: metadata: {{- with .Values.flink.podAnnotations }} annotations: {{- toYaml . | nindent 14 }} {{- end }} {{- end }} where the contents of linkerd-entrypoint.sh are: #!/bin/bash set -e exec linkerd-await --shutdown -- "$@" For extra context, the VVP and the flink jobs are deployed into different namespaces. Also, for the VVP pods, I'm not using any linkerd annotations whatsoever. Has anyone encountered similar problems? The closest troubleshooting resource/guide that I've found so far is this one, which targets Istio instead of Linkerd.
Your use case is essentially autoscaling based on RabbitMQ queue length, which is a bit challenging due to the limitations of the native Kubernetes HPA controller, as it typically autoscales based on CPU and memory usage. Alternatively, you may consider utilizing KEDA (Kubernetes Event-Driven Autoscaling): KEDA is a Kubernetes-based event-driven autoscaler. With KEDA, you can drive the scaling of any deployment based on various criteria (depending on the scaler adapter). It supports RabbitMQ, which makes it particularly suitable for your case. Here's a sample ScaledObject for your case: apiVersion: keda.sh/v1alpha1 kind: ScaledObject spec: scaleTargetRef: apiVersion: apps/v1 # Optional (default: apps/v1) kind: Deployment name: task-processor triggers: - type: rabbitmq metadata: queueName: 'my-task-queue' mode: QueueLength # Trigger on number of messages in the queue. value: '5' # Target number of tasks per pod.
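For completeness, a slightly fuller, hedged sketch of what this could look like in practice follows; the object names, secret and connection string are placeholders, and KEDA needs to be told how to reach RabbitMQ (here via a TriggerAuthentication referencing a secret):

apiVersion: v1
kind: Secret
metadata:
  name: rabbitmq-conn
type: Opaque
stringData:
  host: amqp://user:password@rabbitmq.default.svc.cluster.local:5672/   # placeholder connection string
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: rabbitmq-trigger-auth
spec:
  secretTargetRef:
    - parameter: host
      name: rabbitmq-conn
      key: host
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: task-processor-scaler
spec:
  scaleTargetRef:
    name: task-processor          # the Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: rabbitmq
      metadata:
        queueName: my-task-queue
        mode: QueueLength
        value: "10"               # roughly one replica per 10 ready messages
      authenticationRef:
        name: rabbitmq-trigger-auth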
###Instruction: I have this use case: When there is much load on a specific queue in RabbitMQ, i want to start more replicas. Let's say, my app can handle 5 messages ( = tasks) simultaneously and they all take 1 min to complete. When there are more than 10 "ready" messages in the rabbitmq Queue, i want the HPA to start a new replica. When there are 20 "ready" messages, start 2, at 30 "ready" messages start 3 etc. I used this helm chart for install prometheus-adapter: https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-adapter And in my helm values yaml i added: rules: default: true custom: - seriesQuery: '{__name__=~"rabbitmq_detailed_queue_messages_ready"}' name: matches: "^(.*)" as: "open_tasks" resources: overrides: kubernetes_namespace: { resource: "namespace" } kubernetes_name: { resource: "service" } metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>,queue="my-task-queue"}) by (<<.GroupBy>>) Now, this should be exposed, but it isn't: $ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq . | grep open_tasks Now this is my main question. After that i could deploy an HPA for my app task-processor like this: kind: HorizontalPodAutoscaler apiVersion: autoscaling/v2 metadata: name: task-processor-autoscaler spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: task-processor minReplicas: 1 maxReplicas: 10 metrics: - type: Object object: metric: name: open_tasks describedObject: apiVersion: "/v1" kind: Service name: open_tasks target: type: Value value: 10 Now my questions: Why is the metric not exposed in the raw query? Is my yaml for the HPA correct? I have the feeling that i'm missing some essential stuff here, but i'm not able to get my head around that. ### Response: Your use case is essentially autoscaling based on RabbitMQ queue length, which is a bit challenging due to the limitations of the native Kubernetes HPA controller, as it "typically" autoscale based on CPU and memory usage. Alternatively, you may consider utilizing KEDA (Kubernetes Event-Driven Autoscaling): KEDA is a Kubernetes-based event-driven autoscaler. With KEDA, you can drive the scaling of any deployment based on the various criteria (depends on the scaler adapter). It supports RabbitMQ, which makes it particularly suitable for your case. Here's a sample ScaledObject for your case: apiVersion: keda.sh/v1alpha1 kind: ScaledObject spec: scaleTargetRef: apiVersion: apps/v1 # Optional (default: apps/v1) kind: Deployment name: task-processor triggers: - type: rabbitmq metadata: queueName: 'my-task-queue' mode: QueueLength # Trigger on number of messages in the queue. value: '5' # Target number of tasks per pod.
I have this use case: When there is much load on a specific queue in RabbitMQ, i want to start more replicas. Let's say, my app can handle 5 messages ( = tasks) simultaneously and they all take 1 min to complete. When there are more than 10 "ready" messages in the rabbitmq Queue, i want the HPA to start a new replica. When there are 20 "ready" messages, start 2, at 30 "ready" messages start 3 etc. I used this helm chart for install prometheus-adapter: https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-adapter And in my helm values yaml i added: rules: default: true custom: - seriesQuery: '{__name__=~"rabbitmq_detailed_queue_messages_ready"}' name: matches: "^(.*)" as: "open_tasks" resources: overrides: kubernetes_namespace: { resource: "namespace" } kubernetes_name: { resource: "service" } metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>,queue="my-task-queue"}) by (<<.GroupBy>>) Now, this should be exposed, but it isn't: $ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq . | grep open_tasks Now this is my main question. After that i could deploy an HPA for my app task-processor like this: kind: HorizontalPodAutoscaler apiVersion: autoscaling/v2 metadata: name: task-processor-autoscaler spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: task-processor minReplicas: 1 maxReplicas: 10 metrics: - type: Object object: metric: name: open_tasks describedObject: apiVersion: "/v1" kind: Service name: open_tasks target: type: Value value: 10 Now my questions: Why is the metric not exposed in the raw query? Is my yaml for the HPA correct? I have the feeling that i'm missing some essential stuff here, but i'm not able to get my head around that.
Try: kubectl get pods \ --output=custom-columns=\ "POD_PORT":.spec.containers[*].ports[*].containerPort You can include .metadata.name too to aid clarity: kubectl get pods \ --output=custom-columns=\ "NAME":.metadata.name,\ "POD_PORT":.spec.containers[*].ports[*].containerPort It's not clearly (!?) documented but I suspect the format is kubectl's JSONPath and there (appears to be) a subtle distinction between e.g. .spec.containers[] and .spec.containers[*] where the former stops when the property is not found and the latter includes everything. Because .spec will always include one or more .containers, but each container may not have .ports, you can also: kubectl get pods \ --output=custom-columns=\ "POD_PORT":.spec.containers[].ports[*].containerPort This uses containers[] but ports[*], to the same effect. NOTE: as explained in Container v1 core (see "ports"), ports that are exposed by the container need not be specified through ports, i.e. this command will return documented ports but the list may exclude ports that are exposed by the containers (and not documented).
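If you prefer JSONPath output directly (same data, different output flag), a small hedged equivalent that prints each pod name with its declared container ports is:

kubectl get pods \
  --output=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].containerPort}{"\n"}{end}'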
###Instruction: I am getting the IP address assigned to the pod using kubectl get pods -o custom-columns="POD_IP":.status.podIPs command. And based on same approach, I am using kubectl get pods -o custom-columns="POD_PORT":.spec.containers.ports.containerPort command to get the port number but it is coming as blank. cloudshell:~$ kubectl get pods -o custom-columns="POD_IP":.status.podIPs POD_IP [map[ip:10.32.0.194]] cloudshell:~$ kubectl get pods -o custom-columns="POD_PORT":.spec.containers.ports.containerPort POD_PORT <none> cloudshell:~$ kubectl get pods -o custom-columns="POD_PORT":.spec.containers POD_PORT [map[image:nginx:1.10.1 imagePullPolicy:IfNotPresent name:servic1 ports:[map[containerPort:8080 protocol:TCP]] resources:map[limits:map[cpu:500m ephemeral-storage:1Gi memory:2Gi] requests:map[cpu:500m ephemeral-storage:1Gi memory:2Gi]] securityContext:map[capabilities:map[drop:[NET_RAW]]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File volumeMounts:[map[mountPath:/var/run/secrets/kubernetes.io/serviceaccount name:kube-api-access-mgk8k readOnly:true]]]] cloudshell:~$ I have tried to use kubectl get pods -o custom-columns="Port Number of Pod":.spec.containers command and I can see that my mapping (.spec.containers.ports.containerPort) is correct but somehow it is still not working. I am totally sure that .spec.containers.ports.containerPort mapping correct, and same command format is giving IP address, so not able to catch what is wrong. Is anyone able to catch what is wrong here? ### Response: Try: kubectl get pods \ --output=custom-columns=\ "POD_PORT":.spec.containers[*].ports[*].containerPort You can include .metadata.name too to aid clarity: kubectl get pods \ --output=custom-columns=\ "NAME":.metadata.name,\ "POD_PORT":.spec.containers[*].ports[*].containerPort It's not clearly (!?) documented but I suspect the format is kubectl's JSONPath and there (appears to be) a subtle distinction between e.g. .spec.containers[] and .spec.containers[*] where the former stops when the property is not found and the latter includes everything. Because .spec will always include one or more .containers, but each container may not have .ports, you can also: kubectl get pods \ --output=custom-columns=\ "POD_PORT":.spec.containers[].ports[*].containerPort Which containers[] but ports[*] to the same effect. NOTE as explained in Container v1 core see "ports", ports that are exposed by the container need not be specified through ports i.e. this command will return documented ports but this list may exclude ports that are exposed by the containers (and not documented).
I am getting the IP address assigned to the pod using kubectl get pods -o custom-columns="POD_IP":.status.podIPs command. And based on same approach, I am using kubectl get pods -o custom-columns="POD_PORT":.spec.containers.ports.containerPort command to get the port number but it is coming as blank. cloudshell:~$ kubectl get pods -o custom-columns="POD_IP":.status.podIPs POD_IP [map[ip:10.32.0.194]] cloudshell:~$ kubectl get pods -o custom-columns="POD_PORT":.spec.containers.ports.containerPort POD_PORT <none> cloudshell:~$ kubectl get pods -o custom-columns="POD_PORT":.spec.containers POD_PORT [map[image:nginx:1.10.1 imagePullPolicy:IfNotPresent name:servic1 ports:[map[containerPort:8080 protocol:TCP]] resources:map[limits:map[cpu:500m ephemeral-storage:1Gi memory:2Gi] requests:map[cpu:500m ephemeral-storage:1Gi memory:2Gi]] securityContext:map[capabilities:map[drop:[NET_RAW]]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File volumeMounts:[map[mountPath:/var/run/secrets/kubernetes.io/serviceaccount name:kube-api-access-mgk8k readOnly:true]]]] cloudshell:~$ I have tried to use kubectl get pods -o custom-columns="Port Number of Pod":.spec.containers command and I can see that my mapping (.spec.containers.ports.containerPort) is correct but somehow it is still not working. I am totally sure that .spec.containers.ports.containerPort mapping correct, and same command format is giving IP address, so not able to catch what is wrong. Is anyone able to catch what is wrong here?
Using Azure Application Insights there are two private monitoring options (https://learn.microsoft.com/en-us/azure/azure-monitor/app/availability-private-test) described: Allowing limited inbound connectivity Using Azure Functions, as you have described in your Option 2. Personally I prefer endpoint monitoring to be more independent from the resource that's hosting the service.
###Instruction: I have a deployment with multiple pods in Azure Kubernetes Service. There is a K8s service that is used to connect deployment pods. The service has a private IP accessible in Azure Virtual Network. The service type is LoadBalancer. I want to monitor and see if the service is up. If it is not up, trigger an email alert. I have identified two options: Option 1: I enabled AKS diagnostics so that I get the service logs. When I check the logs with the query below, I can see service failure logs. I think I can use these logs in Azure Monitor to trigger an alert. I still need to verify if it will work in every type of failure. KubeEvents | where TimeGenerated > ago(7d) | where not(isempty(Namespace)) | where ObjectKind == 'Service' Option 2: Create an Azure Function with HTTPS API enabled so I can call it externally from Pingdom. Make sure the function uses AppService with a VM so that it can access private IPs and the service (As this is using VM, it is increasing the cost). The function checks the private IP and sees if it is returning 200, and it will return 200; otherwise, it will return an error code. So Pingdom will keep the uptime details and also alert accordingly when it is down. Summary: I am not 100% sure about option one. For the second option, it seems like doing too much work, and I think that there should be a better and more robust way of doing it. So I am interested in getting feedback from some Azure and K8s experts who dealt with the problem and solved it in a more robust way. ### Response: Using Azure Application Insights there are two [private monitoring options] (https://learn.microsoft.com/en-us/azure/azure-monitor/app/availability-private-test) described. Allowing limited inbound connectivity Using Azure Functions, as you have described in your Option 2. Personally I prefer endpoint monitoring to be more independent from the resource that's hosting the service.
I have a deployment with multiple pods in Azure Kubernetes Service. There is a K8s service that is used to connect deployment pods. The service has a private IP accessible in Azure Virtual Network. The service type is LoadBalancer. I want to monitor and see if the service is up. If it is not up, trigger an email alert. I have identified two options: Option 1: I enabled AKS diagnostics so that I get the service logs. When I check the logs with the query below, I can see service failure logs. I think I can use these logs in Azure Monitor to trigger an alert. I still need to verify if it will work in every type of failure. KubeEvents | where TimeGenerated > ago(7d) | where not(isempty(Namespace)) | where ObjectKind == 'Service' Option 2: Create an Azure Function with HTTPS API enabled so I can call it externally from Pingdom. Make sure the function uses AppService with a VM so that it can access private IPs and the service (As this is using VM, it is increasing the cost). The function checks the private IP and sees if it is returning 200, and it will return 200; otherwise, it will return an error code. So Pingdom will keep the uptime details and also alert accordingly when it is down. Summary: I am not 100% sure about option one. For the second option, it seems like doing too much work, and I think that there should be a better and more robust way of doing it. So I am interested in getting feedback from some Azure and K8s experts who dealt with the problem and solved it in a more robust way.
I can confirm what Michael said. Kafka is currently not supported and not part of Red Hat's OTel distro. Here you can see which OTel components will be productized in the next release. To bypass your issue, you can create your own distro. The easiest way would be to extend Red Hat's manifest.yaml with the Kafka exporter. For testing, you can also use the contrib image offered by the OpenTelemetry community. There is a section in the OTEL CRD to overwrite the collector image. Example: apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: simplest spec: image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.78.0 config: ...
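Putting the two parts together, a hedged sketch of the collector CR from the question, switched to the contrib image so the kafka exporter is actually available, could look like this (the image tag and broker address come from the answer and question and may need adjusting):

apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: demo-otlcol-otlp
  namespace: otlcol-demo
spec:
  mode: daemonset
  # The contrib distribution ships the kafka exporter.
  image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.78.0
  config: |
    receivers:
      otlp:
        protocols:
          grpc:
          http:
    processors:
      batch:
    exporters:
      logging:
        loglevel: info
      kafka:
        brokers:
          - dev-kafka-kafka-brokers.kafka.svc.cluster.local:9092
        protocol_version: 3.4.0
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging, kafka]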
###Instruction: I'm having kafka cluster deployed in openshift in different namespace also opentelemetry is delpoyed in another namespace. This is my yaml file for opentelemetry apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: demo-otlcol-otlp namespace: otlcol-demo spec: config: | receivers: otlp: protocols: grpc: http: processors: batch: exporters: logging: loglevel: info kafka: brokers: - dev-kafka-kafka-brokers.kafka.svc.cluster.local:9092 protocol_version: 3.4.0 service: pipelines: traces: receivers: [otlp] processors: [batch] exporters: [logging,kafka] mode: daemonset resources: {} targetAllocator: {} The error i'm getting is, Error: failed to get config: cannot unmarshal the configuration: 1 error(s) decoding: * error decoding 'exporters': unknown type: "kafka" for id: "kafka" (valid values: [logging otlp otlphttp jaeger]) 2023/05/31 09:24:46 collector server run finished with error: failed to get config: cannot unmarshal the configuration: 1 error(s) decoding: * error decoding 'exporters': unknown type: "kafka" for id: "kafka" (valid values: [logging otlp otlphttp jaeger]) Can anyone explain how to use kafka in opentelemetry exporters ### Response: I can confirm what Michael said. Kafka is atm not supported and not part of Red Hats otel distro. Here you see what otel components will be productized in the next release. To bypass your issue, you can create your own distro. The easiest way would be to extend Red Hats manifest.yaml with the Kafka exporter. For testing, you can also use the contrib image offered by the OpenTelemetry community. There is a section in the OTEL CRD to overwrite the collector image. Example: apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: simplest spec: image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.78.0 config: ...
I'm having kafka cluster deployed in openshift in different namespace also opentelemetry is delpoyed in another namespace. This is my yaml file for opentelemetry apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: demo-otlcol-otlp namespace: otlcol-demo spec: config: | receivers: otlp: protocols: grpc: http: processors: batch: exporters: logging: loglevel: info kafka: brokers: - dev-kafka-kafka-brokers.kafka.svc.cluster.local:9092 protocol_version: 3.4.0 service: pipelines: traces: receivers: [otlp] processors: [batch] exporters: [logging,kafka] mode: daemonset resources: {} targetAllocator: {} The error i'm getting is, Error: failed to get config: cannot unmarshal the configuration: 1 error(s) decoding: * error decoding 'exporters': unknown type: "kafka" for id: "kafka" (valid values: [logging otlp otlphttp jaeger]) 2023/05/31 09:24:46 collector server run finished with error: failed to get config: cannot unmarshal the configuration: 1 error(s) decoding: * error decoding 'exporters': unknown type: "kafka" for id: "kafka" (valid values: [logging otlp otlphttp jaeger]) Can anyone explain how to use kafka in opentelemetry exporters
Sadly, Azure profiler just does not support dotnet 6. There might be other solutions for Azure with this dotnet version.
###Instruction: I got a .net6 service hosted in an AKS cluster with app insight and profiler enabled. Logs appear in app insight and live metrics are working and I can see every action in app insight. When I click the "Profile now" button in the performance tab, it says a profiling session is in progress and I fire a few requests. Eventually I get a message in app insight of a timeout and no session is added to the list. Why could it happen? ### Response: Sadly, Azure profiler just does not support dotnet 6. There might be other solutions for Azure with this dotnet version.
I got a .net6 service hosted in an AKS cluster with app insight and profiler enabled. Logs appear in app insight and live metrics are working and I can see every action in app insight. When I click the "Profile now" button in the performance tab, it says a profiling session is in progress and I fire a few requests. Eventually I get a message in app insight of a timeout and no session is added to the list. Why could it happen?
If you're running PostgreSQL, and you're using EKS 1.23 or higher, you'll need to install a CSI driver, e.g. the EBS CSI driver. Starting with 1.23, EKS no longer ships with a storage driver (the in-tree driver was deprecated). After installing the CSI driver, create a default storage class. Your pods should start shortly thereafter. If you're new to Kubernetes, I'd recommend installing the CSI driver through EKS addons.
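As a concrete, hedged sketch of that setup (the cluster name, account ID and IRSA role are placeholders; the addon can also be added from the console), the addon install plus a default gp3 StorageClass could look like:

# Install the EBS CSI driver as an EKS addon (role ARN is a placeholder IRSA role with the EBS CSI policy)
aws eks create-addon \
  --cluster-name my-cluster \
  --addon-name aws-ebs-csi-driver \
  --service-account-role-arn arn:aws:iam::111122223333:role/AmazonEKS_EBS_CSI_DriverRole

# default-storage-class.yaml (apply with: kubectl apply -f default-storage-class.yaml)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3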
###Instruction: I have created an EKS cluster with ALB setup. I tried installing superset by following the steps provided in https://superset.apache.org/docs/installation/running-on-kubernetes/ my-values.yaml ingress: enabled: true ingressClassName: ~ annotations: kubernetes.io/ingress.class: alb alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: instance # kubernetes.io/tls-acme: "true" ## Extend timeout to allow long running queries. # nginx.ingress.kubernetes.io/proxy-connect-timeout: "300" # nginx.ingress.kubernetes.io/proxy-read-timeout: "300" # nginx.ingress.kubernetes.io/proxy-send-timeout: "300" path: / pathType: ImplementationSpecific hosts: - chart-example.local tls: [] extraHostsRaw: [] # - secretName: chart-example-tls # hosts: # - chart-example.local When I am running helm upgrade --install --values my-values.yaml superset superset/superset --timeout 10m30s, it takes a lot of time and returns Error: UPGRADE FAILED: post-upgrade hooks failed: 1 error occurred: * timed out waiting for the condition and when I run [ec2-user@ip-1**-**-**-*** ~]$ kubectl get pods NAME READY STATUS RESTARTS AGE superset-7866fcc8b4-tcpk4 0/1 Init:0/1 8 (6m53s ago) 33m superset-init-db-6q9dp 0/1 Init:Error 0 5m24s superset-init-db-7hqz4 0/1 Init:Error 0 7m48s superset-init-db-jt87x 0/1 Init:Error 0 12m superset-init-db-rt85r 0/1 Init:Error 0 10m superset-init-db-zptz6 0/1 Init:Error 0 2m40s superset-postgresql-0 0/1 Pending 0 33m superset-redis-master-0 1/1 Running 0 33m superset-worker-748db75bf7-9kzfp 0/1 Init:0/1 8 (6m56s ago) 33m I am new to kubernetes and this is new to me. Please help! Edit:1 Added EBS CSI driver and Storage Class and went ahead with superset installation. Ran the following commands. Attaching responses kubectl get pods NAME READY STATUS RESTARTS AGE superset-7866fcc8b4-q59nd 0/1 Init:0/1 4 (109s ago) 13m superset-init-db-gq9b9 0/1 Pending 0 13m superset-postgresql-0 0/1 Pending 0 13m superset-redis-master-0 1/1 Running 0 13m superset-worker-748db75bf7-n7t2r 0/1 Init:0/1 5 (91s ago) 13m [ec2-user@ip-172-31-23-209 ~]$ kubectl logs superset-worker-748db75bf7-n7t2r Defaulted container "superset" out of: superset, wait-for-postgres-redis (init) Error from server (BadRequest): container "superset" in pod "superset-worker-748db75bf7-n7t2r" is waiting to start: PodInitializing [ec2-user@ip-172-31-23-209 ~]$ kubectl logs superset-7866fcc8b4-q59nd Defaulted container "superset" out of: superset, wait-for-postgres (init) Error from server (BadRequest): container "superset" in pod "superset-7866fcc8b4-q59nd" is waiting to start: PodInitializing kubectl describe pod superset-postgresql-0 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 4m20s (x4 over 16m) default-scheduler 0/1 nodes are available: 1 Too many pods. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod. ### Response: If you're running PostgreSQL, and you're using EKS 1.23 or higher, you'll need to install a CSI driver, e.g. the EBS CSI driver. Starting with 1.23, EKS no longer ships with a storage driver (the in-tree driver was deprecated). After installing the CSI driver, create a default storage class. Your pods should start shortly thereafter. If you're new to Kubernetes, I'd recommend installing the CSI driver through EKS addons.
I have created an EKS cluster with ALB setup. I tried installing superset by following the steps provided in https://superset.apache.org/docs/installation/running-on-kubernetes/ my-values.yaml ingress: enabled: true ingressClassName: ~ annotations: kubernetes.io/ingress.class: alb alb.ingress.kubernetes.io/scheme: internet-facing alb.ingress.kubernetes.io/target-type: instance # kubernetes.io/tls-acme: "true" ## Extend timeout to allow long running queries. # nginx.ingress.kubernetes.io/proxy-connect-timeout: "300" # nginx.ingress.kubernetes.io/proxy-read-timeout: "300" # nginx.ingress.kubernetes.io/proxy-send-timeout: "300" path: / pathType: ImplementationSpecific hosts: - chart-example.local tls: [] extraHostsRaw: [] # - secretName: chart-example-tls # hosts: # - chart-example.local When I am running helm upgrade --install --values my-values.yaml superset superset/superset --timeout 10m30s, it takes a lot of time and returns Error: UPGRADE FAILED: post-upgrade hooks failed: 1 error occurred: * timed out waiting for the condition and when I run [ec2-user@ip-1**-**-**-*** ~]$ kubectl get pods NAME READY STATUS RESTARTS AGE superset-7866fcc8b4-tcpk4 0/1 Init:0/1 8 (6m53s ago) 33m superset-init-db-6q9dp 0/1 Init:Error 0 5m24s superset-init-db-7hqz4 0/1 Init:Error 0 7m48s superset-init-db-jt87x 0/1 Init:Error 0 12m superset-init-db-rt85r 0/1 Init:Error 0 10m superset-init-db-zptz6 0/1 Init:Error 0 2m40s superset-postgresql-0 0/1 Pending 0 33m superset-redis-master-0 1/1 Running 0 33m superset-worker-748db75bf7-9kzfp 0/1 Init:0/1 8 (6m56s ago) 33m I am new to kubernetes and this is new to me. Please help! Edit:1 Added EBS CSI driver and Storage Class and went ahead with superset installation. Ran the following commands. Attaching responses kubectl get pods NAME READY STATUS RESTARTS AGE superset-7866fcc8b4-q59nd 0/1 Init:0/1 4 (109s ago) 13m superset-init-db-gq9b9 0/1 Pending 0 13m superset-postgresql-0 0/1 Pending 0 13m superset-redis-master-0 1/1 Running 0 13m superset-worker-748db75bf7-n7t2r 0/1 Init:0/1 5 (91s ago) 13m [ec2-user@ip-172-31-23-209 ~]$ kubectl logs superset-worker-748db75bf7-n7t2r Defaulted container "superset" out of: superset, wait-for-postgres-redis (init) Error from server (BadRequest): container "superset" in pod "superset-worker-748db75bf7-n7t2r" is waiting to start: PodInitializing [ec2-user@ip-172-31-23-209 ~]$ kubectl logs superset-7866fcc8b4-q59nd Defaulted container "superset" out of: superset, wait-for-postgres (init) Error from server (BadRequest): container "superset" in pod "superset-7866fcc8b4-q59nd" is waiting to start: PodInitializing kubectl describe pod superset-postgresql-0 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 4m20s (x4 over 16m) default-scheduler 0/1 nodes are available: 1 Too many pods. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
In the query editor of the graph panel, enter the PromQL query to retrieve the maximum number of running pods throughout the day. The query should use the max_over_time function to find the highest value over a time range. This one worked for me - max_over_time(kube_pod_status_phase{phase="Running"}[1d]) I also found this open-source project on GitHub- Awesome-CloudOps-Automation. They are creating an open-source framework for writing Runbooks using Jupyter Notebooks. They have many Prometheus and EKS actions that can be used to automate some of the Prometheus and AWS EKS related tasks. You can use the Prometheus get metrics statistics action to fire your PromQL query.
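Note that kube_pod_status_phase is a per-pod gauge, so max_over_time over it alone yields a per-series maximum; if what you are after is the peak total count of running pods during the day, a hedged variant that aggregates first via a subquery (the 5m resolution is an arbitrary choice) would be:

max_over_time(sum(kube_pod_status_phase{phase="Running"})[1d:5m])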
###Instruction: I have deploy my kubernetes cluster in AWS EKS. I have setup prometheus and grafana to monitor the EKS cluster. How to find Max no of running pods throughout the day ?? ### Response: In the query editor of the graph panel, enter the PromQL query to retrieve the maximum number of running pods throughout the day. The query should use the max_over_time function to find the highest value over a time range. This one worked for me - max_over_time(kube_pod_status_phase{phase="Running"}[1d]) I also found this open-source project on GitHub- Awesome-CloudOps-Automation. They are creating an open-source framework for writing Runbooks using Jupyter Notebooks. They have many Prometheus and EKS actions that can be used to automate some of the Prometheus and AWS EKS related tasks. You can use the Prometheus get metrics statistics action to fire your PromQL query.
I have deployed my kubernetes cluster in AWS EKS. I have set up prometheus and grafana to monitor the EKS cluster. How do I find the max number of running pods throughout the day?
To your questions: This is the first time that I use status and conditions, is it right way or do I miss something? Your approach to managing the status and conditions of Kubernetes resources is generally fine. The status subresource in a Kubernetes API object is typically used to represent the current state of the system, and it can include conditions. A condition is a collection of fields that describe the state of an object in a more detailed way than just true or false. Each condition typically has a type, status, reason, message, and lastTransitionTime. Your code correctly sets these fields based on whether the vfm custom resource is ready or not. It is good to note that conditions should be leveled - meaning they should be set to their current observed value regardless of their previous value. They should also be set (either true, false, or unknown) for all the significant or user-meaningful aspects of the component's current state. This makes conditions a good mechanism to indicate "transient states" like Progressing or Degraded that might be expected to change over time or based on external state. Sometimes when I do the update r.Status().Update I got error: Operation cannot be fulfilled on eds.core.vtw.bmw.com “resource01”: the object has been modified; please apply your changes to the latest version and try again. This error occurs because another client updated the same object while your controller was processing it. This could be another controller or even another instance of the same controller (if you run more than one). One possible way to handle this is to use a retry mechanism that re-attempts the status update when this error occurs. In your case, you have implemented a conditionChanged check to only attempt the status update if the condition has changed. This is a good approach to avoid unnecessary updates, but it does not completely prevent the error, because another client could still update the object between your Get call and Status().Update call. You could also consider using Patch instead of Update to modify the status, which reduces the risk of conflicting with other updates. Patching allows for partial updates to an object, so you are less likely to encounter conflicts. Regarding the timing issue, you could consider updating the LastTransitionTime only when the status actually changes, instead of every time the health check is done. This would mean the LastTransitionTime reflects when the status last changed, rather than the last time the check was performed. One thing to keep in mind is that frequent updates to the status subresource, even if the status does not change, can cause unnecessary API server load. You should strive to update the status only when it changes. 
A possible updated version of your checkHealth function considering those points could be: func (r *ebdReconciler) checkHealth(ctx context.Context, req ctrl.Request, ebd ebdmanv1alpha1.ebd) (bool, error) { vfmReady, err := r.mbr.IsReady(ctx, req.Name, req.Namespace) condition := metav1.Condition{ Type: ebdmanv1alpha1.KubernetesvfmHealthy, Status: metav1.ConditionUnknown, // start with unknown status } latestebd := ebdmanv1alpha1.ebd{} if err := r.Get(ctx, req.NamespacedName, &latestebd); err != nil { return vfmReady, err } oldCondition := meta.FindStatusCondition(latestebd.Status.Conditions, ebdmanv1alpha1.KubernetesvfmHealthy) if err != nil { // There was an error checking readiness - Set status to false condition.Status = metav1.ConditionFalse condition.Reason = ebdmanv1alpha1.ReasonError condition.Message = fmt.Sprintf("Failed to check vfm readiness: %v", err) } else if vfmReady { // The vfm is ready - Set status to true condition.Status = metav1.ConditionTrue condition.Reason = ebdmanv1alpha1.ReasonReady condition.Message = "vfm custom resource is ready" } else { // The vfm is not ready - Set status to false condition.Status = metav1.ConditionFalse condition.Reason = ebdmanv1alpha1.ResourceProgressing condition.Message = "vfm custom resource is not ready" } // Only update the LastTransitionTime if the status has changed if oldCondition == nil || oldCondition.Status != condition.Status { condition.LastTransitionTime = metav1.Now() } else { condition.LastTransitionTime = oldCondition.LastTransitionTime } meta.SetStatusCondition(&latestebd.Status.Conditions, condition) if oldCondition != nil && condition.Status == oldCondition.Status && condition.Reason == oldCondition.Reason && condition.Message == oldCondition.Message { return vfmReady, nil } // Retry on conflict retryErr := retry.RetryOnConflict(retry.DefaultRetry, func() error { // Retrieve the latest version of ebd before attempting update // RetryOnConflict uses exponential backoff to avoid exhausting the apiserver if getErr := r.Get(ctx, req.NamespacedName, &latestebd); getErr != nil { return getErr } if updateErr := r.Status().Update(ctx, &latestebd); updateErr != nil { return updateErr } return nil }) if retryErr != nil { r.Log.Error(retryErr, "Failed to update vfm status after retries") return vfmReady, retryErr } return vfmReady, nil } In this updated version: The LastTransitionTime field is updated only when the condition's status changes. This will ensure that the LastTransitionTime accurately reflects when the status was last changed rather than when the checkHealth function was last run. This should provide a more accurate timeline of when the resource's status actually changed, rather than when the reconciliation loop was run. A retry mechanism is added using retry.RetryOnConflict to re-attempt the status update when a conflict error occurs. Note that you'll need to import the "k8s.io/client-go/util/retry" package for this. This is a common pattern for dealing with the Operation cannot be fulfilled... error. These changes should help to address the issues you were facing with updating the status and conditions of your Kubernetes resources. Remember that you may still occasionally get conflict errors, especially if there are other clients updating the same object. In these cases, the RetryOnConflict function will retry the update with the latest version of the object.
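A short, hedged sketch of the Patch-based alternative mentioned above, written as a drop-in fragment for the same checkHealth function (it reuses the latestebd and condition variables from the code above and assumes the controller-runtime client package is imported):

// Snapshot the object before mutating it, then send only the status diff as a merge patch.
base := latestebd.DeepCopy()
meta.SetStatusCondition(&latestebd.Status.Conditions, condition)

if err := r.Status().Patch(ctx, &latestebd, client.MergeFrom(base)); err != nil {
    r.Log.Error(err, "failed to patch vfm status")
    return vfmReady, err
}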
###Instruction: Im having k8s controller which needs to install some resources and update the status and condition accordantly The flow in the reconcile is like following: Install the resource and don’t wait Call to the function checkAvailability and update the status accordantly if ready/ pending install/ error I’ve two main questions: This is the first time that I use status and conditions, is it right way or do I miss something Sometimes when I do the update r.Status().Update I got error :Operation cannot be fulfilled on eds.core.vtw.bmw.com “resouce01”: the object has been modified; please apply your changes to the latest version and try again , so I’ve added the check conditionChanged` which solve the problem but not sure if its correct as I update the status once and if it doesn’t chanted I don’t touch it so user can see status ready from a while ago and the reconcile doesn’t update the date and time for the ready condition as it skip it when its already “ready” I use the following func (r *ebdReconciler) checkHealth(ctx context.Context, req ctrl.Request, ebd ebdmanv1alpha1.ebd) (bool, error) { vfmReady, err := r.mbr.IsReady(ctx, req.Name, req.Namespace) condition := metav1.Condition{ Type: ebdmanv1alpha1.KubernetesvfmHealthy, Observebdneration: ebd.Generation, LastTransitionTime: metav1.Now(), } if err != nil { // There was an error checking readiness - Set status to false condition.Status = metav1.ConditionFalse condition.Reason = ebdmanv1alpha1.ReasonError condition.Message = fmt.Sprintf("Failed to check vfm readiness: %v", err) } else if vfmReady { // The vfm is ready - Set status to true condition.Status = metav1.ConditionTrue condition.Reason = ebdmanv1alpha1.ReasonReady condition.Message = "vfm custom resource is ready" } else { // The vfm is not ready - Set status to false condition.Status = metav1.ConditionFalse condition.Reason = ebdmanv1alpha1.ResourceProgressing condition.Message = "vfm custom resource is not ready" } // Get the latest version of the ebd latestebd := ebdmanv1alpha1.ebd{} if err := r.Get(ctx, req.NamespacedName, &latestebd); err != nil { return vfmReady, err } oldConditions := latestebd.Status.Conditions meta.SetStatusCondition(&latestebd.Status.Conditions, condition) if !conditionChanged(&oldConditions, &latestebd.Status.Conditions, ebdmanv1alpha1.KubernetesvfmHealthy) { return vfmReady, nil } if err := r.Status().Update(ctx, &latestebd); err != nil { r.Log.Error(err, "failed to update vfm status") return vfmReady, err } return vfmReady, nil } func conditionChanged(oldConditions, newConditions *[]metav1.Condition, conditionType string) bool { newCondition := meta.FindStatusCondition(*newConditions, conditionType) oldCondition := meta.FindStatusCondition(*oldConditions, conditionType) if oldCondition == nil && newCondition == nil { return false } if oldCondition == nil || newCondition == nil { return true } return oldCondition.Status != newCondition.Status || oldCondition.Reason != newCondition.Reason || oldCondition.Message != newCondition.Message } ### Response: To your questions: This is the first time that I use status and conditions, is it right way or do I miss something? Your approach to managing the status and conditions of Kubernetes resources is generally fine. The status subresource in a Kubernetes API object is typically used to represent the current state of the system, and it can include conditions. A condition is a collection of fields that describe the state of an object in a more detailed way than just true or false. 
Each condition typically has a type, status, reason, message, and lastTransitionTime. Your code correctly sets these fields based on whether the vfm custom resource is ready or not. It is good to note that conditions should be leveled - meaning they should be set to their current observed value regardless of their previous value. They should also be set (either true, false, or unknown) for all the significant or user-meaningful aspects of the component's current state. This makes conditions a good mechanism to indicate "transient states" like Progressing or Degraded that might be expected to change over time or based on external state. Sometimes when I do the update r.Status().Update I got error: Operation cannot be fulfilled on eds.core.vtw.bmw.com “resource01”: the object has been modified; please apply your changes to the latest version and try again. This error occurs because another client updated the same object while your controller was processing it. This could be another controller or even another instance of the same controller (if you run more than one). One possible way to handle this is to use a retry mechanism that re-attempts the status update when this error occurs. In your case, you have implemented a conditionChanged check to only attempt the status update if the condition has changed. This is a good approach to avoid unnecessary updates, but it does not completely prevent the error, because another client could still update the object between your Get call and Status().Update call. You could also consider using Patch instead of Update to modify the status, which reduces the risk of conflicting with other updates. Patching allows for partial updates to an object, so you are less likely to encounter conflicts. Regarding the timing issue, you could consider updating the LastTransitionTime only when the status actually changes, instead of every time the health check is done. This would mean the LastTransitionTime reflects when the status last changed, rather than the last time the check was performed. One thing to keep in mind is that frequent updates to the status subresource, even if the status does not change, can cause unnecessary API server load. You should strive to update the status only when it changes. 
A possible updated version of your checkHealth function considering those points could be:

func (r *ebdReconciler) checkHealth(ctx context.Context, req ctrl.Request, ebd ebdmanv1alpha1.ebd) (bool, error) {
    vfmReady, err := r.mbr.IsReady(ctx, req.Name, req.Namespace)
    condition := metav1.Condition{
        Type:   ebdmanv1alpha1.KubernetesvfmHealthy,
        Status: metav1.ConditionUnknown, // start with unknown status
    }

    // Get the latest version of the ebd
    latestebd := ebdmanv1alpha1.ebd{}
    if err := r.Get(ctx, req.NamespacedName, &latestebd); err != nil {
        return vfmReady, err
    }
    oldCondition := meta.FindStatusCondition(latestebd.Status.Conditions, ebdmanv1alpha1.KubernetesvfmHealthy)

    if err != nil {
        // There was an error checking readiness - set status to false
        condition.Status = metav1.ConditionFalse
        condition.Reason = ebdmanv1alpha1.ReasonError
        condition.Message = fmt.Sprintf("Failed to check vfm readiness: %v", err)
    } else if vfmReady {
        // The vfm is ready - set status to true
        condition.Status = metav1.ConditionTrue
        condition.Reason = ebdmanv1alpha1.ReasonReady
        condition.Message = "vfm custom resource is ready"
    } else {
        // The vfm is not ready - set status to false
        condition.Status = metav1.ConditionFalse
        condition.Reason = ebdmanv1alpha1.ResourceProgressing
        condition.Message = "vfm custom resource is not ready"
    }

    // Only update the LastTransitionTime if the status has changed
    if oldCondition == nil || oldCondition.Status != condition.Status {
        condition.LastTransitionTime = metav1.Now()
    } else {
        condition.LastTransitionTime = oldCondition.LastTransitionTime
    }

    meta.SetStatusCondition(&latestebd.Status.Conditions, condition)

    if oldCondition != nil &&
        condition.Status == oldCondition.Status &&
        condition.Reason == oldCondition.Reason &&
        condition.Message == oldCondition.Message {
        return vfmReady, nil
    }

    // Retry on conflict; RetryOnConflict uses exponential backoff to avoid exhausting the apiserver
    retryErr := retry.RetryOnConflict(retry.DefaultRetry, func() error {
        // Retrieve the latest version of the ebd before attempting the update,
        // then re-apply the condition to it - otherwise the fresh Get would
        // discard the change and the retried update would be a no-op
        if getErr := r.Get(ctx, req.NamespacedName, &latestebd); getErr != nil {
            return getErr
        }
        meta.SetStatusCondition(&latestebd.Status.Conditions, condition)
        if updateErr := r.Status().Update(ctx, &latestebd); updateErr != nil {
            return updateErr
        }
        return nil
    })
    if retryErr != nil {
        r.Log.Error(retryErr, "Failed to update vfm status after retries")
        return vfmReady, retryErr
    }

    return vfmReady, nil
}

In this updated version:

- The LastTransitionTime field is updated only when the condition's status changes. This ensures that LastTransitionTime accurately reflects when the status last changed rather than when the checkHealth function was last run, giving a more accurate timeline of when the resource's status actually changed instead of when the reconciliation loop happened to run.
- A retry mechanism is added using retry.RetryOnConflict to re-attempt the status update when a conflict error occurs. Note that you'll need to import the "k8s.io/client-go/util/retry" package for this. This is a common pattern for dealing with the "Operation cannot be fulfilled..." error. Inside the retry closure the condition is re-applied after fetching the latest object, so the retried update actually carries your change.

These changes should help to address the issues you were facing with updating the status and conditions of your Kubernetes resources. Remember that you may still occasionally get conflict errors, especially if there are other clients updating the same object. In these cases, the RetryOnConflict function will retry the update with the latest version of the object.
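If you prefer the Patch route mentioned above over Status().Update, a minimal sketch of the same status write using controller-runtime's merge patch could look like this. It reuses the latestebd and condition variables from the function above and needs the "sigs.k8s.io/controller-runtime/pkg/client" import; treat it as an illustration rather than a drop-in replacement:

// Snapshot the object before mutating its status
patchBase := client.MergeFrom(latestebd.DeepCopy())

// Apply the new/updated condition to the in-memory copy
meta.SetStatusCondition(&latestebd.Status.Conditions, condition)

// Send only the status diff to the API server; a merge patch is less
// likely to hit "the object has been modified" than a full Update
if err := r.Status().Patch(ctx, &latestebd, patchBase); err != nil {
    r.Log.Error(err, "failed to patch vfm status")
    return vfmReady, err
}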
Im having k8s controller which needs to install some resources and update the status and condition accordantly The flow in the reconcile is like following: Install the resource and don’t wait Call to the function checkAvailability and update the status accordantly if ready/ pending install/ error I’ve two main questions: This is the first time that I use status and conditions, is it right way or do I miss something Sometimes when I do the update r.Status().Update I got error :Operation cannot be fulfilled on eds.core.vtw.bmw.com “resouce01”: the object has been modified; please apply your changes to the latest version and try again , so I’ve added the check conditionChanged` which solve the problem but not sure if its correct as I update the status once and if it doesn’t chanted I don’t touch it so user can see status ready from a while ago and the reconcile doesn’t update the date and time for the ready condition as it skip it when its already “ready” I use the following func (r *ebdReconciler) checkHealth(ctx context.Context, req ctrl.Request, ebd ebdmanv1alpha1.ebd) (bool, error) { vfmReady, err := r.mbr.IsReady(ctx, req.Name, req.Namespace) condition := metav1.Condition{ Type: ebdmanv1alpha1.KubernetesvfmHealthy, Observebdneration: ebd.Generation, LastTransitionTime: metav1.Now(), } if err != nil { // There was an error checking readiness - Set status to false condition.Status = metav1.ConditionFalse condition.Reason = ebdmanv1alpha1.ReasonError condition.Message = fmt.Sprintf("Failed to check vfm readiness: %v", err) } else if vfmReady { // The vfm is ready - Set status to true condition.Status = metav1.ConditionTrue condition.Reason = ebdmanv1alpha1.ReasonReady condition.Message = "vfm custom resource is ready" } else { // The vfm is not ready - Set status to false condition.Status = metav1.ConditionFalse condition.Reason = ebdmanv1alpha1.ResourceProgressing condition.Message = "vfm custom resource is not ready" } // Get the latest version of the ebd latestebd := ebdmanv1alpha1.ebd{} if err := r.Get(ctx, req.NamespacedName, &latestebd); err != nil { return vfmReady, err } oldConditions := latestebd.Status.Conditions meta.SetStatusCondition(&latestebd.Status.Conditions, condition) if !conditionChanged(&oldConditions, &latestebd.Status.Conditions, ebdmanv1alpha1.KubernetesvfmHealthy) { return vfmReady, nil } if err := r.Status().Update(ctx, &latestebd); err != nil { r.Log.Error(err, "failed to update vfm status") return vfmReady, err } return vfmReady, nil } func conditionChanged(oldConditions, newConditions *[]metav1.Condition, conditionType string) bool { newCondition := meta.FindStatusCondition(*newConditions, conditionType) oldCondition := meta.FindStatusCondition(*oldConditions, conditionType) if oldCondition == nil && newCondition == nil { return false } if oldCondition == nil || newCondition == nil { return true } return oldCondition.Status != newCondition.Status || oldCondition.Reason != newCondition.Reason || oldCondition.Message != newCondition.Message }
Figured it out. The Kubernetes reclaim policy of the PersistentVolume (persistentVolumeReclaimPolicy) was set to Retain. I tried setting it to Delete, but the old data still persisted. I ended up logging on to the node and blowing away everything under the hostPath directory where the PV was mounted. This forced the pod to restart, and when it did it actually went through the correct Postgres init process.
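For anyone hitting the same thing, a rough sequence for forcing a clean re-init with the hostPath PV from the question might look like this (the resource names match the manifests below; the apply target and node path are whatever your setup actually uses):

# Check the reclaim policy on the PV
kubectl get pv postgres-pv-volume -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'

# Tear down the workload, claim and volume
kubectl delete deployment postgres-stockticker
kubectl delete pvc postgres-pv-claim
kubectl delete pv postgres-pv-volume

# On the node itself, wipe the hostPath data (Retain/Delete do not clean hostPath for you)
sudo rm -rf /mnt/data/*

# Recreate everything; with an empty data directory the image's init scripts run again
kubectl apply -f postgres.yaml   # or whatever file holds the manifests above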
###Instruction: I think I'm going crazy. I had kubernetes setup to deploy a postgres database. Worked fine. I wanted to add an init script so I made a custom image, deployed to private docker hub repo, and then updated the deployment to pull new image. But no matter what I do kubernetes keeps deploying the old database.... I updated the user, password, and database name to confirm that its not getting upgraded. Here's the kubectl file: # PostgreSQL StatefulSet Service apiVersion: v1 kind: Service metadata: name: postgres-db-lb spec: selector: app: postgres-stockticker ports: - name: "5432" port: 5432 targetPort: 5432 type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: name: postgres-stockticker spec: selector: matchLabels: app: postgres-stockticker replicas: 1 template: metadata: labels: app: postgres-stockticker spec: containers: - name: postgres-stockticker image: goodwinmcd/postgres:1.0 imagePullPolicy: Always ports: - containerPort: 5432 # Data Volume volumeMounts: - name: postgresql-db-disk mountPath: /var/lib/postgresql/data volumes: - name: postgresql-db-disk persistentVolumeClaim: claimName: postgres-pv-claim imagePullSecrets: - name: docker-hub-repo-key --- kind: PersistentVolume apiVersion: v1 metadata: name: postgres-pv-volume # Sets PV's name labels: type: local # Sets PV's type to local app: postgres-stockticker spec: storageClassName: manual capacity: storage: 5Gi # Sets PV Volume accessModes: - ReadWriteMany hostPath: path: "/mnt/data" --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: postgres-pv-claim # Sets name of PVC labels: app: postgres-stockticker spec: storageClassName: manual accessModes: - ReadWriteMany # Sets read and write access resources: requests: storage: 5Gi # Sets volume size I've verified the following: Pulled down the image in a local docker image and confirmed that it has the correct user, password, database, and tables in it. Described the pod to make sure it's pulling and using the correct image Logged onto the pod and confirmed that the new username, password, and database are present on the pod with the correct values. I tried running psql to connect to the database with the new username and password but no luck. It succeeded with the old credentials Recreated and mounted new pv and new pvc to the pod I deleted the service, the deployment, the pv, and the pvc and recreated them with the apply command I tried restarting the postgres service (service postgres restart) and get the following error: No PostgreSQL clusters exist; see "man pg_createcluster" ... (warning). So somehow kubernetes is pulling down this container that has the new username, password, and database baked into the image but then starting up the old postgres server on the cluster. It makes no sense to me. Someone please help. I'm losing my sanity making sure I'm not missing something obvious. Let me know if you need any other information. ### Response: Figured it out. The kubernetes retention policy of the volume was set to retain. I tried setting it to delete but the data was still persisting. I ended up logging on to the instance and blowing away everything where I mounted the PV. This forced the pod to restart, and when it did it actually went through the correct postgres init process.
I think I'm going crazy. I had kubernetes setup to deploy a postgres database. Worked fine. I wanted to add an init script so I made a custom image, deployed to private docker hub repo, and then updated the deployment to pull new image. But no matter what I do kubernetes keeps deploying the old database.... I updated the user, password, and database name to confirm that its not getting upgraded. Here's the kubectl file: # PostgreSQL StatefulSet Service apiVersion: v1 kind: Service metadata: name: postgres-db-lb spec: selector: app: postgres-stockticker ports: - name: "5432" port: 5432 targetPort: 5432 type: ClusterIP --- apiVersion: apps/v1 kind: Deployment metadata: name: postgres-stockticker spec: selector: matchLabels: app: postgres-stockticker replicas: 1 template: metadata: labels: app: postgres-stockticker spec: containers: - name: postgres-stockticker image: goodwinmcd/postgres:1.0 imagePullPolicy: Always ports: - containerPort: 5432 # Data Volume volumeMounts: - name: postgresql-db-disk mountPath: /var/lib/postgresql/data volumes: - name: postgresql-db-disk persistentVolumeClaim: claimName: postgres-pv-claim imagePullSecrets: - name: docker-hub-repo-key --- kind: PersistentVolume apiVersion: v1 metadata: name: postgres-pv-volume # Sets PV's name labels: type: local # Sets PV's type to local app: postgres-stockticker spec: storageClassName: manual capacity: storage: 5Gi # Sets PV Volume accessModes: - ReadWriteMany hostPath: path: "/mnt/data" --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: postgres-pv-claim # Sets name of PVC labels: app: postgres-stockticker spec: storageClassName: manual accessModes: - ReadWriteMany # Sets read and write access resources: requests: storage: 5Gi # Sets volume size I've verified the following: Pulled down the image in a local docker image and confirmed that it has the correct user, password, database, and tables in it. Described the pod to make sure it's pulling and using the correct image Logged onto the pod and confirmed that the new username, password, and database are present on the pod with the correct values. I tried running psql to connect to the database with the new username and password but no luck. It succeeded with the old credentials Recreated and mounted new pv and new pvc to the pod I deleted the service, the deployment, the pv, and the pvc and recreated them with the apply command I tried restarting the postgres service (service postgres restart) and get the following error: No PostgreSQL clusters exist; see "man pg_createcluster" ... (warning). So somehow kubernetes is pulling down this container that has the new username, password, and database baked into the image but then starting up the old postgres server on the cluster. It makes no sense to me. Someone please help. I'm losing my sanity making sure I'm not missing something obvious. Let me know if you need any other information.
Previous answers didn't work for me: the regex seems wrong, and with -A the AGE value is currently the sixth column, so:

kubectl get pods -A --sort-by=.metadata.creationTimestamp | awk 'match($6,/^[0-9]+[sm]{1}|^[0-5][0-9]m|^[1-9]h|^1[0-9]*h|^2[0-4]h/) {print $0}'

It may also be useful to append |grep -v Completed to filter out Completed pods/jobs.

https://regex101.com/r/l5gLKu/1

Client Version: v1.26.1
Kustomize Version: v4.5.7
Server Version: v1.25.5

Finally, as shell aliases:

alias kpods1day="kubectl get pods -A --sort-by=.metadata.creationTimestamp | awk 'match(\$6,/^[0-9]+[sm]{1}|^[0-5][0-9]m|^[1-9]h|^1[0-9]*h|^2[0-4]h/) {print \$0}' |grep -v \"Completed\|kube-system\""
alias kpods2days="kubectl get pods -A --sort-by=.metadata.creationTimestamp | awk 'match(\$6,/^[0-9]+[sm]{1}|^[0-5][0-9]m|^[1-9]h|^1[0-9]*h|^[2-3][0-9]h|^4[0-8]h/) {print \$0}' |grep -v \"Completed\|kube-system\""
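If you'd rather not depend on how kubectl formats the AGE column, a sketch that filters on the creationTimestamp itself could be used instead (this assumes GNU date and jq are available on the machine running kubectl):

# Pods created in the last 24 hours, across all namespaces
cutoff=$(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ)
kubectl get pods -A -o json \
  | jq -r --arg cutoff "$cutoff" '
      .items[]
      | select(.metadata.creationTimestamp > $cutoff)
      | "\(.metadata.namespace)\t\(.metadata.name)\t\(.metadata.creationTimestamp)"'

Because both timestamps are UTC ISO 8601 strings, a plain string comparison in jq is enough here.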
###Instruction: I would like to list pods created within 24 hours. I didn't find any kubectl commands or anything to get those. Could anyone please help me with the kubectl command to get only the pods created in last 24 hours. ### Response: Previous answers didn't work to me, the regex seems wrong and the field with the actual creationTimestamp is the sixth as per today, so : kubectl get pods -A --sort-by=.metadata.creationTimestamp | awk 'match($6,/^[0-9]+[sm]{1}|^[0-5][0-9]m|^[1-9]h|^1[0-9]*h|^2[0-4]h/) {print $0}' It may be also useful to append |grep -v Completed to filter out Completed pods/jobs. https://regex101.com/r/l5gLKu/1 Client Version: v1.26.1 Kustomize Version: v4.5.7 Server Version: v1.25.5 Fatality: alias kpods1day="kubectl get pods -A --sort-by=.metadata.creationTimestamp | awk 'match(\$6,/^[0-9]+[sm]{1}|^[0-5][0-9]m|^[1-9]h|^1[0-9]*h|^2[0-4]h/) {print \$0}' |grep -v \"Completed\|kube-system\"" alias kpods2days="kubectl get pods -A --sort-by=.metadata.creationTimestamp | awk 'match(\$6,/^[0-9]+[sm]{1}|^[0-5][0-9]m|^[1-9]h|^1[0-9]*h|^[2-3][0-9]h|^4[0-8]h/) {print \$0}' |grep -v \"Completed\|kube-system\""
I would like to list pods created within 24 hours. I didn't find any kubectl commands or anything to get those. Could anyone please help me with the kubectl command to get only the pods created in last 24 hours.
In the Argo CD ApplicationSet controller, you are using {{ index .Clusters.values "nameOverride" }} to access the nameOverride value. However, Clusters is an array in your clusters.yaml file, not a dictionary, so you should not index it directly as if it were one. (In YAML, an array or list is denoted by items beginning with a dash (-).) The .Clusters field will contain an array of clusters from your clusters.yaml file, and you want to access the values.nameOverride field of each cluster, but your current syntax treats Clusters as a dictionary that can be indexed directly with .values. You should instead iterate over the Clusters array to access each values dictionary individually. You may need to use a loop structure to do this, or modify your configuration so that values is not nested within an array.

You can also use a different structure for your clusters.yaml file. If you only have one cluster, you could structure it like this:

clusters:
  name: cluster1-eu-k8s-002
  url: https://cluster1.hcp.northeurope.azmk8s.io:443
  values:
    nameOverride: ReplaceWithThis

Then, in this case, you can directly access nameOverride with {{ index .Clusters.values "nameOverride" }}.

If you have multiple clusters and need a unique nameOverride for each, you could create a separate file for each cluster in your repository and adjust the files field in your ApplicationSet to match the new file structure. That is how a Git file generator would be able to read each of those files and access the values.nameOverride field of each cluster from its respective file.
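To illustrate the one-file-per-cluster approach, a sketch could look like the following (the directory layout and wildcard are made up for the example, not taken from the original repo):

# GitOps/argocd-apps/clusters/cluster1-eu-k8s-002.yaml
name: cluster1-eu-k8s-002
url: https://cluster1.hcp.northeurope.azmk8s.io:443
values:
  nameOverride: ReplaceWithThis

and in the ApplicationSet's git generator:

- git:
    repoURL: git@github.com:removed/cloud.git
    revision: HEAD
    files:
      - path: GitOps/argocd-apps/clusters/*.yaml

With this layout each matched file produces one set of parameters, and, as far as I can tell from the ApplicationSet docs, nested keys are flattened, so the template can reference the value as {{values.nameOverride}} (or {{ .values.nameOverride }} when goTemplate is enabled) instead of indexing into an array.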
###Instruction: I just started working with ArgoCD and I have an issue I can't find the answer for. I have a file called clusters.yaml in my Git repo: clusters: - name: cluster1-eu-k8s-002 url: https://cluster1.hcp.northeurope.azmk8s.io:443 values: nameOverride: ReplaceWithThis And I am using the following ApplicationSet in order to deploy Opentelemetry-collector on a bunch of clusters grouped under the label group:dev. apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: opentelemetry-applicationset namespace: argocd spec: generators: - git: repoURL: git@github.com:removed/cloud.git revision: HEAD files: - path: GitOps/argocd-apps/clusters.yaml - clusters: selector: matchLabels: argocd.argoproj.io/secret-type: cluster group: dev template: metadata: name: 'opentelemetry-{{name}}' spec: project: default sources: - repoURL: https://open-telemetry.github.io/opentelemetry-helm-charts chart: opentelemetry-collector targetRevision: 0.51.3 helm: valueFiles: - $values/GitOps/argocd-apps/opentelemetry-collector/values/values-dev.yaml parameters: - name: nameOverride value: '{{ index .Clusters.values "nameOverride" }}' - repoURL: git@github.com:removed/cloud.git ref: values destination: server: '{{ server }}' namespace: opentelemetry I am trying to replace a parameter called nameOverride with my value ReplaceWithThis from clusters.yaml. ArgoCD is not deploying my app because of this line: value: '{{ index .Clusters.values "nameOverride" }}' ArgoCD ApplicationSet controller logs throw some nonsense errors. I am sure I identified the problem correctly, because it works as expected if I just hardcore the string. What exactly is the issue with the way I am trying to pull that value? ### Response: In the Argo CD ApplicationSet controller, you are using {{ index .Clusters.values "nameOverride" }} to access the nameOverride value. However, Clusters is an array in your clusters.yaml file, not a dictionary. So, you should not be trying to directly index it as if it is a dictionary. (In YAML, an array (or list) is denoted by items beginning with a dash (-).) The .Clusters field will contain an array of clusters from your clusters.yaml file, and you want to access the values.nameOverride field of each cluster. However, your current syntax is treating Clusters as if it were a dictionary that can be indexed directly with .values. You should instead iterate over the Clusters array to access each values dictionary individually. You may need to use a loop structure to do this, or modify your configuration so that values is not nested within an array. You can also use a different structure for your clusters.yaml file. If you only have one cluster, you could structure your clusters.yaml file like this: clusters: name: cluster1-eu-k8s-002 url: https://cluster1.hcp.northeurope.azmk8s.io:443 values: nameOverride: ReplaceWithThis Then, in this case, you can directly access nameOverride with {{ index .Clusters.values "nameOverride" }}. If you have multiple clusters and need a unique nameOverride for each, you could create a separate file for each cluster in your repository and adjust the files field in your ApplicationSet to match the new file structure. That would be how a Git file generator would be able to read each of those files, and access the values.nameOverride field of each cluster in their respective file.
I just started working with ArgoCD and I have an issue I can't find the answer for. I have a file called clusters.yaml in my Git repo: clusters: - name: cluster1-eu-k8s-002 url: https://cluster1.hcp.northeurope.azmk8s.io:443 values: nameOverride: ReplaceWithThis And I am using the following ApplicationSet in order to deploy Opentelemetry-collector on a bunch of clusters grouped under the label group:dev. apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: opentelemetry-applicationset namespace: argocd spec: generators: - git: repoURL: git@github.com:removed/cloud.git revision: HEAD files: - path: GitOps/argocd-apps/clusters.yaml - clusters: selector: matchLabels: argocd.argoproj.io/secret-type: cluster group: dev template: metadata: name: 'opentelemetry-{{name}}' spec: project: default sources: - repoURL: https://open-telemetry.github.io/opentelemetry-helm-charts chart: opentelemetry-collector targetRevision: 0.51.3 helm: valueFiles: - $values/GitOps/argocd-apps/opentelemetry-collector/values/values-dev.yaml parameters: - name: nameOverride value: '{{ index .Clusters.values "nameOverride" }}' - repoURL: git@github.com:removed/cloud.git ref: values destination: server: '{{ server }}' namespace: opentelemetry I am trying to replace a parameter called nameOverride with my value ReplaceWithThis from clusters.yaml. ArgoCD is not deploying my app because of this line: value: '{{ index .Clusters.values "nameOverride" }}' ArgoCD ApplicationSet controller logs throw some nonsense errors. I am sure I identified the problem correctly, because it works as expected if I just hardcore the string. What exactly is the issue with the way I am trying to pull that value?
It's actually not related to WordPress; the same mechanism is general purpose and you can use it in a similar way. Docker mounts the secret content at /run/secrets/connection_string. Your code reads the environment variable, which gives it that path (/run/secrets/connection_string), and then your code can read that file at runtime to get the content of the secret - as simple as that.

Here you can refer to my GitHub repo for an example: https://github.com/harsh4870/docker-compose-secret
Node.js example code: https://github.com/harsh4870/docker-compose-secret/blob/main/index.js

Update: you can also write the docker-compose file with an entrypoint that sets the environment variable from the secret file and then starts the main process:

version: '3'
services:
  redis-server:
    image: 'redis'
  node-app:
    secrets:
      - connection_string
    build: .
    restart: "no"
    entrypoint: [ "sh", "-c", "export connection=$(cat /run/secrets/connection_string) && npm start"]
    ports:
      - "4001:8000"
secrets:
  connection_string:
    file: ./connection_string

Your application or code will then be able to use the env var connection and access the value directly.
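Applied to the testapp service from the question, the same entrypoint trick might look like this (only the changed part of the service is shown; "dotnet TestApp.dll" is a guess at how the image normally starts, so substitute your real start command):

testapp:
  image: testapp
  secrets:
    - connection_string
  # read the secret file at start-up, expose it under the key
  # ASP.NET Core already expects, then launch the app
  entrypoint: [ "sh", "-c", 'export ConnectionStrings__DefaultConnection="$(cat /run/secrets/connection_string)" && dotnet TestApp.dll' ]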
###Instruction: I have a docker-compose file that connects an app with a database. I'm using docker secrets to hide sensitive data, such as in this case DB credentials, while I can do so with the database service, I'm not able to do the same within the app service, specifically the ConnectionString. I couldn't find much in the documentation regarding this, neither did I find such cases online, there is one example that is given in the documentation with Wordpress service that suggests the solution I'm seeking, but that seems to be only Wordpress related. Where do I go from here? Wordpress example in documentation: wordpress: depends_on: - db image: wordpress:latest ports: - "8000:80" environment: WORDPRESS_DB_HOST: db:3306 WORDPRESS_DB_USER: wordpress WORDPRESS_DB_PASSWORD_FILE: /run/secrets/db_password secrets: - db_password My docker-compose file: version: '3.8' services: testapp: image: testapp depends_on: - testapp_db build: context: . dockerfile: TestApp/Dockerfile environment: - ASPNETCORE_ENVIRONMENT=Development - ASPNETCORE_URLS=http://+:80 # This works - ConnectionStrings__DefaultConnection=Server=testapp_db;Database=TestApp;UserId=postgres;Password=postgres; # But I want to do something like this - ConnectionStrings__DefaultConnection=/run/secrets/connection_string ports: - "60001:80" secrets: - connection_string networks: - testapp-network testapp_db: image: postgres:latest environment: POSTGRES_USER_FILE: /run/secrets/db_user POSTGRES_PASSWORD_FILE: /run/secrets/db_password POSTGRES_DB: TestApp ports: - "5500:5432" volumes: - postgres-data:/var/lib/postgresql/data - ./Persistence/Scripts/db_init.sql:/docker-entrypoint-initdb.d/create_tables.sql restart: always secrets: - db_user - db_password networks: - testapp-network volumes: postgres-data: secrets: db_password: file: ./run/secrets/docker-postgres/db_password db_user: file: ./run/secrets/docker-postgres/db_user connection_string: file: ./run/secrets/docker-postgres/connection_string networks: testapp-network: driver: bridge ### Response: It's actually not related to WordPress, it's general purpose only you can use similar way Docker mounts the secret content at /run/secrets/connection_string Your code can read the variable and it will get path from there which is /run/secrets/connection_string now your code can read that file runtime and get the content of the secret as simple as that Here you can refer my Github repo for example : https://github.com/harsh4870/docker-compose-secret Node js example code : https://github.com/harsh4870/docker-compose-secret/blob/main/index.js Update you can use write docker-compose file with entrypoint will set the Environment variable from a file and after that it will the Main process version: '3' services: redis-server: image: 'redis' node-app: secrets: - connection_string build: . restart: "no" entrypoint: [ "sh", "-c", "export connection=$(cat /run/secrets/connection_string) && npm start"] ports: - "4001:8000" secrets: connection_string: file: ./connection_string You application or code will be able to use Env var connection and access direct value
I have a docker-compose file that connects an app with a database. I'm using docker secrets to hide sensitive data, such as in this case DB credentials, while I can do so with the database service, I'm not able to do the same within the app service, specifically the ConnectionString. I couldn't find much in the documentation regarding this, neither did I find such cases online, there is one example that is given in the documentation with Wordpress service that suggests the solution I'm seeking, but that seems to be only Wordpress related. Where do I go from here? Wordpress example in documentation: wordpress: depends_on: - db image: wordpress:latest ports: - "8000:80" environment: WORDPRESS_DB_HOST: db:3306 WORDPRESS_DB_USER: wordpress WORDPRESS_DB_PASSWORD_FILE: /run/secrets/db_password secrets: - db_password My docker-compose file: version: '3.8' services: testapp: image: testapp depends_on: - testapp_db build: context: . dockerfile: TestApp/Dockerfile environment: - ASPNETCORE_ENVIRONMENT=Development - ASPNETCORE_URLS=http://+:80 # This works - ConnectionStrings__DefaultConnection=Server=testapp_db;Database=TestApp;UserId=postgres;Password=postgres; # But I want to do something like this - ConnectionStrings__DefaultConnection=/run/secrets/connection_string ports: - "60001:80" secrets: - connection_string networks: - testapp-network testapp_db: image: postgres:latest environment: POSTGRES_USER_FILE: /run/secrets/db_user POSTGRES_PASSWORD_FILE: /run/secrets/db_password POSTGRES_DB: TestApp ports: - "5500:5432" volumes: - postgres-data:/var/lib/postgresql/data - ./Persistence/Scripts/db_init.sql:/docker-entrypoint-initdb.d/create_tables.sql restart: always secrets: - db_user - db_password networks: - testapp-network volumes: postgres-data: secrets: db_password: file: ./run/secrets/docker-postgres/db_password db_user: file: ./run/secrets/docker-postgres/db_user connection_string: file: ./run/secrets/docker-postgres/connection_string networks: testapp-network: driver: bridge
As suggested by @Harsh Manver, you can set up a multi-cluster mesh outside Google Cloud to achieve internal service-to-service communication across Anthos clusters. As mentioned in the documentation:

- The cluster's Kubernetes control plane address and the gateway address need to be reachable from every cluster in the mesh.
- The Google Cloud project in which the GKE clusters are located should be allowed to create external load balancing types.
- We recommend that you use authorized networks and VPC firewall rules to restrict access and ensure the traffic is not exposed to the public internet.
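As a concrete illustration of that last recommendation on the GKE side, enabling authorized networks could look roughly like this (cluster name, zone and CIDR are placeholders for your own values):

gcloud container clusters update my-gke-cluster \
  --zone europe-north1-a \
  --enable-master-authorized-networks \
  --master-authorized-networks 203.0.113.0/24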
###Instruction: How to achieve internal service to service communication in Anthos multiple clusters Example service A deployed in GKE cluster and service B deployed in AKS cluster, here how we can call service A from service B (internally) ? ### Response: As suggested by @Harsh Manver you can set up a multi cluster mesh outside Google Cloud to achieve internal service to service communication in Anthos multiple clusters. As mentioned in the document: The cluster's Kubernetes control plane address and the gateway address need to be reachable from every cluster in the mesh. The Google Cloud project in which GKE clusters are located should be allowed to create external load balancing types. We recommend that you use authorized networks and VPC firewall rules to restrict the access and ensure traffic should not exposed to public internet
How to achieve internal service to service communication in Anthos multiple clusters Example service A deployed in GKE cluster and service B deployed in AKS cluster, here how we can call service A from service B (internally) ?
The answer is kubectl patch (docs). For example, given a deployment you can update the .spec.template.spec.containers[].image field using the following command (the name entry must match the container you want to patch, because container lists are merged by name):

kubectl patch deployment my-deploy \
  --patch '{"spec": {"template": {"spec": {"containers": [{"name": "my-container", "image": "nginx:1.12.5"}]}}}}'

This way, you only specify the fields of the object that have changed. You may also pass a file with the changed fields instead of inline JSON, e.g.:

cat ./patch.yaml
spec:
  template:
    spec:
      containers:
        - name: my-container
          image: nginx:1.12.5

kubectl patch deployment my-deploy --patch-file ./patch.yaml

Both versions support JSON and YAML to specify the changeset.
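kubectl patch also supports other patch types besides the default strategic merge patch. For instance, a JSON patch (RFC 6902) lets you target a list element by index, which can be handy in scripts; against the same hypothetical deployment:

kubectl patch deployment my-deploy --type='json' \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value": "nginx:1.12.5"}]'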
###Instruction: Is it possible to modify a live Kubernetes manifest, on-the-fly and non-interactively? I know kubectl edit allows for this behavior, but this requires user interaction in an editor that is opened when the command is invoked. I need to be able to do this without user interaction (for example in a script, etc.). Is this possible with a simple command - perhaps a variation of kubectl edit? ### Response: The Answer is kubectl patch (docs). For example, Given a deployment you can for example update the .spec.template.spec.containers[].image field using the following command: kubectl patch deployment my-deploy \ --patch '{"spec": {"template": {"spec": {"containers": [{"image": "nginx:1.12.5"}]}}}}' This way, you only specify the fields of the object that have changed. You may also pass a file with the changed fields instead of inline json, eg: cat ./patch.yaml spec: template: spec: containers: - image: nginx:1.12.5 kubectl patch deployment my-deploy --patch-file ./patch.yaml Both versions support json and yaml to specify the changeset.
Is it possible to modify a live Kubernetes manifest, on-the-fly and non-interactively? I know kubectl edit allows for this behavior, but this requires user interaction in an editor that is opened when the command is invoked. I need to be able to do this without user interaction (for example in a script, etc.). Is this possible with a simple command - perhaps a variation of kubectl edit?
We have faced the same issue, and the problem was related to the fact that the Ingress generated by cert-manager contained the Ingress Controller reference via the deprecated annotation kubernetes.io/ingress.class.

What we wanted:

spec:
  ingressClassName: my-traefik-controller

What we got:

annotations:
  kubernetes.io/ingress.class: "my-traefik-controller"

This way, the Traefik Ingress Controller found the Ingress, but was not able to find the service. There is a whole discussion on this topic in the cert-manager GitHub repo.

The solution was to use the cert-manager annotation acme.cert-manager.io/http01-edit-in-place: "true" on an existing Ingress:

annotations:
  cert-manager.io/cluster-issuer: my-issuer
  acme.cert-manager.io/http01-edit-in-place: "true"
spec:
  ingressClassName: my-traefik-controller

This way, only the existing Ingress (containing the correct ingressClassName reference) gets modified and no new solver Ingress gets created.
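Put together for the domain in the question, a sketch of such a pre-existing Ingress might look like this (the backend service name and port are placeholders for whatever actually serves domain.example.com, and the ingress class is assumed to be the one your Traefik install registers):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: domain-example-com
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
    acme.cert-manager.io/http01-edit-in-place: "true"
spec:
  ingressClassName: traefik
  rules:
    - host: domain.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app        # placeholder backend
                port:
                  number: 80
  tls:
    - hosts:
        - domain.example.com
      secretName: domain-example-com-tls

With edit-in-place, cert-manager temporarily adds the ACME solver path to this Ingress instead of creating a separate solver Ingress, so Traefik routes the challenge through an Ingress whose class reference it already understands.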
###Instruction: I've been trying to get the http-01 challenge method working with traefik v2 and cert-manager, both installed through their current helm charts. The LB endpoint can be requested through the ip and hostname, and I've tested that the http host passes on letsdebug (No issues were found with <domain>). Traefik lives in the traefik namespace, while cert-manager lives in its own cert-manager namespace. I've created a ClusterIssuer inside the cert-manager namespace: apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-staging spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory email: removed@example.com privateKeySecretRef: name: letsencrypt-staging solvers: - http01: ingress: class: traefik ingressTemplate: metadata: namespace: cert-manager annotations: traefik.ingress.kubernetes.io/router.entrypoints: web The ingressTemplate part is my attempt at making the randomly created ingress from cert-manager map to the correct traefik endpoint - this hasn't changed anything, but I leave it in in case I've fubared anything here. I've then created a Certificate and applied it - I've tried using both the cert-manager, traefik and default namespace for this, without any differing luck (the actual domain name has been replaced with domain.example.com): apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: domain.example.com spec: secretName: domain-example-com-tls issuerRef: kind: ClusterIssuer name: letsencrypt-staging commonName: domain.example.com dnsNames: - domain.example.com Looking at the logs for the cert-manager pod, I can see both a 404 error and then a "DNS A record error" - the DNS record error seems spurious as it can be resolved with other services and has been present for > 24hrs. 
I0413 12:37:51.478359 1 conditions.go:201] Setting lastTransitionTime for Certificate "domain.example.com" condition "Issuing" to 2022-04-13 12:37:51.478353098 +0000 UTC m=+6998.327004050 I0413 12:37:51.760018 1 controller.go:161] cert-manager/certificates-key-manager "msg"="re-queuing item due to optimistic locking on resource" "key"="default/domain.example.com" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"domain.example.com\": the object has been modified; please apply your changes to the latest version and try again" I0413 12:37:51.769026 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "domain.example.com-r98k2" condition "Approved" to 2022-04-13 12:37:51.769016958 +0000 UTC m=+6998.617667914 I0413 12:37:51.836517 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "domain.example.com-r98k2" condition "Ready" to 2022-04-13 12:37:51.836496254 +0000 UTC m=+6998.685147170 I0413 12:37:51.868932 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "domain.example.com-r98k2" condition "Ready" to 2022-04-13 12:37:51.868921204 +0000 UTC m=+6998.717572135 I0413 12:37:51.888553 1 controller.go:161] cert-manager/certificaterequests-issuer-acme "msg"="re-queuing item due to optimistic locking on resource" "key"="default/domain.example.com-r98k2" "error"="Operation cannot be fulfilled on certificaterequests.cert-manager.io \"domain.example.com-r98k2\": the object has been modified; please apply your changes to the latest version and try again" E0413 12:37:53.529269 1 controller.go:210] cert-manager/challenges/scheduler "msg"="error scheduling challenge for processing" "error"="Operation cannot be fulfilled on challenges.acme.cert-manager.io \"domain.example.com-r98k2-2809069211-587139531\": the object has been modified; please apply your changes to the latest version and try again" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" I0413 12:37:55.028477 1 pod.go:71] cert-manager/challenges/http01/ensurePod "msg"="creating HTTP01 challenge solver pod" "dnsName"="domain.example.com" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:37:55.237109 1 pod.go:59] cert-manager/challenges/http01/selfCheck/http01/ensurePod "msg"="found one existing HTTP01 solver pod" "dnsName"="domain.example.com" "related_resource_kind"="Pod" "related_resource_name"="cm-acme-http-solver-k8wl8" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:37:55.237350 1 service.go:43] cert-manager/challenges/http01/selfCheck/http01/ensureService "msg"="found one existing HTTP01 solver Service for challenge resource" "dnsName"="domain.example.com" "related_resource_kind"="Service" "related_resource_name"="cm-acme-http-solver-gvvkt" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:37:55.237539 1 ingress.go:99] cert-manager/challenges/http01/selfCheck/http01/ensureIngress "msg"="found one existing HTTP01 solver ingress" "dnsName"="domain.example.com" 
"related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-pbs7c" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" E0413 12:37:55.260608 1 sync.go:186] cert-manager/challenges "msg"="propagation check failed" "error"="wrong status code '404', expected '200'" "dnsName"="domain.example.com" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:37:55.299879 1 pod.go:59] cert-manager/challenges/http01/selfCheck/http01/ensurePod "msg"="found one existing HTTP01 solver pod" "dnsName"="domain.example.com" "related_resource_kind"="Pod" "related_resource_name"="cm-acme-http-solver-k8wl8" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:37:55.300223 1 service.go:43] cert-manager/challenges/http01/selfCheck/http01/ensureService "msg"="found one existing HTTP01 solver Service for challenge resource" "dnsName"="domain.example.com" "related_resource_kind"="Service" "related_resource_name"="cm-acme-http-solver-gvvkt" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:37:55.300570 1 ingress.go:99] cert-manager/challenges/http01/selfCheck/http01/ensureIngress "msg"="found one existing HTTP01 solver ingress" "dnsName"="domain.example.com" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-pbs7c" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" E0413 12:37:55.316802 1 sync.go:186] cert-manager/challenges "msg"="propagation check failed" "error"="wrong status code '404', expected '200'" "dnsName"="domain.example.com" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:38:05.261345 1 pod.go:59] cert-manager/challenges/http01/selfCheck/http01/ensurePod "msg"="found one existing HTTP01 solver pod" "dnsName"="domain.example.com" "related_resource_kind"="Pod" "related_resource_name"="cm-acme-http-solver-k8wl8" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:38:05.263416 1 service.go:43] cert-manager/challenges/http01/selfCheck/http01/ensureService "msg"="found one existing HTTP01 solver Service for challenge resource" "dnsName"="domain.example.com" "related_resource_kind"="Service" "related_resource_name"="cm-acme-http-solver-gvvkt" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:38:05.263822 1 
ingress.go:99] cert-manager/challenges/http01/selfCheck/http01/ensureIngress "msg"="found one existing HTTP01 solver ingress" "dnsName"="domain.example.com" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-pbs7c" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" E0413 12:38:25.541964 1 sync.go:386] cert-manager/challenges/acceptChallenge "msg"="error waiting for authorization" "error"="context deadline exceeded" "dnsName"="domain.example.com" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" E0413 12:38:25.542087 1 controller.go:166] cert-manager/challenges "msg"="re-queuing item due to error processing" "error"="context deadline exceeded" "key"="default/domain.example.com-r98k2-2809069211-587139531" I0413 12:38:30.542803 1 pod.go:59] cert-manager/challenges/http01/selfCheck/http01/ensurePod "msg"="found one existing HTTP01 solver pod" "dnsName"="domain.example.com" "related_resource_kind"="Pod" "related_resource_name"="cm-acme-http-solver-k8wl8" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:38:30.543062 1 service.go:43] cert-manager/challenges/http01/selfCheck/http01/ensureService "msg"="found one existing HTTP01 solver Service for challenge resource" "dnsName"="domain.example.com" "related_resource_kind"="Service" "related_resource_name"="cm-acme-http-solver-gvvkt" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:38:30.543218 1 ingress.go:99] cert-manager/challenges/http01/selfCheck/http01/ensureIngress "msg"="found one existing HTTP01 solver ingress" "dnsName"="domain.example.com" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-pbs7c" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" E0413 12:38:46.682039 1 sync.go:386] cert-manager/challenges/acceptChallenge "msg"="error waiting for authorization" "error"="acme: authorization error for domain.example.com: 400 urn:ietf:params:acme:error:dns: During secondary validation: DNS problem: query timed out looking up A for domain.example.com; DNS problem: query timed out looking up AAAA for domain.example.com" "dnsName"="domain.example.com" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" E0413 12:38:46.888731 1 controller.go:102] ingress 'default/cm-acme-http-solver-pbs7c' in work queue no longer exists Looking at Traefik's pod log, I can see that the ingress gets created, but that Traefik is unable to route any requests to it because it can't find the endpoint (this is what I tried to fix with the annotation in the ingressTemplate above): time="2022-04-13T12:37:57Z" level=error msg="Skipping service: no 
endpoints found" providerName=kubernetes namespace=default servicePort="&ServiceBackendPort{Name:,Number:8089,}" ingress=cm-acme-http-solver-pbs7c serviceName=cm-acme-http-solver-gvvkt time="2022-04-13T12:38:46Z" level=error msg="Skipping service: no endpoints found" serviceName=cm-acme-http-solver-gvvkt servicePort="&ServiceBackendPort{Name:,Number:8089,}" providerName=kubernetes ingress=cm-acme-http-solver-pbs7c namespace=default time="2022-04-13T12:38:46Z" level=error msg="Cannot create service: service not found" servicePort="&ServiceBackendPort{Name:,Number:8089,}" providerName=kubernetes ingress=cm-acme-http-solver-pbs7c namespace=default serviceName=cm-acme-http-solver-gvvkt time="2022-04-13T12:38:46Z" level=error msg="Cannot create service: service not found" servicePort="&ServiceBackendPort{Name:,Number:8089,}" namespace=default providerName=kubernetes serviceName=cm-acme-http-solver-gvvkt ingress=cm-acme-http-solver-pbs7c And there's where I'm stuck currently, since the plan is to use Traefik's IngressRoute CRD for exposing hosts and not use regular ingress entries. Another option would be to test the experimental Gateway support, but as this is the initial setup for a prod cluster I'm not planning to go down that route yet. Any ideas or further debug information that could be useful? ### Response: We have faced the same issue and the problem was related to the fact, that the Ingress generated by the certificate manger contained the Ingress Controller reference using the deprecated Annotation kubernetes.io/ingress.class. What we wanted: spec: ingressClassName: my-traefik-controller What we got: annotations: kubernetes.io/ingress.class: "my-traefik-controller" This way, the traefik Ingress Controlelr found the Ingress, but was not able to find the service. There is a whole discussion on this topic in the cert-manger Github repo. The solution was to use the cert-manager Annotation acme.cert-manager.io/http01-edit-in-place: "true" on an existing Ingress. annotations: cert-manager.io/cluster-issuer: my-issuer acme.cert-manager.io/http01-edit-in-place: "true" spec: ingressClassName: my-traefik-controller This way, only the existing Ingress (containing the correct ingressClassName reference) gets modified and no new solver Ingress gets created.
I've been trying to get the http-01 challenge method working with traefik v2 and cert-manager, both installed through their current helm charts. The LB endpoint can be requested through the ip and hostname, and I've tested that the http host passes on letsdebug (No issues were found with <domain>). Traefik lives in the traefik namespace, while cert-manager lives in its own cert-manager namespace. I've created a ClusterIssuer inside the cert-manager namespace: apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: letsencrypt-staging spec: acme: server: https://acme-staging-v02.api.letsencrypt.org/directory email: removed@example.com privateKeySecretRef: name: letsencrypt-staging solvers: - http01: ingress: class: traefik ingressTemplate: metadata: namespace: cert-manager annotations: traefik.ingress.kubernetes.io/router.entrypoints: web The ingressTemplate part is my attempt at making the randomly created ingress from cert-manager map to the correct traefik endpoint - this hasn't changed anything, but I leave it in in case I've fubared anything here. I've then created a Certificate and applied it - I've tried using both the cert-manager, traefik and default namespace for this, without any differing luck (the actual domain name has been replaced with domain.example.com): apiVersion: cert-manager.io/v1 kind: Certificate metadata: name: domain.example.com spec: secretName: domain-example-com-tls issuerRef: kind: ClusterIssuer name: letsencrypt-staging commonName: domain.example.com dnsNames: - domain.example.com Looking at the logs for the cert-manager pod, I can see both a 404 error and then a "DNS A record error" - the DNS record error seems spurious as it can be resolved with other services and has been present for > 24hrs. I0413 12:37:51.478359 1 conditions.go:201] Setting lastTransitionTime for Certificate "domain.example.com" condition "Issuing" to 2022-04-13 12:37:51.478353098 +0000 UTC m=+6998.327004050 I0413 12:37:51.760018 1 controller.go:161] cert-manager/certificates-key-manager "msg"="re-queuing item due to optimistic locking on resource" "key"="default/domain.example.com" "error"="Operation cannot be fulfilled on certificates.cert-manager.io \"domain.example.com\": the object has been modified; please apply your changes to the latest version and try again" I0413 12:37:51.769026 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "domain.example.com-r98k2" condition "Approved" to 2022-04-13 12:37:51.769016958 +0000 UTC m=+6998.617667914 I0413 12:37:51.836517 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "domain.example.com-r98k2" condition "Ready" to 2022-04-13 12:37:51.836496254 +0000 UTC m=+6998.685147170 I0413 12:37:51.868932 1 conditions.go:261] Setting lastTransitionTime for CertificateRequest "domain.example.com-r98k2" condition "Ready" to 2022-04-13 12:37:51.868921204 +0000 UTC m=+6998.717572135 I0413 12:37:51.888553 1 controller.go:161] cert-manager/certificaterequests-issuer-acme "msg"="re-queuing item due to optimistic locking on resource" "key"="default/domain.example.com-r98k2" "error"="Operation cannot be fulfilled on certificaterequests.cert-manager.io \"domain.example.com-r98k2\": the object has been modified; please apply your changes to the latest version and try again" E0413 12:37:53.529269 1 controller.go:210] cert-manager/challenges/scheduler "msg"="error scheduling challenge for processing" "error"="Operation cannot be fulfilled on challenges.acme.cert-manager.io 
\"domain.example.com-r98k2-2809069211-587139531\": the object has been modified; please apply your changes to the latest version and try again" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" I0413 12:37:55.028477 1 pod.go:71] cert-manager/challenges/http01/ensurePod "msg"="creating HTTP01 challenge solver pod" "dnsName"="domain.example.com" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:37:55.237109 1 pod.go:59] cert-manager/challenges/http01/selfCheck/http01/ensurePod "msg"="found one existing HTTP01 solver pod" "dnsName"="domain.example.com" "related_resource_kind"="Pod" "related_resource_name"="cm-acme-http-solver-k8wl8" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:37:55.237350 1 service.go:43] cert-manager/challenges/http01/selfCheck/http01/ensureService "msg"="found one existing HTTP01 solver Service for challenge resource" "dnsName"="domain.example.com" "related_resource_kind"="Service" "related_resource_name"="cm-acme-http-solver-gvvkt" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:37:55.237539 1 ingress.go:99] cert-manager/challenges/http01/selfCheck/http01/ensureIngress "msg"="found one existing HTTP01 solver ingress" "dnsName"="domain.example.com" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-pbs7c" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" E0413 12:37:55.260608 1 sync.go:186] cert-manager/challenges "msg"="propagation check failed" "error"="wrong status code '404', expected '200'" "dnsName"="domain.example.com" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:37:55.299879 1 pod.go:59] cert-manager/challenges/http01/selfCheck/http01/ensurePod "msg"="found one existing HTTP01 solver pod" "dnsName"="domain.example.com" "related_resource_kind"="Pod" "related_resource_name"="cm-acme-http-solver-k8wl8" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:37:55.300223 1 service.go:43] cert-manager/challenges/http01/selfCheck/http01/ensureService "msg"="found one existing HTTP01 solver Service for challenge resource" "dnsName"="domain.example.com" "related_resource_kind"="Service" "related_resource_name"="cm-acme-http-solver-gvvkt" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:37:55.300570 1 ingress.go:99] 
cert-manager/challenges/http01/selfCheck/http01/ensureIngress "msg"="found one existing HTTP01 solver ingress" "dnsName"="domain.example.com" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-pbs7c" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" E0413 12:37:55.316802 1 sync.go:186] cert-manager/challenges "msg"="propagation check failed" "error"="wrong status code '404', expected '200'" "dnsName"="domain.example.com" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:38:05.261345 1 pod.go:59] cert-manager/challenges/http01/selfCheck/http01/ensurePod "msg"="found one existing HTTP01 solver pod" "dnsName"="domain.example.com" "related_resource_kind"="Pod" "related_resource_name"="cm-acme-http-solver-k8wl8" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:38:05.263416 1 service.go:43] cert-manager/challenges/http01/selfCheck/http01/ensureService "msg"="found one existing HTTP01 solver Service for challenge resource" "dnsName"="domain.example.com" "related_resource_kind"="Service" "related_resource_name"="cm-acme-http-solver-gvvkt" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:38:05.263822 1 ingress.go:99] cert-manager/challenges/http01/selfCheck/http01/ensureIngress "msg"="found one existing HTTP01 solver ingress" "dnsName"="domain.example.com" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-pbs7c" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" E0413 12:38:25.541964 1 sync.go:386] cert-manager/challenges/acceptChallenge "msg"="error waiting for authorization" "error"="context deadline exceeded" "dnsName"="domain.example.com" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" E0413 12:38:25.542087 1 controller.go:166] cert-manager/challenges "msg"="re-queuing item due to error processing" "error"="context deadline exceeded" "key"="default/domain.example.com-r98k2-2809069211-587139531" I0413 12:38:30.542803 1 pod.go:59] cert-manager/challenges/http01/selfCheck/http01/ensurePod "msg"="found one existing HTTP01 solver pod" "dnsName"="domain.example.com" "related_resource_kind"="Pod" "related_resource_name"="cm-acme-http-solver-k8wl8" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:38:30.543062 1 service.go:43] cert-manager/challenges/http01/selfCheck/http01/ensureService "msg"="found one existing HTTP01 solver Service for challenge resource" 
"dnsName"="domain.example.com" "related_resource_kind"="Service" "related_resource_name"="cm-acme-http-solver-gvvkt" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" I0413 12:38:30.543218 1 ingress.go:99] cert-manager/challenges/http01/selfCheck/http01/ensureIngress "msg"="found one existing HTTP01 solver ingress" "dnsName"="domain.example.com" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-pbs7c" "related_resource_namespace"="default" "related_resource_version"="v1" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" E0413 12:38:46.682039 1 sync.go:386] cert-manager/challenges/acceptChallenge "msg"="error waiting for authorization" "error"="acme: authorization error for domain.example.com: 400 urn:ietf:params:acme:error:dns: During secondary validation: DNS problem: query timed out looking up A for domain.example.com; DNS problem: query timed out looking up AAAA for domain.example.com" "dnsName"="domain.example.com" "resource_kind"="Challenge" "resource_name"="domain.example.com-r98k2-2809069211-587139531" "resource_namespace"="default" "resource_version"="v1" "type"="HTTP-01" E0413 12:38:46.888731 1 controller.go:102] ingress 'default/cm-acme-http-solver-pbs7c' in work queue no longer exists Looking at Traefik's pod log, I can see that the ingress gets created, but that Traefik is unable to route any requests to it because it can't find the endpoint (this is what I tried to fix with the annotation in the ingressTemplate above): time="2022-04-13T12:37:57Z" level=error msg="Skipping service: no endpoints found" providerName=kubernetes namespace=default servicePort="&ServiceBackendPort{Name:,Number:8089,}" ingress=cm-acme-http-solver-pbs7c serviceName=cm-acme-http-solver-gvvkt time="2022-04-13T12:38:46Z" level=error msg="Skipping service: no endpoints found" serviceName=cm-acme-http-solver-gvvkt servicePort="&ServiceBackendPort{Name:,Number:8089,}" providerName=kubernetes ingress=cm-acme-http-solver-pbs7c namespace=default time="2022-04-13T12:38:46Z" level=error msg="Cannot create service: service not found" servicePort="&ServiceBackendPort{Name:,Number:8089,}" providerName=kubernetes ingress=cm-acme-http-solver-pbs7c namespace=default serviceName=cm-acme-http-solver-gvvkt time="2022-04-13T12:38:46Z" level=error msg="Cannot create service: service not found" servicePort="&ServiceBackendPort{Name:,Number:8089,}" namespace=default providerName=kubernetes serviceName=cm-acme-http-solver-gvvkt ingress=cm-acme-http-solver-pbs7c And there's where I'm stuck currently, since the plan is to use Traefik's IngressRoute CRD for exposing hosts and not use regular ingress entries. Another option would be to test the experimental Gateway support, but as this is the initial setup for a prod cluster I'm not planning to go down that route yet. Any ideas or further debug information that could be useful?
Can you try this? #!/bin/bash pod_name="devopslk" namespace="devops" # Get the pod status pod_status=$(kubectl get pod "$pod_name" -n "$namespace" -o jsonpath='{.status.phase}') # Check if the pod is actually in the Running phase if [[ "$pod_status" == "Running" ]]; then echo "Pod running" else echo "Pod not running. Current status: ${pod_status:-not found}" fi
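If you also need to cover the original case of matching pods whose names merely contain a keyword rather than an exact pod name, a rough sketch along the same lines (the pod_name substring and namespace below are placeholders you would adjust):

#!/bin/bash
pod_name="inventory"   # substring to look for in pod names
namespace="back-end"

# List name and phase for every pod, keep only the ones whose name contains the substring
matches=$(kubectl get pods -n "$namespace" --no-headers \
  -o custom-columns=NAME:.metadata.name,PHASE:.status.phase | grep "$pod_name")

# Report running only if at least one pod matched and none of them is outside the Running phase
if [[ -n "$matches" ]] && ! echo "$matches" | awk '{print $2}' | grep -qv Running; then
  echo "Pod running"
else
  echo "Pod not running (matching pods and phases below):"
  echo "$matches"
fi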
In my k8s cluster I have pods running whose names contain the word "inventory" (actually 2 pods with this name). I need to write a shell script that deletes the deployment and re-deploys it, and then checks whether the specific pod is running or not and displays a message. Here is what I tried. #!/bin/bash cd /apps/application/application_yamls_develop/deployment-artifacts/inventory-service kubectl delete -f inventory-service.yaml -n back-end kubectl apply -f inventory-service.yaml -n back-end sleep 20 pod_name="inventory" namespace="back-end" # Get the pod status pod_status=$(kubectl get pods -n "$namespace" -o jsonpath="{.items[?(@.metadata.name.includes('$pod_name'))].status.phase}") # Check if any matching pod is running if [[ -n "$pod_status" ]]; then echo "Pod running" else echo "Pod not running" fi But this is giving the below error. error: error parsing jsonpath {.items[?(@.metadata.name.includes('inventory'))].status.phase}, unclosed array expect ] Can someone please point out the issue. Thanks!
I need a Listener in the second microservice in order to receive the message from Queue. Why polling, when you can be notified through the KubeMQ Pub/Sub pattern? In the context of message queues, "polling" refers to a process where your application continually checks the queue to see if a new message has arrived. This can be inefficient, as it requires your application to make many requests when there may not be any new messages to process. On the other hand, a "listener" (also known as a subscriber or a callback) is a function that is automatically called when a new message arrives. This is more efficient because your application does not need to continually check the queue; instead, it can wait and react when a message arrives. The Publish-Subscribe pattern (or pub/sub) is a messaging pattern supported by KubeMQ, and it differs slightly from the queue-based pattern you are currently using. In the pub/sub pattern, senders of messages (publishers) do not program the messages to be sent directly to specific receivers (subscribers). Instead, the programmer “publishes” messages (events), without any knowledge of any subscribers there may be. Similarly, subscribers express interest in one or more events and only receive messages that are of interest, without any knowledge of any publishers. In this pattern, KubeMQ provides two types of event handling, Events and Events Store. The Events type is an asynchronous real-time Pub/Sub pattern, meaning that messages are sent and received in real-time but only if the receiver is currently connected to KubeMQ. There is no message persistence available in this pattern. The Events Store type, however, is an asynchronous Pub/Sub pattern with persistence. This means that messages are stored and can be replayed by any receiver, even if they were not connected at the time the message was sent. The system also supports replaying all events from the first stored event, replaying only the last event, or only sending new events. However, it is important to note that the uniqueness of a client ID is essential when using Events Store. At any given time, only one receiver can connect with a unique Client ID. If two receivers try to connect to KubeMQ with the same Client ID, one of them will be rejected. Messages can only be replayed once per Client ID and Subscription type. If a receiver disconnects and reconnects with any subscription type, only new events will be delivered for this specific receiver with that Client ID. To replay messages, a receiver needs to connect with a different Client ID. Given these features, if you switch your architecture to a pub/sub pattern using the Events Store type, your second microservice could instantly receive messages as they are added into the channel, and even replay old messages if needed. You would need to ensure each microservice has a unique Client ID and manages its subscriptions appropriately. However, the pub/sub pattern may require changes in the architecture and coding of your microservices, so you would need to evaluate whether this change is suitable for your use case. It is also important to note that the pub/sub pattern, especially with message persistence, may have different performance characteristics and resource requirements compared to the queue pattern. Here is a high-level overview of the classes that are present and their usage: Channel.java: This class appears to represent a channel for sending events in a publish-subscribe model. 
ChannelParameters.java: This class defines the parameters for creating a Channel instance. Event.java: This class represents an event that can be sent via a Channel. EventReceive.java: This class is used to process received events. Result.java: This class contains the result of a sent event. Subscriber.java: This class allows you to subscribe to a channel and handle incoming events. So here is an example of how you might use the existing classes to publish and subscribe to messages. import io.kubemq.sdk.Channel; import io.kubemq.sdk.ChannelParameters; import io.kubemq.sdk.Result; import io.kubemq.sdk.event.Event; import io.kubemq.sdk.event.Subscriber; public class KubeMQExample { public static void main(String[] args) { try { // Initialize ChannelParameters ChannelParameters params = new ChannelParameters(); params.setChannel("your_channel"); params.setClient("your_client_id"); // Initialize a new Channel Channel channel = new Channel(params); // Create a new Event Event event = new Event(); event.setBody("Your message here".getBytes()); // Send the Event Result sendResult = channel.SendEvent(event); System.out.println("Event sent, Result: " + sendResult.getIsError()); // Initialize a new Subscriber Subscriber subscriber = new Subscriber("localhost:5000"); // Subscribe to the Channel subscriber.SubscribeToEvents(params, (eventReceive) -> { System.out.println("Received Event: " + new String(eventReceive.getBody())); }); } catch (Exception e) { e.printStackTrace(); } } } Do note that this code is based on the existing SDK and may not reflect the functionality of the original code. You will need to replace "your_channel" and "your_client_id" with your actual channel name and client ID. The event body can also be replaced with the actual message you want to send. The Subscriber class is used here to listen for and process incoming events. The SubscribeToEvents method takes a ChannelParameters object and a lambda function that processes received events. Do also note that the Queue and EventsStore classes seem to have been removed from the SDK. The SDK now seems to primarily use the publish-subscribe model, which differs from queue-based communication in that messages are not stored if no consumer is available to consume them. Events Store was a hybrid model that allowed for persistence in the pub/sub model, storing events that could be replayed by receivers connecting at a later time. For your original functionality of reading queue messages and peeking at messages in a queue, unfortunately, it does not seem like the current state of the Kubemq Java SDK on the provided GitHub repository supports these actions.
Currently, I have two microservices. I want to send message to KubeMQ Queue from first microservice and want it to be received by Second microservice. I am able to send message to a KubeMQ Queue using below code: Queue queue = new Queue("QueueName", "ClientID", "localhost:50000"); SendMessageResult resSend = queue.SendQueueMessage(new Message() .setBody(Converter.ToByteArray("some-simple_queue-queue-message")) .setMetadata("someMeta")); if (resSend.getIsError()) { System.out.printf("Message enqueue error, error: %s", resSend.getError()); } I need Listener in the second microservice in order to receive the message from Queue. Below is code provided by KubeMQ to receive the message: Queue queue = new Queue("QueueName", "ClientID", "localhost:50000"); ReceiveMessagesResponse resRec = queue.ReceiveQueueMessages(10, 1); if (resRec.getIsError()) { System.out.printf("Message dequeue error, error: %s", resRec.getError()); return; } System.out.printf("Received Messages %s:", resRec.getMessagesReceived()); for (Message msg : resRec.getMessages()) { System.out.printf("MessageID: %s, Body:%s", msg.getMessageID(), Converter.FromByteArray(msg.getBody())); } How to configure it in the second microservice to receive message instantly as they are added into the queue? Please help.
My advice: don't do it! You don't want to have to deal with certs inside your pods. Get the TLS done on your load balancers, managed by your cloud provider, or managed in your cluster by cert-manager and a proper cert provider (like Let's Encrypt) If you really want to terminate TLS in your pod, then use TCP as the protocol, and that will let encrypted messages through.
I am trying to do tls termination at pod level. ingress(nlb) -->service(clusterip) --> pod I have set ingress with ssl passthrough by adding the following annotations, annotations: nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" nginx.ingress.kubernetes.io/ssl-passthrough: "true" nginx.ingress.kubernetes.io/ssl-redirect: "true" After this, I am not sure what needs to be done at pod to terminate tls. I am unable to find any document related to this. Is mTLS the only solution? I need to do tls termination for only 2 pods that are running in the namespace. I tried below configuration. It is not working. Am I missing anything? nginx ingress controller: enabled ssl-passthrough Ingress annotations: nginx.ingress.kubernetes.io/backend-protocol: HTTPS nginx.ingress.kubernetes.io/force-ssl-redirect: "true" nginx.ingress.kubernetes.io/ssl-passthrough: "true" nginx.ingress.kubernetes.io/ssl-redirect: "true" Ingress spec: spec: rules: - host: xyz.test.com http: paths: - backend: service: name: xyz port: number: 443 path: / pathType: Prefix Service - ports section in spec: ports: - name: https port: 443 protocol: TCP targetPort: 1443 nginx sidecar container in app pod: ports: - containerPort: 1443 name: https protocol: TCP volumeMounts: - name: secret mountPath: /etc/nginx/ssl readOnly: true volumes: - name: secret secret: secretName: xyz-tls I could exec into the pod and see that the certificates and key files are present in /etc/nginx/ssl I also updated /etc/nginx/conf.d with following changes: server { listen 1443 default_server ssl; listen [::]:1443 default_server ipv6only=on; ssl_certificate /etc/nginx/ssl/tls.crt; ssl_certificate_key /etc/nginx/ssl/tls.key; .....
With certbot there are two types of verification methods: DNS and HTTP. The DNS method verifies a record in DNS, while the HTTP method checks for a 200 response from your endpoint. In CI/CD you can go with the HTTP method, which checks the domain's status. Refer to the repo for more: Auth script certbot certonly $CERTBOT_DEBUG --non-interactive --manual --preferred-challenges=http \ -m "$LETSENCRYPT_CONTACT_EMAIL" \ --manual-auth-hook authenticator.sh \ --no-self-upgrade --agree-tos \ $DOMAIN_OPTS Refer to the gist for DNS verification. Another option is a manual one: during CI/CD, or if you just want to set a cert in Docker, I would suggest downloading/creating a cert first and using it multiple times, e.g. with SSL For Free. You create the cert locally first and re-use it multiple times by injecting it during the CI/CD process, storing it in a variable or downloading it from a bucket if you are using any cloud.
Im using nginx as a reverse proxy to serve an Express.js backend and a Vue.js frontend. The whole application runs in two Docker Compose images, one for development, and one for production. I'd like to be able to test the production image locally with HTTPS (especially because Vue's PWA framework depends on HTTPS to properly register service workers). The issue is that to configure a let's encrypt certificate with certbot, I need to be running on a server associated with my domain through DNS. What is the correct way to set up a CI/CD workflow where I can test HTTPS locally and also push it to my production server? Do I have to buy a certificate from a different CA and update it manually? I tried using certbot, but it will not work since I am not on the server that is associated with mine through DNS.
JMESPath does have a to_string function. So, you can use a JMESPath query in the path field and do: path = "to_string(port)"
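Applied to the parameters block from the question, only the path changes (the other fields stay exactly as in the original resource):

jmesPath = [{
  path        = "to_string(port)"
  objectAlias = "test123"
}]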
I am trying to mount a Kubernetes secret via a kubernetes_manifest like this, however, port is a number 5342 resource "kubernetes_manifest" "test" { manifest = { "apiVersion" = "secrets-store.csi.x-k8s.io/v1alpha1" "kind" = "SecretProviderClass" "metadata" = { namespace = "test-namespace" "name" = "test" } "spec" = { "provider" = "aws" "secretObjects" = [{ "secretName" = "test" "type" = "Opaque" data = [{ "objectName" = "test123" "key" = "port" } ] }] "parameters" = { objects = yamlencode([{ objectName = aws_secretsmanager_secret.test.name objectType = "secretsmanager" objectAlias = "test" jmesPath = [{ path = "port" objectAlias = "test123" }] }]) } } } } When I terraform apply this, I get the error: err: rpc error: code = Unknown desc = Invalid JMES search result type for path:port. Only string is allowed Is there a way to mount port despite it being a number? Can I convert it to a string somehow?
Your healthCheck type field on the BackendConfig is TCP; the docs say only HTTP/HTTPS is allowed: https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-configuration#direct_health
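A sketch of the corrected BackendConfig, keeping the values from the question but switching the health check to HTTP; the requestPath and port below are assumptions, so point them at an endpoint your container actually serves:

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-app-backendconfig
  namespace: my-namespace
spec:
  healthCheck:
    checkIntervalSec: 20
    timeoutSec: 1
    healthyThreshold: 1
    unhealthyThreshold: 5
    type: HTTP              # TCP is not accepted for BackendConfig health checks
    requestPath: /healthz   # assumed path; must return 200 on your app
    port: 5555              # assumed: the container port behind the Service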
I have a deployment. The pod container's have no readinessProbe(s), because the healtcheck will be configured using a BackendConfig. The service is as follows: --- apiVersion: v1 kind: Service metadata: name: my-app-service namespace: my-namespace annotations: cloud.google.com/neg: '{"ingress": true}' cloud.google.com/backend-config: '{"default": "my-app-backendconfig"}' spec: type: NodePort externalTrafficPolicy: Local ports: - name: flower-nodeport-port port: 80 targetPort: 5555 protocol: TCP selector: app: my-app And this is the BackendConfig to create the health check: apiVersion: cloud.google.com/v1 kind: BackendConfig metadata: name: my-app-backendconfig namespace: my-namespace labels: app: my-app spec: healthCheck: checkIntervalSec: 20 timeoutSec: 1 healthyThreshold: 1 unhealthyThreshold: 5 type: TCP The problem is that Google Cloud is not applying the healtcheck as I described it. Check the screenshot below: As you can see, values like "Unhealthy threshold" and "Timeout" are not being taken into account. What am I doing wrong?
As per this doc by Platform9, Restoring ETCD Backup to Recover Cluster From Loss of Quorum: master nodes going offline, or a lack of connectivity between the master nodes leading to an unhealthy cluster state, can cause a loss of quorum. The idea behind an etcd restore is to restore etcd from a backup using etcdctl and reduce the master count to 1, so that it starts up as a brand-new etcd cluster once that is finished; we may need to make some adjustments manually. When things are back up, we increase the master count one node at a time so the remaining master nodes rejoin the cluster. If the master nodes are hard offline or unreachable after restoring from the etcd backup, proceed with deauthorizing those nodes as well. From a kubectl perspective, the detached master nodes will be seen in the "NotReady" state; delete these nodes from the cluster. At this point, the cluster should be back up and running with a single master node; verify the same. Once the nodes are scaled back up, they should have a PMK stack running on them, which will ensure the etcd members sync amongst each other. Refer to this doc 1 and doc 2 by Red Hat for more information about Restoring etcd quorum and Replacing an unhealthy etcd member.
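If the remaining three masters still form a quorum and you only want to drop the two dead members rather than restore from backup, the usual tooling is etcdctl's member commands; a rough sketch, run from a still-healthy etcd member (the required --endpoints and certificate flags are omitted here and depend on how your OKD etcd static pods are set up):

export ETCDCTL_API=3
etcdctl member list -w table      # note the member IDs of the two offline masters
etcdctl member remove <MEMBER_ID> # repeat once per dropped member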
In a scenario where a zone/dc dropped and 2 master nodes out of 5 are now offline, I would like to restore etcd on the remaining 3 master nodes. So far the best I could manage was restoring from etcd backup, but I found myself needing to destroy the remaining 2 and recreating them. Otherwise, I got a split brain issue. Is there a way to remove the 2 dropped members from etcd and restore quorum for the remaining 3? (OKD 4.7)
Update your script with the Postgres user and database name to pass to the psql script as environment variables. apiVersion: v1 kind: ConfigMap metadata: name: {{ .Values.postgres.configmap.name }} namespace: {{ .Release.Namespace }} data: init_script.sh: |+ #!/bin/bash set -e psql -v ON_ERROR_STOP=1 -U $POSTGRES_USER --dbname $POSTGRES_DB <<-EOSQL CREATE DATABASE my-db; GRANT ALL PRIVILEGES ON DATABASE my-db to $POSTGRES_USER; ... EOSQL Then, add the two environment variables to the env section of your Deployment file. apiVersion: apps/v1 kind: Deployment spec: template: spec: containers: - name: {{ .Values.postgres.image }} env: - name: POSTGRES_USER value: {{ .Values.postgres.user }} - name: POSTGRES_DB value: {{ .Values.postgres.db }}
I've setup a PostgreSQL database with Helm Chart using this ConfigMap, that contains the initial script which allows to create the databases and schemas I need: apiVersion: v1 kind: ConfigMap metadata: name: {{ .Values.postgres.configmap }} namespace: {{ .Release.Namespace }} data: init_script.sh: |+ #!/bin/bash psql -v <<-EOSQL CREATE DATABASE my-db; GRANT ALL PRIVILEGES ON DATABASE my-db to <user> ... I want to add an environmental variable for the user and one for the database name in order to: use them when I connect to psql use the user name to give the correct rights to access the Database created with the script. I don't know how to set them.
The issue you're encountering is related to the way you're accessing the nameOverride value from clusters.yaml in the value field of your ArgoCD ApplicationSet. In your current configuration, you're using the following expression to access the value: value: '{{ index .Clusters.values "nameOverride" }}' However, the problem lies in the fact that the values field is defined at the top level of the clusters.yaml file, not nested under each cluster. Therefore, the correct path to access the value would be: value: '{{ index .ApplicationSetParameters.values "nameOverride" }}' By modifying the expression as shown above, you should be able to access the nameOverride value correctly from clusters.yaml and deploy your application successfully.
I just started working with ArgoCD and I have an issue I can't find the answer for. I have a file called clusters.yaml in my Git repo: clusters: - name: cluster1-eu-k8s-002 url: https://cluster1.hcp.northeurope.azmk8s.io:443 values: nameOverride: ReplaceWithThis And I am using the following ApplicationSet in order to deploy Opentelemetry-collector on a bunch of clusters grouped under the label group:dev. apiVersion: argoproj.io/v1alpha1 kind: ApplicationSet metadata: name: opentelemetry-applicationset namespace: argocd spec: generators: - git: repoURL: git@github.com:removed/cloud.git revision: HEAD files: - path: GitOps/argocd-apps/clusters.yaml - clusters: selector: matchLabels: argocd.argoproj.io/secret-type: cluster group: dev template: metadata: name: 'opentelemetry-{{name}}' spec: project: default sources: - repoURL: https://open-telemetry.github.io/opentelemetry-helm-charts chart: opentelemetry-collector targetRevision: 0.51.3 helm: valueFiles: - $values/GitOps/argocd-apps/opentelemetry-collector/values/values-dev.yaml parameters: - name: nameOverride value: '{{ index .Clusters.values "nameOverride" }}' - repoURL: git@github.com:removed/cloud.git ref: values destination: server: '{{ server }}' namespace: opentelemetry I am trying to replace a parameter called nameOverride with my value ReplaceWithThis from clusters.yaml. ArgoCD is not deploying my app because of this line: value: '{{ index .Clusters.values "nameOverride" }}' ArgoCD ApplicationSet controller logs throw some nonsense errors. I am sure I identified the problem correctly, because it works as expected if I just hardcore the string. What exactly is the issue with the way I am trying to pull that value?
If you are using the standard Docker Hub postgres image and you are just trying to create a database, the easiest thing to do is to use its environment-variable settings image: postgres:15 env: - name: POSTGRES_USER value: <user> - name: POSTGRES_DB value: my-db For these settings you do not need a separate ConfigMap. In the context of a Helm chart, if you want to make these values configurable, you can value: {{ .Values.postgres.db | default "my-db" }} which will use a value from the configuration (values.yaml, helm install --set option, additional helm install -f files) postgres: db: database-name or my-db if it's not set. If you do specifically want to use an init script, but won't know the database user name until deploy time, you can ask Helm to inject this into the init script. If you name the script *.sql then the image will run it under psql for you so you don't need the credentials. apiVersion: v1 kind: ConfigMap metadata: name: {{ .Values.postgres.configmap }} data: init_script.sql: |+ CREATE DATABASE my-db; GRANT ALL PRIVILEGES ON DATABASE my-db to {{ .Values.postgres.user }} ... Helm will substitute the templated value before creating the ConfigMap. In all of these cases, note that the initialization scripts and environment variables are only considered the very first time the database is used, if the corresponding storage is empty. If you change one of the Helm values, it will change the environment-variable or ConfigMap setting, but that won't actually cause a change in the database. Practically, my experience has been that the best approach here is to use the environment variables to create a database and user, and then to use your application framework's database-migration system to actually create the tables. You'll need the migrations in other contexts so it's good to have a path to run them, and they're useful if the database schema ever changes; you can't just re-run the /docker-entrypoint-initdb.d scripts.
I've setup a PostgreSQL database with Helm Chart using this ConfigMap, that contains the initial script which allows to create the databases and schemas I need: apiVersion: v1 kind: ConfigMap metadata: name: {{ .Values.postgres.configmap }} namespace: {{ .Release.Namespace }} data: init_script.sh: |+ #!/bin/bash psql -v <<-EOSQL CREATE DATABASE my-db; GRANT ALL PRIVILEGES ON DATABASE my-db to <user> ... I want to add an environmental variable for the user and one for the database name in order to: use them when I connect to psql use the user name to give the correct rights to access the Database created with the script. I don't know how to set them.
I'm pretty sure that you created your cluster with the default settings. If you did that, you used the Compute Engine default parameters that you can see here: Default service account and access scope. In that case, it's expected that you have no access: the minimal scope does not allow Cloud SQL access. To solve that, you have to select either a user-managed service account (the best solution) or keep the default service account but allow full scopes access. There are 2 ways to enforce that: either delete and recreate your cluster correctly, or create another node pool with the correct parameters.
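For the second option, a hedged example of adding a node pool with the broader scope (the pool name is a placeholder; the cluster name, zone and machine type are taken from the question):

gcloud container node-pools create sql-enabled-pool \
  --cluster questy-java-cluster \
  --zone us-central1-c \
  --num-nodes 2 \
  --machine-type n1-standard-1 \
  --scopes cloud-platform
# better: pass --service-account <your-sa>@<project>.iam.gserviceaccount.com pointing at a
# user-managed service account that has the Cloud SQL Client role, instead of widening scopes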
I am following this tutorial: Connect to Cloud SQL for MySQL from Google Kubernetes Engine. I have created a cluster. I have created a docker image in the repository. I have created a database. I am able to run my application outside of Kubernetes and it connects to the database. But after deploying application, pods are not in a valid state and I see in the logs of the pod error: Caused by: java.lang.RuntimeException: [quizdev:us-central1:my-instance] Failed to update metadata for Cloud SQL instance. ...[na:na] Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden GET https://sqladmin.googleapis.com/sql/v1beta4/projects/quizdev/instances/my-instance/connectSettings { "code": 403, "details": [ { "@type": "type.googleapis.com/google.rpc.ErrorInfo", "reason": "ACCESS_TOKEN_SCOPE_INSUFFICIENT", "domain": "googleapis.com", "metadata": { "service": "sqladmin.googleapis.com", "method": "google.cloud.sql.v1beta4.SqlConnectService.GetConnectSettings" } } ], "errors": [ { "domain": "global", "message": "Insufficient Permission", "reason": "insufficientPermissions" } ], "message": "Request had insufficient authentication scopes.", "status": "PERMISSION_DENIED" } at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:146) ~[google-api-client-2.2.0.jar:2.2.0] ... 2023-06-14T06:57:49.508Z WARN 1 --- [ main] o.h.e.j.e.i.JdbcEnvironmentInitiator : HHH000342: Could not obtain connection to query metadata What could be the issue? What can I check to diagnose the problem? Edit I have created the cluster using this command: gcloud container clusters create questy-java-cluster \ --num-nodes 2 \ --machine-type n1-standard-1 \ --zone us-central1-c
If the servers are independent (you don't want to expose your production system's private key on a development machine), you want different certificates, and the question becomes "How do you use TLS on your development / testing infrastructure?". This is a widely discussed question. E.g.: https://web.dev/how-to-use-local-https/ https://security.stackexchange.com/questions/121163/how-do-i-run-proper-https-on-an-internal-network If your testing infrastructure is accessible from the Internet for a verification method, you can set up certbot similarly to a production machine, but you'd have to acquire a domain/subdomain for this. Since the reverse proxy's config and certificates are likely not part of your software, you can treat them as infrastructure for your tests and just leave them in place between tests. Treating infrastructure concerns independently of your application improves modularity and therefore maintainability.
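For the purely local case, one common approach is a locally-trusted certificate from a tool such as mkcert instead of Let's Encrypt; a rough sketch (the hostnames are placeholders, and the generated file names may differ on your machine):

mkcert -install                  # one-time: create and trust a local CA on your machine
mkcert localhost 127.0.0.1 ::1   # issue a cert/key pair for the names you use locally
# then mount the generated .pem files into the nginx container of your development
# compose file and reference them with ssl_certificate / ssl_certificate_key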
Im using nginx as a reverse proxy to serve an Express.js backend and a Vue.js frontend. The whole application runs in two Docker Compose images, one for development, and one for production. I'd like to be able to test the production image locally with HTTPS (especially because Vue's PWA framework depends on HTTPS to properly register service workers). The issue is that to configure a let's encrypt certificate with certbot, I need to be running on a server associated with my domain through DNS. What is the correct way to set up a CI/CD workflow where I can test HTTPS locally and also push it to my production server? Do I have to buy a certificate from a different CA and update it manually? I tried using certbot, but it will not work since I am not on the server that is associated with mine through DNS.
Experiencing the same issue while using kube-hetzner as you, I looked at the configuration file and found the culprit: by default, outbound traffic is filtered, except for a few popular ports like HTTP or HTTPS. There are at least 2 solutions: you set the variable restrict_outbound_traffic to false: https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner/blob/master/kube.tf.example#L395 You add a firewall rule to allow outbound traffic on port 27017. Once done, the connection to a Mongo Atlas cluster is working perfectly!
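In kube.tf the first option is a one-line change (the variable name comes straight from the linked example file):

restrict_outbound_traffic = false

# alternatively, keep the restriction and add an explicit outbound firewall rule for
# TCP 27017 (the MongoDB port); the exact syntax depends on how you manage the
# cluster's Hetzner firewall.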
I set up a Kubernetes Cluster on Hetzner following theses steps: https://github.com/kube-hetzner/terraform-hcloud-kube-hetzner Client Version: v1.26.3 Kustomize Version: v4.5.7 Server Version: v1.26.4+k3s1 Mongosh Version: 1.8.1 I am unable to connect to either my own mongodb server (docker deployment) or a hosted one on mongodb.net: root@trustsigner-frontend-deployment-59644b6b55-pqgmm:/usr/share/nginx/html# mongosh mongodb+srv://<removed-user>:<removed-password>@cluster0.fdofntt.mongodb.net/test Current Mongosh Log ID: 6447807561ebcee04b00165d Connecting to: mongodb+srv://<credentials>@cluster0.fdofntt.mongodb.net/test?appName=mongosh+1.8.1 MongoServerSelectionError: Server selection timed out after 30000 ms Same error when using my own one with mongodb:// instead of mongodb+srv//. But surprisingly it is possible to use the same connection string with mongodb compass or mongosh that is installed on my machine (not in a kubernetes pod) Ping to 8.8.8.8 or to any other side works and I can fetch via curl. But no chance to establish a mongodb connection...